Tuesday, March 7, 2017

Neuroeconomics and Reinforcement Learning: The Concept of Value in the Neuroscience of Morals

By Julia Haas

Julia Haas is an Assistant Professor in the Department of Philosophy at Rhodes College. Her research focuses on theories of valuation and choice.

Imagine a shopper named Barbara in the pasta aisle of her local market. Just as she reaches for her favorite brand of pasta, she remembers that one of the company's senior executives made a homophobic statement. What should she do? She likes the brand's affordability and flavor but prefers to buy from companies that support LGBTQ communities. Barbara then notices that a typically more expensive brand of pasta is on sale and buys a package of that instead. Notably, she doesn't decide what brand of pasta she will buy in the future.

Barbara’s deliberation reflects a common form of human choice. It also raises a number of questions for moral psychological theories of normative cognition. How do human beings make choices involving normative dimensions? Why do normative principles affect individuals differently at different times? And where does the feeling that so often accompanies normative choices, namely that something is just right or just wrong, come from? In this post, I canvass two novel neuroethical approaches to these questions, and highlight their competing notions of value. I argue that one of the most pressing questions theoretical neuroethicists will face in the coming decade concerns how to reconcile the reinforcement learning-based and neuroeconomics-based conceptions of value.

One popular approach to the problem of normative cognition has come from a growing interest in morally-oriented computational neuroscience. In particular, philosophers and cognitive neuroscientists have turned to an area of research known as reinforcement learning (RL), which studies how agents learn through interactions with their environments, to try and understand how moral agents interact in social situations and learn to respond to them accordingly. RL research suggests that human choice depends on several distinct decision systems, where each decision system relies on a different computational algorithm to calculate 'value.' Roughly, value is calculated in terms of how much reward is associated with certain actions over time. Learned value assignments then underwrite choice and, where applicable, action selection.
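The core RL idea that value is "calculated in terms of how much reward is associated with certain actions over time" can be made concrete with a standard incremental update rule. The sketch below is purely illustrative: the options, rewards, and learning rate are invented for Barbara's pasta case, not drawn from any of the studies discussed here.

```python
# Minimal, illustrative temporal-difference-style value learning.
# The options, rewards, and learning rate are hypothetical; the point
# is only the generic RL update rule: V <- V + alpha * (reward - V).

def td_update(value, reward, alpha=0.1):
    """Nudge a stored value estimate toward the observed reward."""
    return value + alpha * (reward - value)

values = {"brand_A": 0.0, "brand_B": 0.0}

# Brand A has been repeatedly rewarding (taste, price) over many trips...
for _ in range(50):
    values["brand_A"] = td_update(values["brand_A"], reward=1.0)

# ...while brand B has been tried only a few times.
for _ in range(3):
    values["brand_B"] = td_update(values["brand_B"], reward=1.0)

# Learned value assignments then underwrite choice:
choice = max(values, key=values.get)
print(choice, {k: round(v, 3) for k, v in values.items()})
```

On this picture, the agent does not recompute the merits of each brand from scratch at the shelf; it simply consults values accumulated from past rewards.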

The trolley problem, image courtesy of Wikimedia Commons.
Perhaps the most prominent RL theory of normative choice, presented by psychologist Fiery Cushman (2015), proposes that moral behaviors depend on one of the three decision systems typically identified in RL, namely the habit-based system. For example, Cushman suggests, American tourists frequently continue to tip in restaurants abroad, even when there is no local custom for doing so (2015, 59). One of the advantages of Cushman's view is that it may explain why participants provide surprisingly inconsistent responses to what is known as the trolley problem.

Typically, in switch versions of the trolley problem, people support the killing of a single individual in order to save five others, but find it difficult to endorse the harm of one agent in footbridge versions of the problem, where the harm is more ‘hands on.’ Since a purely numerical assessment favors the saving of five people rather than one in both cases, Cushman reasons, people’s tendency to resist harming the single agent in the footbridge version is “the consequence of negative value assigned intrinsically to an action: direct, physical harm” (2015, 59). That is, Cushman suggests, participants’ responses to the footbridge version of the dilemma may be underwritten by the model-free decision-system: since directly harming others has reliably elicited punishments in the past, this option represents a bad state-action pair, and leads people to reject it as an appropriate course of action.
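Cushman's model-free proposal can be caricatured as state–action value learning: an action class that has reliably elicited punishment acquires a negative cached value, and that cached value drives rejection regardless of the current payoff. The following sketch is a hypothetical illustration of that dynamic; the actions, punishment values, and parameters are mine, not Cushman's.

```python
# Hypothetical model-free (Q-learning-style) sketch of the footbridge
# intuition: "direct harm" has reliably been punished in the agent's
# learning history, so its cached state-action value is negative.
# All numbers here are invented for illustration.

ALPHA = 0.2  # learning rate (illustrative)

q = {"direct_harm": 0.0, "refrain": 0.0}

# A learning history in which direct physical harm elicited punishment:
for _ in range(30):
    q["direct_harm"] += ALPHA * (-1.0 - q["direct_harm"])  # punished
    q["refrain"] += ALPHA * (0.0 - q["refrain"])            # neutral

# At choice time, the model-free system consults cached values only;
# it never runs the five-versus-one arithmetic.
preferred = max(q, key=q.get)
print(preferred, {k: round(v, 3) for k, v in q.items()})
```

The key design feature is that the five lives at stake never enter the computation: the model-free system is blind to them, which is why its verdict can diverge from a purely numerical assessment.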

A second approach to Barbara’s example comes from a branch of behavioral economics known as neuroeconomics. Like their RL-research counterparts, neuroeconomists employ the concept of ‘value’ to help explain how choices between multi-faceted alternatives are possible. In the context of neuroeconomics, however, value specifically refers to the ‘worth’ of a given commodity or action as computed by the agent; that is, it refers to subjective value. Correspondingly, within the framework of neuroeconomic research, understanding what takes place in choice amounts to uncovering how humans and other animals compute subjective value.

Extending this approach to the problem of normative choice, Shenhav and Greene (2010) asked participants undergoing fMRI to imagine scenarios in which they could save a group of individuals at the expense of leaving a single individual to die. For example, they invited participants to evaluate the moral acceptability of saving a group of skydivers with faulty parachutes at the expense of letting a single skydiver with a faulty parachute die. The number of skydivers in the group and the probability of the group’s survival varied from trial to trial (see Figure 1; Shenhav and Greene 2010, supplemental materials). Consistent with traditional economic and utilitarian models, they found that many of the study’s participants found it morally acceptable to sacrifice the life of one individual in order to prevent a greater loss of life. Interestingly, Shenhav and Greene also found that participants’ ratings of moral acceptability were correlated with degrees of activation in their posterior cingulate cortex and ventromedial prefrontal and medial orbitofrontal cortices, i.e., with brain activations relatively similar to those seen in instances of valuing physical goods (2010, 671, Table 1 (expected value)).

Figure 1: Shenhav and Greene argue that "Average moral acceptability ratings across trial value space reveal a graded behavioral sensitivity to 'expected moral value'" (669).
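The "expected moral value" manipulation is expected value applied to lives: the number of people saved, weighted by the probability of saving them. A toy illustration of that computation follows; the trial values are invented for the example and are not the study's actual stimuli.

```python
# Toy expected-value computation in the spirit of Shenhav and Greene's
# design: lives saved weighted by the probability of success. The
# (group size, probability) pairs below are invented for illustration.

def expected_moral_value(n_saved, p_success):
    """Expected number of lives saved if the agent intervenes."""
    return n_saved * p_success

# Sacrificing one skydiver to save a group, across hypothetical trials:
trials = [(5, 0.9), (5, 0.2), (20, 0.5), (2, 0.99)]

for n, p in trials:
    ev = expected_moral_value(n, p)
    # A simple utilitarian decision rule: intervene when the expected
    # lives saved exceed the one life sacrificed.
    verdict = "acceptable" if ev > 1 else "unacceptable"
    print(f"save {n} at p={p}: EV={ev:.2f} -> {verdict}")
```

The study's graded behavioral result is what this rule predicts: acceptability ratings track the product of group size and survival probability, not either factor alone.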
The RL and neuroeconomic approaches thus seem to overlap in several important ways. Both theories take value as a fundamental unit of choice. Both traditions also recognize that neurons in the orbitofrontal cortex (OFC) are responsible for encoding value in the brain (Padoa-Schioppa and Schoenbaum 2015). But the views diverge when it comes to characterizing when and how value is computed. In RL, value is something that is often learned gradually over time; by contrast, in neuroeconomics, it is suggested that subjective value is calculated online, i.e., at the time of choice. Consequently, it is not clear whether and how RL's algorithms can be used to model subjective valuation in neuroeconomic choice. This is a shame, because neuroeconomics could benefit from RL's strong computational foundations, and RL could benefit from the many incisive behavioral and neuroscientific experimental paradigms on offer in neuroeconomic research.

Increasingly, researchers in the two independent fields recognize the need to collaborate and find common conceptual and empirical ground. These kinds of conversations would benefit the field of neuroethics, too. Both of these intersecting disciplines will help us make sense of Barbara’s case in the years to come. Such collaboration would help us gain a better understanding of the brain’s role in our moral experiences: How do my past learning experiences and present choice environment influence my future moral choices? What is the difference between something that just ‘feels wrong’ and something that has good reasons for being thought of as immoral? And, perhaps most importantly, how can I shape my own neural moral ‘values,’ as well as the neural moral values of those around me, to try and make more consistent decisions overall? The concept of value may turn out to be the basic unit of the neuroscience of morality.

Want to cite this post?

Haas, J. (2017). Neuroeconomics and Reinforcement Learning: The Concept of Value in the Neuroscience of Morals. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/03/neuroeconomics-and-reinforcement_7.html
