Wednesday, February 8, 2012

Neuroethics Journal Club: Autonomous Linguini

How responsible are people for their decisions? Can neuroscience help us answer that question? If not, can a Pixar movie about a cooking rat help clear things up? If you’re stumped by these questions, you may have missed the most recent meeting of the Emory Neuroethics Program’s journal club. Those of us who were there took part in a discussion led by Jason Shepard, a graduate student in the Wolff lab and Neuroethics Program scholar. You can thank him for the reference to the Pixar film Ratatouille. He used the plot of the movie to get us talking about the paper we read, “How the neuroscience of decision making informs our conception of autonomy,” by Gidon Felsen and Peter Reiner. Jason got us talking so much that we kept him from making all the points he wanted to make about the paper—look for a blog post from him soon. To introduce the paper, I’ll recap his “Autonomous Linguini” thought experiment:

The movie Ratatouille tells the story of Alfredo Linguini, who works as a lowly garbage boy at a restaurant until his run-in with the rat Remy. When Linguini spills a pot of soup, Remy saves his hide by helping him re-create it—luckily, Remy is not just a rat but also a gifted chef. Customers love the soup, and suddenly Linguini finds himself a cook-in-training. How will the hapless Linguini survive in the kitchen? Naturally, he will let Remy control him, like a marionette, by yanking on his hair. Here’s the question: is Linguini making autonomous decisions in the kitchen when he cedes control to Remy?

Autonomy is the ability to make decisions rationally and free from outside influence. While Linguini's autonomy may not matter much to you, the extent to which people are responsible for their decisions concerns philosophers, bioethicists, and lawyers alike. Some might argue that if we knew how the brain arrives at decisions, then we could resolve all sorts of long-standing ethical and moral debates. For example, knowing how the brain decides could help us choose between systems of government. Maybe we need a nanny state, because people have to be saved from their poorly designed brains, or maybe we need a night watchman state, because people’s brains make great decisions once freed from the feds.

Now that I've introduced autonomy, let's get to the article we discussed in journal club, in which Felsen and Reiner attempt to do two things: (1) present a "standard model" of autonomous decisions, and (2) discuss whether evidence from neuroscience supports the standard model. While I don't think anyone at this meeting of the journal club had a problem with the idea that neuroscience can inform the debate on autonomy, many took issue with how Felsen and Reiner fleshed out that idea.

The authors identify three criteria that a decision must meet to be autonomous. Let me focus on one, just to give an idea of how their article was received. Their first criterion states that, to be autonomous, a decision must be consistent with the “’higher order’ beliefs and desires” of an individual. The phrase “higher order … desires” refers to the idea, found in the writings of philosopher Harry Frankfurt, that desires can be ranked in a hierarchy. “[P]hysiological needs” like hunger and “reflexive emotions” are first-order desires, and desires about those desires are second-order or higher. Consider Felsen and Reiner’s example: you are in the lunch line at a cafeteria, facing a decision between salad and cake. Your desire to eat is first-order; your desire to eat salad because you don’t want to gain weight is higher order.

What does neuroscience have to say about our ability to rank desires and choose between them? Plenty, according to Felsen and Reiner. They claim that the brain, like desires, has a hierarchical organization, so that lower-order desires map onto the brainstem and higher-order desires map onto the cortex. Decisions can be autonomous, then, because the prefrontal cortex can modulate activity in areas involved with lower-order desires and tip the scales in favor of one lower-order desire over another. Felsen and Reiner acknowledge—in a footnote—that there are other models of how the brain makes decisions, in which the prefrontal cortex is not in control, but then state that the “specific brain regions constituting each level of the hierarchy are not critical for the purposes of our discussion.”

Unfortunately, if the authors want to constrain the debate on autonomy with neuroscience, the details do matter. They’re right that neuroanatomy suggests the brain is hierarchically organized, but whether it acts that way in real time is another question. More on that below. They are not right when they state that the brainstem evolved before other brain areas. As someone with a background in brain evolution, and with aspirations of becoming a crotchety old man, I have to point this out. To the contrary, we know that all of the major brain structures, from brainstem to cortex, are present in almost all extant vertebrates, in one form or another. I’ll muzzle my inner old man for a second and assume that Felsen and Reiner wanted to say something like, “the fact that during evolution the human brain underwent a massive expansion in the size of the cortex has allowed people to have more autonomy.” That statement might let them hold on to their hierarchy. However, it’s probably more useful to think of brains as highly interdependent networks than as hierarchies. Different nodes in the network take control at different times. When we’re deciding whether to cut someone off on the highway, we might be relying on our basal ganglia. This description of decisions brings to mind Daniel Dennett’s “multiple drafts” model of consciousness, as Steve Potter pointed out at the journal club meeting. Similarly, different regions of the prefrontal cortex might process different inputs to help modulate between desires. During our discussion, neuroscience graduate student Kathy Reding brought up the hypothesis that the ventromedial part of the prefrontal cortex processes somatic signals, i.e., sympathetic responses that let us tag certain events emotionally. (Felsen and Reiner also discuss the somatic marker hypothesis as it relates to their second criterion.)

If it’s just a matter of figuring out how we choose between lower-order desires, though, then Felsen and Reiner might respond that I haven’t presented a problem for their first criterion. They already said that the details of the hierarchy don’t matter. I would say that we do have a problem, though, if it takes the whole network to make an autonomous decision. The problem is that we’re going to have to do a lot more neurosciencing before we can decide (hello, irony) whether certain decisions are autonomous. Currently, the American legal system’s concept of which decisions are autonomous depends on the mental state of the person who made them, as we talked about at the journal club meeting. If you drink a bottle of Jack Daniels and then sign a contract, you can’t be held responsible, but if you drink the same bottle, get into your car, and then drive that car through the plate glass windows of a bank, you can be held responsible. These cases seem like no-brainers (pardon the pun), but the legal landscape will likely turn treacherous as we learn more about the brain. What if I’m a recovering alcoholic with enlarged cerebral ventricles and I drive my car through the plate glass windows of a bank? How will we handle decisions made by people with brain injuries? With schizophrenia?

Few of us can pretend to have a command of both the nuances of neuroscience and the philosophy of autonomy, but Felsen and Reiner have taken the first steps toward reconciling the two. Maybe we can hash out the details over a plate of autonomous linguini (sorry, my family is genetically predisposed towards puns).

The Neuroethics Program journal club will meet again on February 22nd from 12:30-1:30pm. Neuroethics Program Associate Dr. Gillian Hue will facilitate a discussion of "Examining the Effects of Sleep Deprivation on Work-place Deviance". If you’d like to get in on the discussion of the pons, politics, or pasta, be sure to RSVP by sending an e-mail to
--David Nicholson
Emory Neuroscience Graduate Student, Sober lab

Want to cite this post?
Nicholson, D. (2012). Neuroethics Journal Club: Autonomous Linguini. The Neuroethics Blog. Retrieved on , from


Riley Zeller-Townson said...

Thanks, David, from those of us who couldn't attend!

I have to take issue with the definition of autonomy that you present. The idea of making a decision without outside influence sounds like building a robot with no sensors. That's the extreme extension, of course, but you do get into this hazy area where the line between autonomous and mechanical disappears. Since the word autonomy is frightening to me, I might try to look for concreteness in the application: law, for example. Like the spectacular little survey available to us above, how do we decide who to blame for particular events- brain tumors, rats, Linguini, or God? Certainly, if Linguini gives up control of his body to the rat, who then uses Linguini's opposable thumbs to commit acts of unspeakable c(r)ookery, Linguini would be thrown in prison (as the rat, we assume, is not respected as a moral creature in French law, and could not receive our wrath beyond the execution of a common household pest [which is hardly as satisfying a response as throwing a human in prison], and we would always be suspicious that perhaps Linguini was making up a fantastic excuse).

To be a bit more critical- I think that matters of 'who is in charge' are less important than those of 'how well was it done.' It doesn't matter who crashed your car into the bank, or what psychological state they were in, if throwing you in prison is the most effective way to make sure it doesn't happen again (which it may or may not be). I think the neuroscience comes in most strongly when we get to talking about what the most effective next step is- was this undesired event caused by a tumor? We should remove the tumor. Was it caused by a prefrontal cortex that has been structured to prioritize personal pleasure over the well-being of others? Perhaps we need to reprogram the cortex with the best tools we have.

All that said, the word autonomous gets used because it is useful (even if I personally find it frightening). My own use might be- capable of making rational decisions, without the aid of another intelligent system- but that's nearly recursive, and incredibly difficult to apply to neuroscience questions where the pieces involved may or may not be intelligent. Thoughts?

Jason Shepard said...

Hi Riley,

I like your "extreme extension" of the no-outside-influence requirement. At minimum, though, some sorts of external influence would be sufficient to compromise autonomy. For example, coercive force and intentional manipulation are often seen as outside influences that would undermine autonomy. Felsen and Reiner specifically claimed that "covert influences" undermine autonomy. They mentioned priming effects as an example of what they had in mind when they were talking about "covert influences." (I would actually disagree with Felsen and Reiner that covert influences such as priming effects necessarily undermine autonomy ...)

But I am a little confused by your "concrete application" of autonomy. It actually _does_ matter who crashed your car into the bank and it _does_ matter what psychological state they were in. Almost all crimes in American penal code (and many other law codes) have a mens rea condition. The mens rea conditions stipulate the specific psychological states required to be convicted of certain crimes. The law very much does care "who is in charge" and whether whoever was in charge was in such-and-such psychological state.

Riley Zeller-Townson said...

Why do you think priming doesn't necessarily undermine autonomy?

Re: concrete application: yes, I was fantasizing about the way the world works in my imaginary kingdom where laws make sense (where mens rea is an outdated concept, along with cars). If I'm passing laws in this world, then I'll see what I can do to adapt stated principles to it- though I suppose this doesn't really change the end results, only the vocabulary involved, and the same open questions we have remain- the only value I'm adding (?) is that I think the concept of autonomy isn't overly useful in this situation.
Let's take the drunk-driving-into-a-bank example. What caused it? Perhaps some PFC circuit initiated a train of events where the risk of the collision increased (by deciding to drink in the first place), the alcohol decreased the PFC's reasoning capabilities (there's a rational soul in that PFC!), a lost cellphone pushed the PFC to start driving rather than staying the night in a sketchy part of town, and a dog running across the road late at night in the rain sealed the deal. How do we prevent this from happening again? We only have control over the PFC in this case, so we assign responsibility to it and reprogram it using the traditional methods of prison, which also helps to reprogram nearby PFCs to avoid such behavior in the future. (Please forgive me, I work with cells, not systems; I'm fairly certain this will work with any neural structure where the best programming method we have is prison- replace PFC with your favorite.)
With the example of the man who signs a contract while drunk, we have other options to prevent this from happening again. The causes were the drunk thinking it was a good idea to sign the document, and the presumed non-drunk convincing them to sign the document. This is assumed to be important to both parties. Gut reactions tell us that drunky is less at fault than non-drunky (but they might share some of the blame). Why? Non-drunky is more likely to know that he shouldn't do this. If we reprogram non-drunky (and use him as an example), we get a better shot at preventing this from happening again. Drunky's PFC is exerting less relative control (though perhaps the same amount of absolute control) than in the first example, so we let him off a bit.
I don't know; perhaps relative control is a good stand-in for autonomy (it scares me a little less, at least).

David Nicholson said...

Thanks, Riley and Jason, for your comments. I'll just reply to them all at once. If I understand you correctly, Riley, you're saying "why bother with this autonomy thing if we can just ask whether the brain's working correctly?" My guess is that, as far as their "outside influence" criterion is concerned, Felsen and Reiner would agree with you. They're more concerned with ways that *normal* brain function could undercut autonomy. That's why they talk about priming, which bothers me for a whole lot of reasons I won't get into. Jason, why do you feel like priming's not a problem for autonomy? With regard to outside influence, Felsen and Reiner also talk about "framing" effects: changes in the decisions people make due to the way choices are presented to them. Apparently, framing effects show up in studies of medical decisions made by patients--that's according to John Banja's comments at journal club. The problem there, though, is with the way doctors present patients with their choices. So, you're right, Riley, that the malfunctioning PFC in such a case might be the doctors', and maybe the way to deal with their outside influence on our decisions is to penalize doctors for framing things in a way that misleads our crappily designed brains. So, in general, I think Felsen and Reiner want to know if our (healthy, normally functioning) brains compromise our autonomy. That's why they bring up Clark and Chalmers' extended mind hypothesis. They're saying: "omg what if my iPhone is part of my mind ... and therefore controls my thoughts?!" Considering the amount of time some people spend mentally composing text messages, I might be inclined to agree with them.

Peter B. Reiner said...

I want to thank David and everyone at the Emory neuroethics group for discussing our paper in journal club. The comments made here, similar to the ones that were published along with the paper, are remarkably astute, and I find that I am in agreement with much of what is said.
When Gidon and I set out on this adventure, we were attempting to marry the extant neurophysiological data to a standard-view version of autonomy, one that accords not only with bioethical practice but also with the folk views of many (knowing full well that there are philosophers aplenty who find that view wanting).
The main issue that David raises is whether the details of the neuroanatomical arrangement of decision-making matter. There is no doubt that details matter, but the relevant question is whether they matter for the purposes of the discussion that we wished to pursue. I might cede to David the likelihood that decisions (whatever they may be) may be made in different parts of the brain at different times in different contexts. And I might even give up some ground on the veracity of the convenient parallel between the folk view of hierarchical decision making in the brain and in the standard view of autonomy.
Where I would dig in my heels is with the following: irrespective of the neuroanatomical details of how and where decisions are made, the data from a variety of sources (Dehaene S, Changeux J-P. Experimental and theoretical approaches to conscious processing. Neuron. 2011;70(2):200–227) suggests that decisions are often considerably less autonomous than we may wish them to be. Whether the details of the framework that we have described – first-order desires competing for primacy to be recognized as dominating the decision-making network – or some other system rules the day is a question that, as David properly suggests, will require a lot more ‘neurosciencing’. But the data is sufficient to conclude that some decisions, perhaps even many decisions, are influenced in myriad ways by events around us that are opaque to conscious deliberation. And that matters.

By the way, I love the concept of autonomous Linguini. If I may, I shall use it with my students.

David Nicholson said...

Thanks for taking the time to respond, Peter. It's good to hear you enjoyed Jason's Autonomous Linguini thought experiment as much as we did. With regard to conscious processing, I can't pretend to be an expert (I guess I should read that review you referenced), but the little I do know gives me no reason to argue with your main point. Processes that we're not aware of probably *are* influencing our conscious decisions. The question then becomes: how much does it matter? Maybe more 'neurosciencing' will lead us to focus on whether brains are making good decisions instead of worrying about how consciously those decisions are made, as Riley seemed to be saying in his comments. Or maybe that's the best response I can come up with while trying to get things done in the lab. Either way, you and Gidon have given us a lot to think about (consciously). Thanks again.