
Joshua Greene: On Neuro-Improvement, Neuroenhancement, and Chekhov

In their paper on the neuroenhancement of love and marriage, Savulescu and Sandberg argue that “there is no morally relevant difference between marriage therapy, a massage, a glass of wine, a fancy pink, steamy potion and a pill.” [1] But is this quite right? At a recent Emory Neuroethics Journal Club, participants discussed whether a distinction might be drawn between attending couples’ counseling and being exposed to oxytocin and, more broadly, whether there are differences between ‘traditional,’ conscious improvements and more immediate, pharmacological neuroenhancements. How should we go about comparing and contrasting these two processes?

Since this issue has important implications for research, treatment, and education, I invited Dr. Joshua Greene to weigh in on the debate for the Neuroethics Blog. Dr. Greene is the John and Ruth Hazel Associate Professor of the Social Sciences in the Department of Psychology at Harvard University, and widely recognized for his experimental work on moral judgments and decision-making. His research focuses on the affective and cognitive processes constituting decision-making, and he is a strong proponent of consequentialism as a means for achieving rational moral outcomes. Recently, Dr. Greene presented a paper entitled “Beyond Point-And-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics” at the New York University 2012 Bioethics Conference, and took part in a panel discussion devoted to one of the conference’s two main themes: ‘Can Moral Behavior be Improved or Enhanced?’ To open up the discussion, I asked Dr. Greene about Savulescu and Sandberg’s view on improvement and enhancement.

NEB: As a philosopher and a scientist, would you argue that there are any essential or morally relevant differences between ‘neuro-improvement’ and neuroenhancement procedures?

JG: In the end, all therapies—talking, massage, pills, injections—ultimately affect behavior and experience by affecting the brain. Thus, there is no deep metaphysical difference between traditional forms of therapy and high-tech ones. But, given our limited knowledge, the risks are different. The odds that you’re going to produce a totally unforeseen extreme harm by talking to someone about her marriage are vanishingly small. But a new pill is something for which our brains and our bodies may be unprepared.

NEB: Would you argue that either of these mechanisms is intrinsically preferable for the course of our human development, either as individuals or as a species?

JG: In principle, there’s no difference. You can imagine a pill that does to the brain exactly what an hour of talk therapy does.  But, in practice, and if all else is equal, we should be more cautious with high-tech stuff. These things have yet to be vetted by evolution.

In this way, Dr. Greene agrees with Savulescu and Sandberg that there is no essential difference between ‘neuro-improvement’ and neuroenhancement. But he does emphasize that there are practical considerations pertaining to enhancement which should be taken into account. I understand this to mean that, at least for the time being, these practical factors are substantial enough to keep us from straightforwardly identifying conscious self-discipline with neuroenhancement technologies. At the same time, these principles do suggest that, once we gain a better understanding of the neural and biochemical mechanisms involved, the boundaries between ‘neuro-improvement’ and neuroenhancement will become increasingly blurred.

Next, I went on to ask Dr. Greene about the applications of improvement and enhancement in more specific contexts. Although ‘neuro-improvement’ and neuroenhancement may both be understood as forms of ‘compensation’ – i.e., as ways of making up for our sometimes less-than-ideal human tendencies – certain kinds of human behaviors and judgments are more complex than others. Correspondingly, some shortcomings will also be harder to regulate than others, and it seems likely that we will find moral intuitions and moral actions on the more challenging end of the spectrum. I wondered whether the difficult case of morality could shed any light on the future of improvement and/or enhancement.

NEB: In “The Secret Joke of Kant’s Soul,” you clearly and persuasively recommend consequentialism as a way to offset our affective moral responses, and thereby pursue more rationally-informed ends (Greene, 2007) [2]. This seems to imply that we ought to change certain aspects of how we behave and even, in one sense, to change certain aspects of ‘who we are’ (Greene, 2002). If they were shown to be safe and effective, the preemptive enhancement of some of our basic intuitions and reactions could theoretically enable us to ‘bypass’ our need for conscious, rationally-guided self-correction. If they could realistically be developed, would you endorse these kinds of enhancements?

JG: Again, in principle, there could be a safe and effective pill for any job that can be done by other means.  But I don’t see on the horizon any technological fix to replace old-fashioned talking and thinking and reasoning and arguing.

I have to admit that, as a student in philosophy, my first thought was, ‘Philosophy, represent!’ But, of course, there is a lot more to Dr. Greene’s suggestion than that. If I understand him correctly, we are currently faced with a unique window for investigating scientifically-informed techniques for moral improvement: in the past, philosophy sought to improve moral judgment but was not sufficiently informed by the sciences, and in the future, we may possess enough knowledge to simply bypass improvement and enhance our moral capabilities. But for now, we are just beginning to gain enough scientific knowledge to begin improving our moral lives, and we will likely spend quite a long time at this stage before (and if) we genuinely master moral enhancement.

Happily, a growing number of researchers are working to map out the relationship between neuroscience and moral improvement. Among them, Narvaez and Vaydich (2008) have sought to develop practical programs for fostering moral development in children and young adults [3]. They further argue that these kinds of programs should be deployed in schools, youth organizations and other social institutions, and I asked Dr. Greene what he thought of this approach.

NEB: Throughout much of your work, you are quite optimistic about the possibility of improving some of the ways we think. Do you believe scientifically-informed programs promoting moral development should be incorporated into the educational curriculum?

JG: This sounds good to me.  But, of course, it all depends on the details.  Mostly, I think that we should learn about the science of human nature from a young age.  One of my favorite quotes (gendered language aside) comes from Anton Chekhov, by way of Steven Pinker: “Man will become better when you show him what he is like.” I think the way to promote moral development is to promote self-knowledge, grounded in science.

NEB: What specific educational tools and programs would you like to see neuroscientists developing over the next twenty years?

JG: Behavioral scientists (psychologists, anthropologists, economists, neuroscientists, geneticists, etc.) have learned a lot of surprising and fascinating things about human nature. I think that these lessons need to be translated into educational programs suitable for children and adolescents. In particular, students [should] be learning the basics of experimental psychology around the same time that they start to learn the basics of physics, chemistry, and biology.

I think this last proposal is especially encouraging insofar as it is both quite possible and potentially transformative. And as Dr. Greene points out, these kinds of changes could form the foundation for a more scientifically-informed theory of morality.

–Julia Haas

Emory Philosophy Graduate Student

Want to cite this post?

Haas, J. (2012). Joshua Greene: On Neuro-Improvement, Neuroenhancement, and Chekhov. The Neuroethics Blog.


[1] Savulescu, J. & Sandberg, A. (2008) The neuroenhancement of love and marriage: the chemicals between us. Neuroethics 1:31-44.

[2] Greene, J. D. (2007). The secret joke of Kant’s soul, in Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development, W. Sinnott-Armstrong, Ed., MIT Press, Cambridge, MA.

See also, Greene, J.D. (2003) From neural “is” to moral “ought”: what are the moral implications of neuroscientific moral psychology?  Nature Reviews Neuroscience, Vol. 4, 847-850; Greene, J. D., Cohen J. D. (2004) For the law, neuroscience changes nothing and everything. Philosophical Transactions of the Royal Society London B. 29 November 2004 vol. 359

[3] Narvaez, D. & Vaydich, J. (2008) Moral development and behavior under the spotlight of the neurobiological sciences. Journal of Moral Education, 37(3), 289-313.


  1. Really cool post, Julia. I remember stumbling across Dr. Greene's Ph.D. dissertation in high school when I was looking for ammunition against non-consequentialists, which I look back on as a fonder part of my childhood than one might think. It was a bit of a shock (in a good way) to see his name here.

    I think Greene's spot on about the context-dependence of neuroenhancers: I drink coffee, and it's safe, effective, delicious, and a neuroenhancer. The question for present and future nootropics isn't whether they enhance cognition, but whether they do so in a safe and effective way (and perhaps also in a way that doesn't promote inegalitarian social outcomes).

    I wonder what Dr. Greene thinks a scientifically sound moral judgment looks like. In one sense, what does it mean to promote the "best outcome?" In another sense, how good is science at predicting these outcomes? What is the evidence that the interventions he suggests produce more moral behavior, and what constitutes such behavior in the first place?


  2. Hi Julia,
    Very interesting post! I like the idea of scientifically informed moral education. My question is about something that Dr. Greene says: “I think the way to promote moral development is to promote self-knowledge, grounded in science.” To play devil’s advocate, do we really need to promote self-knowledge to improve moral reasoning? Perhaps we really only need a few experts on moral reasoning to develop evidence-based treatments (pharmacological, psychological, and/or social) for “deficiencies” in moral reasoning, which can then be administered to the rest of us non-experts. To use a (perhaps poor) analogy, for the average allergy patient, learning about how the immune system works may not be necessary for her to get better, as long as she has an expert to prescribe her medicine and tell her how to avoid allergens. Do you have thoughts about this?


  3. Thanks for your comment, Ross, and sorry for the delay in my response!

    I’m not sure if Greene is after a ‘scientifically sound moral judgment’ so much as he is after a ‘scientifically-vetted’ one. Here’s an example to help explain what I mean.

    When you look at a straw in a glass of liquid, the straw appears to be bent. Although you may have wondered why the straw was actually bent when you were little, you now recognize it as an illusion produced by refraction, and you probably came to understand refraction either by learning about it at home or in an early science class. (So what do straws and moral intuitions have to do with each other?)

    If I understand him correctly, Greene holds that some (or even an important number) of our moral intuitions are a lot like straws in a glass of water. Our moral intuitions may give us the sense that certain beliefs or actions are right or wrong, but this sense is the product of our evolved, instinctive responses, and may actually lead us astray when we make decisions. So, just as we used optics to figure out why the straw appears to be bent, we can use science to examine our quick, affective moral intuitions, and thereby learn to separate them out from what might actually be ‘the best thing to do.’ Greene gives explicitly moral examples of what he means by this in the ‘Moral Dilemmas and the Trolley Problem’ section (the crying baby is a particularly good one).

    To put it slightly differently, I don’t think Greene is suggesting that science will itself come around to formulating ‘best outcomes’ (N.B. he may believe this – but as far as I know, it just isn’t the claim he is defended so far). Instead, he views science as a way to see ourselves as though ‘from outside’ – like when I watch a video of myself teaching and think, ‘That one diagram I put on the board and thought was so awesome? It was actually crooked as hell – and impossible to read!’ So I fix it the next time I teach the course. Similarly, if science can tell us ‘from the outside’ that we make much harsher decisions when we’re hungry, we should learn not to issue life-altering verdicts before lunch. That’s not the same as saying a scientific experiment will ever be able to tell a judge how to rule on a case (although again, I’m not ready to defend or deny that claim – it’s just a different issue).

    Finally, there is a growing body of evidence that individuals who are made aware of their biases are actually better at using their conscious minds to overrule them. But how we want to define moral behavior is a whole other kettle of fish!


  4. Sorry, typo: "it just isn’t the claim [Greene] has defended so far."


  5. Hi Kristina,

    Thanks for your comment! I had to think about it a little bit before responding 🙂

    I like your analogy of an allergy patient, but I want to tweak it a little bit and consider a patient with diabetes. Let’s call her Alice. The complexities of Alice’s disease mean that several levels of self-knowledge come into play in her treatment:
    1. What the scientific community currently knows about diabetes
    2. What Alice’s doctor knows about (1), and also what she knows about Alice’s specific situation (family history, other health issues, etc.)
    3. What Alice learns from (1) and (2), including the medication she takes and the changes she makes to her lifestyle (diet and exercise are especially important for regulating blood glucose levels), together with what she knows about how her body specifically responds to certain factors (e.g. not getting enough sleep)

    I think morality and diabetes are analogous because, unlike having run-of-the-mill allergies (life-threatening allergies are another story), both will probably never be fully resolved using just a pill, and both will instead rely (to varying degrees) on ongoing lifestyle management. In other words, both will require all three levels of knowledge, from the lab all the way down to managing one’s own specific day-to-day situation. And I think Greene would include all three of these levels in his conception of self-knowledge (although he might very well break them down into different categories).

    Does that make sense?

