Consciousness and Ethical Pain

Imagine you find that a beloved uncle has received a terrible injury that leaves him paralyzed but still totally aware of his environment – a condition known as locked-in syndrome. Now imagine that a doctor comes to you with a miracle cure: a new experimental treatment will repair your uncle’s damaged brainstem, allowing him to regain control of his body. The catch, however, is that this procedure is extremely painful. It actually seems like it might be the most painful experience possible: fMRI scans reveal that all the brain regions that are active during extreme pain are activated during this (imaginary) procedure. And it lasts for hours. However, your uncle won’t complain about the procedure because 1) he’s paralyzed and thus can’t voice his suffering, and 2) the experience of this miracle treatment will mercifully be forgotten once the procedure is over, so your uncle won’t raise any complaint afterwards. While many of us would probably sign off on the procedure, we might still feel guilty as we imagine what it must be like to go through that, even if our uncle wouldn’t recall it later.

The neural ‘signature’ of pain, as seen through fMRI [9].

This scenario is meant to illustrate that there seems to be an aspect of the moral weight of pain – its significance in ethical discussion, decision making, and guilt – that has to do specifically with what pain feels like. Not the way it makes us act, not the danger it represents, but that first-person, qualitative, subjective experience of being in pain, or suffering through pain. The ability to have such qualitative, subjective experiences is called qualitative (or sometimes phenomenal) consciousness. We tend to assume that most humans are conscious, and that this is the primary reason why hurting them is wrong – indirect selfish reasons (like avoiding jail time or losing them as a friend and ally) are seen as secondary to this primary fact: the evil of pain[1].

For this reason, discussions of pain in unfamiliar creatures (a term I’m using for anything that isn’t able to explicitly tell you how it feels – including humans with certain neurological conditions, almost all non-human animals, and perhaps even stranger entities) are often intimately tied to the possibility of that creature being conscious. This occurs, for instance, when deciding whether a patient with Unresponsive Wakefulness Syndrome (formerly called vegetative state) should receive analgesia[2,3], or when debating the necessary precautions that should be taken when fishing or slaughtering chickens. If it can be demonstrated that something doesn’t meet our requirements for consciousness, suddenly we have free rein to treat that thing as more of an object than a person[4]. If consciousness is suspected, on the other hand, we become much more cautious with our treatment of the entity.

Unfortunately, consciousness is very difficult to work with, especially from a neuroscience perspective. Since qualitative consciousness is by definition a private, personal thing, neuroscientists are limited to dealing with consciousness indirectly. This is done by looking at neural correlates of consciousness. These are the physical events that occur in brains that we agree are conscious (such as awake, healthy humans) but not in brains that we agree aren’t conscious (such as the brains of humans that are in dreamless sleep)[5]. By comparing and contrasting these two ‘known’ points, it should be possible to identify what it is about ‘conscious’ brains that gives them ‘consciousness,’ right?

In healthy humans, probably. The danger might come from extrapolating these results to a neuroessentialist view of consciousness – the idea that consciousness is created purely as the result of the brain doing its thing. We might imagine that some neural circuit, once carefully tuned through currently unknown self-organizing properties, spontaneously generates a mysterious feedback loop. Like Robert Downey Jr’s glowing arc reactor in the Iron Man films, this unassuming physical matter almost magically produces something totally alien and awesome. But instead of enough power to defeat Jeff Bridges in hand-to-hand combat, the neural circuit produces something much stranger – a metaphysical shift that creates a conscious entity in a sea of unconscious biochemical machinery (that is, the rest of the brain). If this circuit were missing from either a brain-damaged human or a non-human animal (or not yet fully developed in newborn infants), we could calmly assert that no matter how much these entities appeared to suffer, such displays were unconscious reflexes that didn’t actually reflect any internal, private, real suffering. They would merely be biological machines, with the same moral status as toenails. As Steven Pinker phrased it in a 2007 article in Time [6], “…once we realize that our own consciousness is a product of our brains and that other people have brains like ours, a denial of other people’s sentience becomes ludicrous. ‘Hath not a Jew eyes?’ asked Shylock. Today the question is more pointed: Hath not a Jew–or an Arab, or an African, or a baby, or a dog–a cerebral cortex and a thalamus?”

Compare and contrast. Left: the Arc Reactor activates in the first Iron Man film. Right: one of many blue glowing brains that one can find while looking through popular articles on neuroscience. Is this how we should think about consciousness emerging from the brain?

An issue that can be raised with this neuroessentialist view is that consciousness and suffering, in their roles of marking creatures as having a moral status, don’t necessarily refer to events that happen in the nervous system.  Instead, they could refer to ethical relationships between entities, irrespective of the underlying neural circuitry that might enable those relationships.  Much like a giant set of quadriceps might enable a sprinter to win a race, a healthy brain might enable ethical relationships between humans.  However, much like the quadriceps need skeletal and nervous systems, as well as the cultural notion of a ‘race’ and a worthy opponent in order to win, so too does a brain need ‘opposing players’ and a culturally constructed moral landscape to participate in any ethically charged interaction, including pain and suffering.  ‘Winning’ isn’t something that occurs purely in the muscles of the sprinter, and neither is ‘suffering’ something that occurs purely in the brain.  

This dependence of morally charged terms like ‘consciousness’ and ‘suffering’ on non-neural components has been argued in discussions of the moral status of both humans and non-humans. In regards to humans, neuroethicist Grant Gillett points out that human identity and subjectivity (a term that is related to qualitative consciousness, specifically its necessarily private nature) are constructed through social relationships, and cannot be reduced down to individual neural circuits[7]. Likewise, neuroscientist Patrick Wall admonished veterinarians to avoid using human conceptions of pain to describe neural events in non-human animals. Dr. Wall suggested that such a practice was purely culturally driven, rather than based on scientific evidence, and that non-human animals should be dealt with using values derived from their own reality, not by imposing human values onto them[8]. We might imagine that if the concept of consciousness developed to describe something about humans (about their moral status, ability to socially interact or have intentions, etc.), using consciousness as the measuring stick to determine moral status in non-humans is effectively saying, “My compassion towards you is proportional to your resemblance towards me.”

What are we left with if we avoid using consciousness to determine the value of neural correlates of pain and suffering (such as the fMRI signal discussed earlier)?  Should we still feel bad about the uncle we abandoned after the first paragraph?  I think the answer is still yes, but for different reasons than those initially given.  I think that the neural correlates of suffering that are now accessible through fMRI[9] and similar devices don’t provide direct insight into a hidden world of subjective experience, so much as they provide novel channels of communication with otherwise isolated individuals[10].  Thus, these patients can now voice their suffering and participate in an ethically charged interaction – an interaction that is similar to describing their suffering verbally, though now mediated through complicated machinery that they do not control.  Likewise, a neural correlate of (human) suffering in non-human animals needs to be viewed as, at most, a cry for help rather than a private hell. It should be understood that different animals will likely cry for help in totally different ways (and thus have totally different neural correlates of suffering-like states).  Using this perspective, neuroscience is simply one more tool to enable ethically charged interactions to occur, rather than a final statement about the reality underlying all ethical interactions.  


[1] Ryder, Richard. “All beings that feel pain deserve human rights.” The Guardian, 6 August 2005.
[2] Farisco, Michele. “The ethical pain.” Neuroethics (2011): 1-12.
[3] Demertzi, Athina, et al. “Pain perception in disorders of consciousness: neuroscience, clinical care, and ethics in dialogue.” Neuroethics (2012): 1-14.
[4] Shriver, Adam. “Knocking out pain in livestock: Can technology succeed where morality has stalled?” Neuroethics 2.3 (2009): 115-124.
[5] Tononi, Giulio. “An information integration theory of consciousness.” BMC Neuroscience 5.1 (2004): 42.
[6] Pinker, Steven. “The mystery of consciousness.” Time Magazine 29 (2007): 55-70.
[7] Gillett, Grant R. “The subjective brain, identity, and neuroethics.” The American Journal of Bioethics 9.9 (2009): 5-13.
[8] Wall, Patrick D. “Defining pain in animals.” Animal Pain. New York: Churchill-Livingstone (1992): 63-79.
[9] Wager, Tor D., et al. “An fMRI-based neurologic signature of physical pain.” New England Journal of Medicine 368.15 (2013): 1388-1397.
[10] This view is also very similar to one discussed in: Levy, Neil, and Julian Savulescu. “Moral significance of phenomenal consciousness.” Progress in Brain Research 177 (2009): 361-370.

Want to cite this post?

Zeller-Townson, RT. (2013). Consciousness and the Ethical Pain. The Neuroethics Blog.


  1. Aren't pain and suffering two distinct concepts? Should pain be looked at only biologically, and suffering as a social construct (as you do here)? How much pain a human or animal feels – is that not distinct from the social and cultural influences?


  2. Good catch, Zi! Pain and suffering are certainly not identical, despite the fact that I use them somewhat interchangeably in this post. However, there is also some overlap between the two. The IASP definition of pain includes a subjective component ('unpleasant…experience'), which is a strong part of at least some definitions of suffering (DeGrazia, David, and Andrew Rowan. "Pain, suffering, and anxiety in animals and humans." Theoretical Medicine and Bioethics 12.3 (1991): 193-211.) I do agree that the neural structures that relay nociceptive and other types of pain information to the brain are certainly more solidly in the domain of biology than the multitude of psychological, cultural, social, and biological structures that lead to suffering. To clarify my point, though: I think that if we define pain and suffering as experiences that have qualitative character and intrinsic ethical value, then pain and suffering immediately also take on a strong social component.


  3. Two great books that discuss consciousness in general as well as pain and consciousness are

    Consciousness: Confessions of a Romantic Reductionist by Christof Koch


    Incognito: The Secret Lives of the Brain by David Eagleman

    I agree that pain and suffering are distinct but tightly related concepts. Suffering requires reflection of pain onto oneself, be it human or animal (i.e., that the pain is occurring to them), and requires some degree of consciousness.

