Pain in a Vat

Previously on this blog I've discussed the case of cultures of living rat neurons, removed from their natural environment (the inside of the skull of a rat) and grown on top of an electrical interface that allows the neurons to communicate with robotic systems - effectively, we remove part of the rat's brain, and then give this reprocessed bit of brain a new, robotic body.  One of the stranger issues that pops up with this system is that it is extraordinarily easy to 'switch' between bodies. [1] For instance, I could easily write a computer program that creates a brief, pleasant sound reminiscent of raindrops every time the culture increases its electrical activity.  Alternatively, the same burst of activity could be used to trigger an emotionless, electronic voice to say “Please help me. I am in pain.”
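To make concrete just how trivial that switch is, here is a toy Python sketch. The burst-detection threshold, the per-electrode firing rates, and the output strings are all hypothetical, invented for illustration - but note that the two "bodies" differ by nothing more than a single argument:

```python
def detect_burst(firing_rates, threshold=50.0):
    """Return True when the culture's mean firing rate exceeds a threshold (Hz)."""
    return sum(firing_rates) / len(firing_rates) > threshold

def respond(burst, mode="raindrops"):
    """Route the very same burst signal to either of two 'bodies'."""
    if not burst:
        return None
    if mode == "raindrops":
        return "play: raindrop.wav"                 # pleasant chime
    return "say: Please help me. I am in pain."     # distress voice

# Hypothetical per-electrode firing rates (Hz) during a burst
rates = [62.0, 55.1, 48.9, 71.3]
print(respond(detect_burst(rates), mode="raindrops"))
print(respond(detect_burst(rates), mode="distress"))
```

The culture's activity is identical in both runs; only our interpretation layer changes, which is exactly what makes the moral status of the signal so slippery.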

While nociception (the low-level transmission of pain information) and unconscious reactions to pain both occur in the spine and peripheral nervous system, the brain seems to hold the neurons that are responsible for the conscious sensation of pain.  This leads to the interesting suggestion that factory-farmed chickens should be grown without their brains to prevent all that unnecessary suffering from occurring.  (And if you remove the feet, the chickens are stackable!)
Is it possible for a neural culture to feel pain?  This is admittedly an absurd suggestion.  Starting from common sense and our normal range of experience, if we see a small, motionless, barely visible sliver of brain tissue sitting in a Petri dish, we have no reason to believe that we should feel sorry for it.  Even if we see that this sliver of brain tissue is actually quite active, generating a variety of patterns of electrical activity, such activity might seem so alien to us as to not be worth our attention, much less our sympathy.  But as absurd as pain in a Petri dish might sound, we do in some sense have a duty to explore the idea.  Pain and suffering are key moral issues in the treatment of biological systems.  Animal liberation ethicist Richard Ryder goes as far as to say that being able to experience pain is the only requirement for having rights [2], as pain is the only true evil.  If we are to consistently value the absence of pain, no matter who or what experiences it, do we need more comprehensive regulations that cover tissues as well as “full” animals? [3]

To begin with, let's be careful about what we mean when we use the word “pain.” [4]  Neuroscientists have long broken pain down into a “sensory” component (the location and quality of the pain) and an “affective” component (the emotional, “unpleasant” side of pain).  The “affective” component is commonly held to be the “morally relevant” one. [6]  It is interesting to note that these two aspects of pain seem to be somewhat distinct at a neural level - for instance, morphine and endorphins both selectively inhibit the activity of structures that seem to underlie the affective component [7], and humans with damage to different brain regions can report either a loss of the sensory component (lesions of the somatosensory cortex [8]) or a loss of the affective component (lesions of the anterior cingulate cortex). [9] So in principle, it might be possible to find the "affective pain circuit" in the brain, perhaps the Anterior Cingulate Cortex (ACC), and remove it from an animal.  Now we have an animal that can't suffer - but what about the tissue we removed (assuming it wasn't destroyed in the process)?  Is it still suffering? [10] Or does it need to be connected to the rest of the brain for that “suffering” to mean anything?

Electrodes placed in the Anterior Cingulate Cortex (ACC) of a patient suffering from chronic pain.  The electrodes were used to selectively lesion the ACC.  If the ACC had somehow been carefully removed, would we have had a moral obligation to prevent it from feeling pain?  From [9].
So if some sort of "affective pain circuit" were isolated [11], how would we determine whether it was in pain or not?  Neural culture poses an interesting problem here.  The methods that have been used to suggest that other living systems feel pain, such as humans in a vegetative state, or non-human animals, don't work for neural culture.  The primary method for determining if a living creature is suffering from pain is to examine its behavior, looking for things like avoidance, emotional displays, or learning to associate neutral stimuli with pain. [12]  However, I've already pointed out how problematic the notion of “behavior” is for neural culture.  A second strategy might be to examine neural activity directly, and correlate it with the activity of “full” animals - however, there is disagreement over whether ACC activity means the same thing in different animals [15], so using that method to evaluate the significance of an ACC that was completely removed is even more problematic.

Without behavior or a known set of subjective correlates to use to determine if neural culture is suffering, we are left with mathematical and philosophical tools.  One such tool is Giulio Tononi's Integrated Information Theory (IIT) of consciousness. [16] For our purposes, the important parts of this theory are that the pattern of connections in the network (not just its size, or the number of connections) plays a big part in determining how conscious it is (quantified by the “Phi” value), and that a subjective state is defined by its relationship to all other possible subjective states.  So, pain would be defined by not being pleasure, and by all of the thoughts and actions and desires that pain can cause, or be caused by. [17] From this perspective, even a cultured slice of ACC wouldn't necessarily be experiencing affective pain when removed from an animal, as its electrical activity wouldn't behave in the same way.  Additionally, as the network would be much smaller than the brain it was previously part of, the “Phi” value would be lower (and it could be argued that any affect it did have was “less rich” or “less meaningful”).
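The intuition behind "integration" can be illustrated with a toy Python sketch.  This is emphatically not Tononi's Phi - computing real Phi requires evaluating all partitions of a system's cause-effect structure - but empirical mutual information between two nodes is a crude stand-in for the idea that an integrated system carries information in its interactions that a disconnected one lacks.  The "coupled" and "independent" systems below are invented for illustration:

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (in bits) between two variables,
    estimated from a list of (a, b) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * math.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

random.seed(0)

# "Coupled" system: two nodes that always share the same state,
# a stand-in for strongly interconnected tissue.
coupled = [(bit, bit) for bit in (random.randint(0, 1) for _ in range(2000))]

# "Disconnected" system: two nodes firing independently,
# a stand-in for tissue cut off from its former network.
independent = [(random.randint(0, 1), random.randint(0, 1))
               for _ in range(2000)]

print(mutual_information(coupled))      # close to 1 bit
print(mutual_information(independent))  # close to 0 bits
```

On this crude proxy, excising a slice from the brain collapses its integration with the rest of the network to zero, which is one way to cash out the suggestion that the isolated tissue's activity no longer means what it meant in vivo.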

However, note that even neural cultures are currently far too complex and difficult to measure for us to accurately estimate their Phi values, much less to understand the structure of their "qualia space."  Despite this, the promise of theories like IIT (or future developments within that theory) highlights a role that neural culture might play in the development of tools to scientifically evaluate consciousness.  With neural culture, we are forced to use theories that tie network structure directly to conscious experience, rather than relying on "surface level" features like behavior.  And despite current limitations, it is significantly easier to access the sorts of data required to use structural theories in culture than it is in "full" organisms.  This accessibility means that as both neurotechnologies (such as electrophysiology tools) and philosophical/mathematical tools (such as IIT) develop, neural culture will likely be one of the first places where theory can meet experiment to provide a rich understanding of what generates subjective experience.

Want to cite this post?
Zeller-Townson, RT. (2013). Pain in a Vat. The Neuroethics Blog. Retrieved on

[1] Note that over time the culture could potentially learn the differences between these two systems, but in the moment immediately after the switch it would be very difficult to tell the difference.
[2] Note that Ryder's views are controversial, however, even within the animal liberation community.
[3] God, I hope not.  Dealing with IACUC is bad enough as it is.
[4] It is also important to note that pain and suffering are usually not equated.  David DeGrazia and Andrew Rowan defined [5] both pain and suffering as 'inherently unpleasant sensations' - that is, they are both feelings (rather than physical events), and are in part defined by their unpleasantness.  DeGrazia and Rowan differentiate pain from suffering by specifying that pain is sensed to be local to a specific body part, whereas suffering is not.  By specifying pain as an experience, DeGrazia and Rowan distance themselves from some of the neuroscience literature which at times interchangeably speaks of pain and nociception, the physical process by which painful stimuli are relayed to the brain.
[5] DeGrazia, David, and Andrew Rowan. "Pain, suffering, and anxiety in animals and humans." Theoretical Medicine and Bioethics 12.3 (1991): 193-211.
[6] Shriver, Adam. "Knocking out pain in livestock: Can technology succeed where morality has stalled?" Neuroethics 2.3 (2009): 115-124.
[7] Jones, Anthony K., Karl Friston, and Richard S. Frackowiak. "Localization of responses to pain in human cerebral cortex." (1992).
[8] Ploner, M., H-J. Freund, and A. Schnitzler. "Pain affect without pain sensation in a patient with a postcentral lesion." Pain 81.1 (1999): 211-214.
[9] Foltz, E. L., and L. E. White. "Pain 'relief' by frontal cingulumotomy." Journal of Neurosurgery 19 (1962): 89-100.
[10] As I'm implying that the Anterior Cingulate Cortex would be the region removed, I should be clear that I don't mean to say that this is all the ACC does.  The ACC is a pretty complicated beast, and has been implicated as playing a role in decision making, the evaluation of errors, tasks that require effort, as well as the processing of empathy and emotion.
[11] Currently, the closest experimental preparation to this would be to culture a thin slice of tissue taken from the ACC.  This is often done to investigate how the ACC differs from other regions of the cerebral cortex, including how these differences could lead to new drugs that decrease the affective component of pain.  While the ACC is the neural tissue that we might most suspect a priori to be suffering, in theory it could be possible to grow (whether by accident or design) neural circuits from scratch (by first breaking down the connections between the neurons, and then allowing them to re-grow, a process called dissociation) that in some way replicate the "suffering" experience of the in vivo ACC.  The following discussion applies equally to ACC slices and dissociated culture.
[12] While it is easy to imagine all of these behaviors being performed by an unfeeling robot that was attempting to trick us into feeling sorry for it, it is interesting to note how much that last item (learning) is suggestive of a subjective negative experience.  The ACC, as mentioned, is used for several things beyond just feeling bad - it also appears to be used for turning that bad feeling into a learning experience, where the ACC-equipped neural system learns to avoid whatever caused that painful experience in the first place. [13]  Thus, animals that can learn from pain (a category which was recently found to include crabs [14]) might be equipped with other ACC-associated properties, like suffering.  I'm curious how tricky it would be to argue that the subjective experience of suffering is the mental correlate of high-level avoidance learning - implying that if one learns to avoid abstract entities through association with nociception, one is suffering.  This view would further imply that temporary suffering is natural and even necessary for life, and that morally relevant suffering is effectively attempting to avoid something that cannot be avoided.
[13] Johansen, Joshua P., Howard L. Fields, and Barton H. Manning. "The affective component of pain in rodents: direct evidence for a contribution of the anterior cingulate cortex." Proceedings of the National Academy of Sciences 98.14 (2001): 8077-8082.
[14] Magee, Barry, and Robert W. Elwood. "Shock avoidance by discrimination learning in the shore crab (Carcinus maenas) is consistent with a key criterion for pain." The Journal of Experimental Biology 216.3 (2013): 353-358.
[15] Farah, Martha J. "Neuroethics and the problem of other minds: implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals." Neuroethics 1.1 (2008): 9-18.
[16] Tononi, Giulio. "An information integration theory of consciousness." BMC neuroscience 5.1 (2004): 42.
[17] IIT also gives us a framework to tackle the question of why neural tissue should be so privileged with respect to consciousness - what about other biological networks, like the immune system?  What about non-biological networks, like simulated neural networks or even the internet?  IIT says that the extent of consciousness is determined by the variety of possible states the system can be in, as well as how much the subcomponents of the network communicate with each other.  Thus, in principle any well-connected network could be conscious, but some neural systems seem to be optimized for high levels of consciousness.
