Tuesday, April 30, 2013

A Social Account of Suffering

50,000 cultured brain cells sit in a petri dish. Through a combination of electronic sensors, software engineering, and robotic sculpture, the physiology of the cells interacts with the psychology of some patrons of an art gallery [1]. From this transaction, judgments arise: the audience might report feelings of being watched, of play, or simply of remotely observing an oblivious 'seizure machine.' One particular type of audience member, the Animal Ethicist, might even wonder whether we should be worried that the culture of brain cells (derived, after all, from an animal) might be in pain.

Brain cells, electrodes, and tiny Peter Singer (image from here). 
While in most cases it is fairly straightforward to determine that a human is in pain, when one starts to ask whether non-humans (or even humans with severe communication problems, such as locked-in patients) are in pain, it is common to turn to neuroscience for help. The idea is that while mental states (such as pain and suffering) can only be deduced from behavior if the behaviors are 'wired up correctly,' mental states are always (or non-contingently, to borrow the language of Dr. Martha Farah [2]) related to brain states. Thus, the tools of neuroscience can give us direct access to the amount of pain an organism is experiencing, bypassing a body that might hide this information from us (whether because of injury, or because the body was never equipped with a human face). We are obligated to perform this scientific investigation, as we have an obligation to prevent pain and suffering.

Pain is something the brain does.  Nociception sends information about tissue damage (1) through the spinal cord, where such information can be modulated (2).  However, pain doesn't really become that nasty, unpleasant experience until it weasels its way into your limbic system (4).  Image retrieved from here.  

At this point, we are working with the following assumption:

1) “Neural systems are the substrate of private, internal 'mental' events (such as “suffering”), which can be a source of moral value”

Amusingly enough, this assumption can lead us to a mathematical formulation of suffering. The logic is as follows: the human brain, as well as perhaps other brains, has the capacity for suffering. We can be somewhat precise about this, and say that this capacity is the 'natural' ability to orchestrate behaviors that have been identified as correlates of suffering (such as freezing, favoring limbs, calling for help, and 'pained' facial expressions), in response to stimuli that are 'naturally' associated with suffering (extended bouts of pain, either physical or social). The circuitry within the brain that provides this capacity, while not yet fully understood, should in principle be a physical, deterministic system. As part of a physical, deterministic system, this circuitry should be describable as a set of mathematical relationships. Furthermore, as we earlier decided that mental states are non-contingently related to brain states (and only contingently related to behavior), it is this mathematical relationship (which the 'suffering-behaviors' merely point to) that is the essence of suffering [3].

Is pain merely a mathematical construct?  Image from here
The implications of this are delightfully absurd: if suffering is a mathematical relationship, then any system that implements that relationship (a rat, a culture of brain cells, or a deviously engineered toaster) should qualify as being able to suffer, and under some ethical systems (here I am alluding to those of the animal ethicists Peter Singer and Richard Ryder) thus enter into the realm of morally relevant beings. That is, it becomes a moral obligation to prevent these entities from suffering, no matter how silly or alien their “suffering” might appear, simply due to the physical laws that govern part of their behavior.
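The multiple-realizability argument above can be sketched in code. Everything in the sketch is invented for illustration: the stimuli, the behavioral correlates, and the two "implementing systems" are toy stand-ins, and a real 'suffering relation' would presumably be vastly more complex than a lookup table.

```python
# Toy sketch of multiple realizability (all names and mappings invented
# for illustration). If "suffering" is just a stimulus-to-behavior
# relationship, any substrate implementing that relationship qualifies.

def suffering_relation(stimulus):
    """The hypothetical mathematical relationship: noxious stimuli
    mapped to behavioral correlates of suffering."""
    correlates = {
        "tissue damage": "freezing",
        "social rejection": "calling for help",
        "extended pain": "pained facial expression",
    }
    return correlates.get(stimulus, "no response")


class Rat:
    """Implements the relation in neurons (so the story goes)."""
    def respond(self, stimulus):
        return suffering_relation(stimulus)


class DeviouslyEngineeredToaster:
    """Implements the very same relation in heating coils and firmware."""
    def respond(self, stimulus):
        return suffering_relation(stimulus)


# Functionally, the two systems are indistinguishable, so under
# assumption 1 both would count as morally relevant sufferers.
rat, toaster = Rat(), DeviouslyEngineeredToaster()
print(rat.respond("tissue damage"))      # freezing
print(toaster.respond("tissue damage"))  # freezing
```

The point of the sketch is precisely its absurdity: nothing in the functionalist picture distinguishes the rat from the toaster, which is what puts the toaster into the realm of morally relevant beings.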

Before we get too carried away (and start passing laws against posting such equations in public), let us carefully revisit assumption number 1. This statement can be problematized from a variety of perspectives (it's Cartesian, for God's sake!), but let's stick to a neuroscientific one. First, we've supposed the existence of private, internal events (“qualia”) that can't be directly measured, and that have to be accounted for in addition to the (public) neural events that we can measure. Secondly, we've given these mysterious, ghostly events control over value, effectively tying one mystery to another. Compare the above assumption, then, to the one below:

2) “Neural systems are the substrate of public, embedded 'social' events (such as “suffering”), which can be a source of moral value”

This small change in wording solves the two problems outlined with assumption 1. First, we no longer have to contend with an awkward metaphysics that describes two kinds of events (internal, unmeasurable “mental” events and our normal, measurable “physical” events); instead, all events are now public and capable of being measured (which, as scientists, allows us a sigh of relief). Secondly, we no longer have value popping out of nowhere. Instead, suffering has moral value because it is a social event: a social, 'embedded' subject is needed to judge that suffering has occurred, and in doing so judges this suffering to be 'bad.'
This can be seen as emphasizing (after Dr. Grant Gillett [4] and Dr. Daniel Goldberg [5]) the social components of subjectivity (specifically, handing the definitions of subjective experiences to the social realm), while denying that subjectivity exists outside of the social realm (after Dr. Daniel Dennett [6]).
Is "do cockroaches feel pain?" a question for neuroscientists studying cockroaches, or for social scientists studying humans?  Animated version here!
One disadvantage of this perspective is that it limits what we can expect to learn about pain if neuroscience focuses exclusively on "pain-pathways" while ignoring empathy. This social perspective on suffering suggests that multiple entities (or at least multiple systems in the same entity) must be interacting before we can talk about suffering, or subjective states at all [7]. Whatever abstractions or circuits we come across within a single brain must instead be considered the building blocks of subjectivity and morality, and not equivalent to them. This implies an obligation to use value-neutral descriptions of neural states and circuits, or risk confusing "the substrate for the reality," in the words of Dr. Gillett [4]. Thus, we can't look at a culture of rat neurons sitting in a dish and evaluate their subjective state using the techniques of neuroscience: the question is one for the audience.

Lastly, one advantage of this conception of suffering: by not rejecting the social component of suffering, we are forced to accept suffering as a thick, value-laden concept that escapes reduction to a sterile set of equations or a carnal set of 'brain states.' Suffering is instead seen as a function not just of the individual, but also of the ever-changing culture the individual exists within.


[1] Zeller-Townson, RT. (2012). Why use Brain Cells in Art? The Neuroethics Blog. Retrieved on April 29, 2013, from http://www.theneuroethicsblog.com/2012/09/why-use-brain-cells-in-art.html
[2] Farah, Martha J. "Neuroethics and the problem of other minds: implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals." Neuroethics 1.1 (2008): 9-18.
[3] Unless you hold that the fact that these equations are being expressed through neurons, rather than other physical systems, is morally salient- but that is another blog post.  I'm adopting a functionalist stance in this post.
[4] Gillett, Grant R. "The subjective brain, identity, and neuroethics." The American Journal of Bioethics 9.9 (2009): 5-13.
[5] I'd like to thank Dr. Daniel Goldberg for pointing out these particular accounts of pain (via twitter, of all places!) Goldberg, Daniel. "Subjectivity, consciousness, and pain: The importance of thinking phenomenologically." The American Journal of Bioethics 9.9 (2009): 14-16.
[6] Dennett, Daniel C. Consciousness explained. ePenguin, 1993.
[7] This isn't to say that someone can't suffer (or experience other, more pleasant qualia) in isolation. I am merely suggesting that the judgment that this subject makes, "I am suffering," is as much a function of the subject's brain state as it is of the social environment that shaped the subject's operational definition of 'suffering.' However, I am saying that if no one ever applies the label "suffering" to a state (say, the struggles of a virion as it battles your immune system), then it hasn't suffered.


Want to cite this post? Zeller-Townson, R. (2013). A Social Account of Suffering. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2013/04/a-social-account-of-suffering.html#more

3 comments:

James Burkett said...

Both assumptions are highly problematic. With assumption 1, we are led to the somewhat questionable conclusion that any system which produces a set of responses to a specified stimulus can be said to "suffer." This leads us to an overly broad definition for suffering; one that applies the same interpretation to massively different systems organized in completely different ways, and for which the same responses may not have isometric meanings.
Assumption 2 is also problematic. Assuming the only valuable component of "suffering" is social leads to a tree-falling-in-the-woods kind of absurdity. We all have the power to be the objects of our own subjectivity, which is to say that we can reflect on our own mental states. Suffering therefore does not need to be shared socially in order to be observable and measurable.

BubbaRich said...

I agree completely with James about assumption 2.

I'm not a strict behaviorist, and I'm certainly not going to define suffering purely on behavior. Like all cognitive constructs, I think it can be represented very well as a mathematical construct, but like all cognitive constructs, it is immensely complex. Part of that is just the complexity of the wires in the innate "AVOID THIS" reaction. But a more important (IMO) part is the more complex connections of analogies and associations that we learn and build up around it.

Riley Zeller-Townson said...

Thanks for your well-reasoned disagreement, James! First, to defend my straw man, assumption 1: once we start talking about organisms that aren't us (and whose subjective states we therefore lack direct access to), behaviors themselves are what we use to judge whether suffering has occurred. (Even neuroanatomical comparisons rest on first correlating specific structures with suffering behaviors.) Would you care if you found out your beloved pet (or your beloved Martian friend, in the case of David Lewis's "Mad Pain and Martian Pain") had a nervous system completely different from your own, at least when attributing the mental state of "pain" to it? Or would you instead refer to the behaviors of that organism to judge such a state? Keep in mind that a robot that mimics such behaviors very likely differs from an animal in important ways: it would be difficult to take the pain of my cat seriously if I could quickly stop the cat's meowing by changing a few lines of code.
As to assumption 2: yep, you definitely caught a major issue here! Perhaps a better wording of my point is that suffering is defined by empathy. Taking some freedom with the definition, one can still "empathize" with oneself in isolation, preventing the tree-falling-in-the-woods issue you point out. However, what we (as humans) empathize with is very much a function of our social history: how we interact with others and how we are taught to interact with others. This isn't to say that "suffering" is totally divorced from biology (there are biological reasons for a society to attribute pain to one thing and not another), but it is to say that I don't think neuroscience will ever find a "suffering circuit" that is conserved across all organisms we attribute suffering to, and absent from all those we do not (the "Cartesian Pineal Gland of Suffering"). Instead, I think it much more likely that we will find a bunch of different circuits orchestrating behaviors whose common feature is that we empathize with all of them, and that criterion will surely change as society does, in part due to changes in public awareness of neuroscience.