Tuesday, December 17, 2013

200th Post! Why is Neurodiversity Useful?

Neurodiversity is a term that was coined by Australian social scientist and autism advocate Judy Singer. In her 1998 thesis, she wrote: “For me, the key significance of the ‘Autistic Spectrum’ lies in its call for and anticipation of a politics of Neurological Diversity, or what I want to call ‘Neurodiversity.’ The ‘Neurologically Different’ represent a new addition to the familiar political categories of class/gender/race and will augment the insights of the social model of disability.”[1] Similar to the way biodiversity is discussed as critical to the stability of the ecosystem, neurodiversity is considered to be critical for human and cultural stability. In other words, Autism Spectrum Disorders (ASD) and other neurological differences should be a part of our community and, thus, neither cured nor subject to intense rehabilitative or normalizing efforts. Before I discuss how neurodiversity is useful to my work and to ASD-related professions, I want to quickly review ASD and my current project for the Neuroethics Scholar Program.
Source: Cafe Press
ASD is traditionally defined as a neurodevelopmental disorder that affects a person’s social and communicative style and includes frequent displays of specific behavioral patterns.[2] There is a huge range of autistic expression, from significantly impaired to subtle displays of autistic characteristics. As the Neuroethics Scholar at Emory’s Center for Ethics, I am working on a project at the Marcus Autism Center exploring how to communicate the results of future infant screeners for ASD to parents. This project was described in The Neuroethics Blog on October 1, 2013. A neurodiverse perspective informs my work in two important ways: shaping the language I use to talk about ASD and ensuring I maintain a focus on the quality of life for ASD individuals and their families. I believe that neurodiversity can be similarly important for all professionals working with and studying ASD or related disabilities.

Tuesday, December 10, 2013

It's Complicated: Molly Crockett and Patricia Churchland Discuss the Future of the Neuroscience of Morality

Last month, as a recipient of the Emory Neuroethics Program Neuroethics Travel Award, I had the wonderful opportunity of attending the International Neuroethics Society Annual Meeting in San Diego, California. The conference brought together leading neuroethics scholars from around the world and focused on the themes of moral enhancement, disorders of consciousness, and the role of neuroscience in the courtroom. (The conference was structured around three star-studded panels. For a full program, please visit here. For full videos of the panels, please visit here.) There were also five oral presentations and a poster session. As part of the event, I exhibited a poster entitled “Revising Weakness of Will: A Reply to Neil Levy,” where I challenged Levy’s use of the theory of ego depletion as an explanation of weakness of will and provided an alternate, neurocomputational account.

Presenting my poster at INS.
Photo credit: Karen Rommelfanger
As a philosopher interested in the intersection of the computational neurosciences and morality, I found “The Science and Ethics of Moral Enhancement” session particularly enlightening. It brought together three leading women neuroethics scholars, Barbara Sahakian (as moderator), Molly Crockett, and Patricia Churchland, as well as neuroethicist Julian Savulescu of the Oxford Centre for Neuroethics. It was a remarkable conversation. Throughout their discussions, and even in the question period that followed, I was struck by how clearheaded the panelists were about the challenges facing the field. At the same time, and despite their very different perspectives, they evidently shared a real optimism about the future of this area of research. Sahakian, a neuroscientist and neuroethicist at Cambridge University, set the tone as moderator by explaining that the panelists would tackle “the science of what’s possible now,” but also look at “what we may be able to do in the future.”

Wednesday, December 4, 2013

Bias in the Academy Video Archive of Presymposium Seminars

Neuroethics Symposium December 10, 2013

Bias in the Academy: From Neural Networks to Social Networks

 

This neuroethics symposium is designed to discuss the complex influence of stereotype/bias on academia and apply advances in the science of stereotype bias to university policies and practices. Through a pre-symposium seminar series and the symposium itself, a white paper will be produced to highlight challenges and to put forth practical solutions to move toward mitigating the detrimental influence of bias and stereotyping in academia.





Tuesday, December 3, 2013

Neuroethics Journal Club: Neural Correlates of Negative Stereotype

Our everyday perceptions of others can potentially be biased by cultural stereotypes. However, research has suggested that an initial, and often negative, stereotype can be downregulated via a highly connected neural network. While this regulatory process has been studied under neutral conditions, for the third journal club of the semester Neuroscience graduate student Kim Lang led a discussion about regulation of this neural network when White individuals are not under neutral conditions, but actually primed for negative African American stereotyping.

A recent paper published by Forbes et al. used functional magnetic resonance imaging (fMRI) to study the amygdala, the prefrontal cortex (PFC), and the orbitofrontal cortex (OFC), three highly interconnected brain regions important for stereotyping and bias. Studies have shown that the amygdala, involved in arousal, is activated immediately when encountering a so-called out-group member. This first response can be downregulated, though, if an individual is given time for non-biased deliberation, and this downregulation is reflected by activation in the PFC. The OFC acts as a regulator of these two regions, especially when initial negative stereotyping conflicts with an egalitarian view. Prior research demonstrated this amygdala inhibition by the lateral PFC in an experiment in which White participants were shown Black faces either in rapid succession (30 ms) or at a slower rate (525 ms). When participants did not have time to reflect on the faces during the fast exposure condition, enhanced amygdala activation was observed, reflecting the early arousal response. During the slow exposure condition, however, amygdala activity was not enhanced. Instead, increased activity was observed in the dorsolateral prefrontal cortex (DLPFC), which correlated with decreased amygdala activation (Cunningham et al.). This study suggests that, given enough time, a biased view reflected in amygdala activation can be reconsidered.
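To make the logic of that contrast concrete, here is a minimal Python sketch with simulated numbers (none of these values come from Forbes et al. or Cunningham et al.): a within-subject comparison of amygdala responses across fast and slow exposures, plus the correlation between DLPFC activity and the drop in amygdala response.

# Minimal sketch with simulated data (not values from the studies discussed above):
# a fast-vs-slow exposure contrast and the DLPFC/amygdala relationship.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects = 20

# Hypothetical amygdala responses per participant (arbitrary units)
amygdala_fast = rng.normal(loc=1.0, scale=0.3, size=n_subjects)  # 30 ms exposures
amygdala_slow = rng.normal(loc=0.6, scale=0.3, size=n_subjects)  # 525 ms exposures

# Build in the claimed relationship for illustration: more DLPFC activity,
# larger reduction in the amygdala response during slow exposures.
reduction = amygdala_fast - amygdala_slow
dlpfc_slow = 0.8 * reduction + rng.normal(scale=0.2, size=n_subjects)

# Within-subject contrast: is amygdala activation higher for fast than for slow exposure?
t, p = stats.ttest_rel(amygdala_fast, amygdala_slow)
print(f"fast vs. slow amygdala contrast: t = {t:.2f}, p = {p:.3f}")

# Correlation between DLPFC activity and the reduction in amygdala response
r, p_r = stats.pearsonr(dlpfc_slow, reduction)
print(f"DLPFC vs. amygdala reduction: r = {r:.2f}, p = {p_r:.3f}")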

Adapted from The Jury Expert

Tuesday, November 26, 2013

Just Neurons?

Neuroessentialism is the belief that you, your mind, your identity, are essentially just your brain. It gets touted as an example of how science has triumphed, once again, over superstitions of the past - your soul hasn't died, it was just an illusion created by the brain! With memory, sensation, speech, and just about every other human attribute found to be located in one gyrus or another, it seems like there isn't anything left that could be outside of the brain. Francis Crick referred to this as the “astonishing hypothesis[1],” and while Steven Pinker pointed out that for most neuroscientists this idea hardly warranted much astonishment[2], what might be more astonishing is how quickly the idea is bleeding out of the laboratory into popular media. The basic philosophical foundations of this notion have been around for a long time (as mentioned on the [highly entertaining] podcast “Very Bad Wizards,” we've known for a long time that when you remove the head, the mind ceases to function. Grant Gillett mentions that even Aristotle held that the mind emerges as a function of the body, rather than a separate spiritual entity that somehow inhabits the body). However, the recent attention that neuroscience has been getting (especially with the advent of fMRI, which enabled a huge number of studies on healthy, awake humans) appears to have made this an easier pill for the public at large to swallow. Dr. Peter Reiner has even gone as far as to document the rise of neuroessentialism and has begun to map out the potential positive and negative effects of this cultural shift[3].

A distant relative of the brain-in-a-vat: the brain-in-a-hat.  "I am a brain, Watson. The rest of me is a mere appendix. Therefore, it is the brain I must consider." A very neuroessentialist sentiment from Sherlock Holmes in The Adventure of the Mazarin Stone by Sir Arthur Conan Doyle.  Image from here.

With the rise of any sort of public awareness, we should expect there to be reactionaries who warn us about going too far with our new idea, and neuroessentialism is no different. Rather than defending old arguments in the face of overwhelming experimental evidence, these thinkers instead point cautiously forward and advise us to make our claims about the mind carefully, rather than jump on the neuro-bandwagon. I'd like to highlight two of these recent anti-essentialist thinkers here. Both argue for what might be considered further expansions of Clark and Chalmers' extended mind hypothesis[4]--which itself argued that pen & paper, computers (and today, cell phones) could all be considered vital components of cognitive processes, and thus part of the mind.

Tuesday, November 19, 2013

Neuroethics Journal Club: Sexual Fantasies and Gender/Sex

In May of 2013, The New York Times Magazine published an article discussing the ongoing clinical trials of a unique new drug that caught the interest of Emory University neuroscience graduate student Mallory Bowers. The drug, dubbed “Lybrido”, was being tested for its ability to improve sexual desire in women. However, Lybrido is not just a female Viagra-like formulation. That is apparently one part of it, but the other, perhaps more surprising, part is the pill’s testosterone coating, which is designed to melt away immediately in the mouth. To better understand how testosterone (T) could modulate female desire, and to discuss the neuroethical implications of pharmaceutically targeting it, Ms. Bowers chose a recent paper in the Journal of Sex Research by Goldey et al. entitled “Sexual Fantasies and Gender/Sex: A Multimethod Approach with Quantitative Content Analysis and Hormonal Responses” for the second Neuroethics Journal Club of the year.


Tuesday, November 12, 2013

The Future of Law and Neuroscience: An Interview with Owen Jones, The Director of the MacArthur Research Network on Law and Neuroscience

After watching the PBS “Brains on Trial” special that featured innovative brain imaging technologies and examined the subsequent implications for the legal field, I decided to take a deeper look at the status of current neuroscience research and the future ramifications for the emerging field of neurolaw. To that end, I interviewed Professor Owen Jones. Owen Jones currently directs the MacArthur Foundation Research Network on Law and Neuroscience, taking the lead in crafting a conceptual framework that seeks to define and outline many of the legal issues surrounding recent neuroscientific findings. Jones also designed, created, and now directs this network, an unprecedented interdisciplinary effort that has called upon scholars from a myriad of fields to examine how neuroscience can inform legal decisions in criminal contexts.

Wednesday, November 6, 2013

Now Available! Bias in the Academy Pre-Symposium Series Archives on YouTube

This year's Neuroethics Symposium, a partnership of Emory's Neuroscience Graduate Program, the Laney Graduate School, and the Emory Center for Ethics Neuroethics Program, is designed to discuss the complex influence of stereotype/bias on academia and apply advances in the science of stereotype bias to university policies and practices. Through a pre-symposium seminar series and the symposium itself, a white paper will be produced to highlight challenges and to put forth practical solutions to move toward mitigating the detrimental influence of bias and stereotyping in academia.

The first of four pre-symposium seminars was led by neuroscience graduate student Jacob Billings. The video of this seminar is available below.


An Introduction to Bias: A Social Network Primer 
Jacob Billings, Neuroscience graduate student, Emory University 


Tuesday, November 5, 2013

Experimental Neuroethics

By Peter Reiner, VMD, PhD

Dr. Reiner is Professor in the National Core for Neuroethics, a member of the Kinsmen Laboratory of Neurological Research, Department of Psychiatry and the Brain Research Centre at the University of British Columbia, and a member of the AJOB Neuroscience Editorial Board.

Four years ago, Neil Levy gave the concluding lecture at the first Brain Matters conference in Halifax. He alerted the audience of neuroethicists to the fact that the field of philosophy was undergoing a revolution – rather than muse from their armchairs in the ivory tower, a group of renegade philosophers were carrying out real experiments, asking people what their intuitions were about central issues in philosophy. Dubbed experimental philosophy, the new initiative was met with more than passing resistance from traditional philosophers. The apostate experimental philosophers responded by developing a logo of a burning armchair.

Photo credit: Timothy Epp, Shutterstock

The landmark experiment was carried out by Josh Knobe, and its findings subsequently became known as the Knobe effect (you can watch a great recreation of the phenomenon in this YouTube video). Essentially, what Josh did was repurpose an old method from social psychology called the contrastive vignette technique (CVT) [1]. At its simplest, the CVT involves designing a pair of vignettes that carefully describe a particular situation (in the case of experimental philosophy, one that is often morally charged) but crucially differ in one detail, hence the term contrastive. Respondents see one and only one version of the vignette, and are then asked questions about what they have just read, with responses commonly recorded as a numerical rating on a Likert scale. By comparing the averaged responses between separate groups of people who have read the vignettes, the experimenter can systematically investigate the effects of small changes (of which the respondents are entirely unaware) upon attitudes towards nearly any topic. The experimental philosophers tend to use the technique to explore the meaning of concepts. Neil Levy pointed out that this same approach could, in principle, be applied to the full range of issues in neuroethics.
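For readers curious about how CVT responses are usually analyzed, here is a brief Python sketch with entirely hypothetical Likert ratings; the vignette framing, group sizes, and choice of test are invented for illustration and are not drawn from Knobe's study.

# Minimal sketch (hypothetical data): analyzing a contrastive vignette study.
# Each respondent sees only one version; we compare averaged Likert ratings between groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 7-point ratings (e.g., "How intentional was the outcome?")
version_a = rng.integers(1, 8, size=40)  # vignette with a "harm" side effect (assumed framing)
version_b = rng.integers(1, 8, size=40)  # identical vignette with a "help" side effect

# Between-groups comparison of the averaged responses
t, p = stats.ttest_ind(version_a, version_b, equal_var=False)
print(f"Version A mean: {version_a.mean():.2f}, Version B mean: {version_b.mean():.2f}")
print(f"Welch's t = {t:.2f}, p = {p:.3f}")

Because the two groups never see the other version, any reliable difference in the averaged ratings can be attributed to the single detail that distinguishes the vignettes.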

Tuesday, October 29, 2013

Neuroethics Journal Club: The Ethical Issues behind Brain-to-Brain Interface (BTBI) Technologies

The first Neuroethics Journal Club of the Fall 2013 semester was a discussion led by graduate student John Trimper on the ethical implications behind brain-to-brain interface (BTBI) technologies. John introduced the topic by presenting the experimental details and results from a recent paper, published by the Nicolelis lab at Duke University (Pais-Vieira et al.), where researchers utilized a BTBI to transfer sensorimotor information between two rats. The BTBI technology allowed for a transfer of information from an “encoder” rat to a “decoder” rat, not through typical bodily interactions, but instead through intracortical microstimulation (ICMS).

"Rodent Mind Meld" (Via Wired)

The researchers conducted three experiments that demonstrated an artificial communication channel through which cortical sensorimotor signals, coding for a specific behavioral response, were recorded in the encoder rat and transmitted to the decoder rat. Once the decoder rat received these signals, they guided its behavioral choices. In the first experiment, a motor task, the encoder rat pressed one of two levers indicated by an LED light. This information was transferred via ICMS to the decoder rat, who would then choose the same lever without the help of the LED light. While the encoder rat performed better than the decoder rat, the decoder rat did perform correctly at levels significantly above chance. In the second experiment, a tactile discrimination task, the decoder rat again performed significantly better than chance. The encoder rats were trained to discriminate the size of an aperture with their whiskers: if the aperture was narrow, the rats would nose poke on the left, while if the aperture was wide, the rats would nose poke on the right. Encoder rats explored the aperture, nose poked to the right or the left, and then, again through ICMS, this information was sent to the decoder rat. The decoder rat would then also poke to the right or the left, but without any hint about the size of the aperture. Not only did researchers conduct this experiment with encoder and decoder rats residing in the same Duke laboratory, but impressively, the same tactile discrimination task was also completed with an encoder rat in Brazil and a decoder rat at Duke, showing the potential of long-distance BTBI technology.
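As an aside on what "significantly above chance" means in a two-choice task like this, the sketch below runs an exact binomial test on hypothetical trial counts; the numbers are made up for illustration and are not taken from the paper.

# Minimal sketch (hypothetical counts): testing whether a decoder rat's
# two-choice performance exceeds the 50% expected by chance.
from scipy import stats

n_trials = 100   # hypothetical number of decoder trials
n_correct = 64   # hypothetical number of correct lever or nose-poke choices

# One-sided exact binomial test against chance performance (p = 0.5)
result = stats.binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"hit rate = {n_correct / n_trials:.2f}, one-sided p = {result.pvalue:.4f}")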

Tuesday, October 22, 2013

Tibetan monastics and a neuroscientist: Some lessons learned and others taught

By Guest Contributor Brian Dias, PhD

Imagine your day starting out near the Northern Indian town of Dharamshala with thirty minutes of spiritual chanting and meditation among Tibetan Buddhist monastics. Now you follow that by spending the whole day teaching Neuroscience to these same monastics. “Bliss”, “introspection”, “questioning”, “challenging” and “why” are some of the words that may come to mind. They certainly did for me while I had the privilege of being a Neuroscience faculty member as part of the Emory Tibet Science Initiative (ETSI) this past summer in India. Other faculty members included Dr. Melvin Konner (Evolutionary Anthropology, Emory University), Dr. Ann Kruger (Developmental Psychologist, GSU) and Dr. Carol Worthman (Medical Anthropology, Emory University).

An audience with His Holiness The XIV Dalai Lama,
and teaching monastics in Dharamshala, India.
I intend to use this blog post to shed light on the intersection of Buddhist philosophy and western science as seen through my fifteen days with the monastics (a term used to include both monks and nuns). Started in 2007 with the blessing of His Holiness The XIV Dalai Lama, the ETSI has been administered by The Library of Tibetan Works and Archives and by Geshe Lobsang Negi, who is a professor in the Department of Religion at Emory University. Over these years, the ETSI has been teaching Math, Physics, Neuroscience and Biology to cohorts of monastics from monasteries across India. This was the second class to graduate from the ETSI's five-year science curriculum. An immediate survey of the monastics revealed a skewed sex ratio: the class comprised 42 monks and only 2 nuns. This inequality of representation is being slowly but surely remedied, with the first group of nuns sitting for the Geshe exams that will confer upon them the status of a Buddhist scholar equivalent to that of the male scholars.

Tuesday, October 15, 2013

Gearing Up for New Currents in Sports Enhancement


By Anjan Chatterjee, M.D., F.A.A.N

Anjan Chatterjee is a Professor of Neurology at the University of Pennsylvania. His clinical practice focuses on patients with cognitive disorders. His research focuses on spatial cognition and language, attention, neuroethics, and neuroaesthetics. He is President of the International Association of Empirical Aesthetics and the Chair of the Society for Behavioral and Cognitive Neurology. He is also a member of the AJOB Neuroscience editorial board.


Alex Rodriguez is the latest in a long list of superstar athletes embroiled in a doping scandal. Lance Armstrong, Tyson Gay, Barry Bonds, Marion Jones, and Mark McGwire, among many others, preceded him. Competition in sports is predicated on athletes following rules, rules that try to codify fairness. Some combination of natural talent and effort is rewarded. Each athlete strives, and may the best man or woman win.

Despite this ethos, doping scandals abound. Almost a third of the athletes responding to an anonymous survey about the 2011 World Track and Field competition admitted to using performance-enhancing drugs [1]. Such competitions showcase the allure of enhancements, an allure that is magnified by winner-take-all environments. Rewards in sports do not scale linearly. Incremental differences, especially at top levels, deliver disproportionate rewards. Being the twentieth fastest runner in the world may be an extraordinary personal achievement, but it does not garner fame and fortune. Regulation, monitoring, and the possibility of public shame presumably restrain the desire to win by any means necessary when those means include breaking rules.

Tuesday, October 8, 2013

Consciousness and Ethical Pain

Imagine you find that a beloved uncle has received a terrible injury that leaves him paralyzed, but still totally aware of his environment - a condition known as locked-in syndrome. Now imagine that a doctor comes to you with a miracle cure: a new experimental treatment will repair your uncle's damaged brainstem, allowing him to regain control of his body. The catch, however, is that this procedure is extremely painful. It actually seems like it might be the most painful experience possible: fMRI scans reveal that all the brain regions that are active during extreme pain are activated during this (imaginary) procedure. And it lasts for hours. However, your uncle won't complain about the procedure because 1) he's paralyzed and thus can't voice his suffering, and 2) the experience of this miracle treatment will mercifully be forgotten once the procedure is over, so your uncle won't raise any complaint afterwards. While many of us would probably sign off on the procedure, we might still feel guilty as we imagine what it must be like to go through that, even if our uncle wouldn't recall it later.

The neural 'signature' of pain, as seen through fMRI [8].  Image from here.

This scenario is meant to illustrate that there seems to be an aspect of the moral weight of pain - its significance in ethical discussion and decision making and guilt - that has to do specifically with what pain feels like. Not the way it makes us act, not the danger it represents, but that first person, qualitative, subjective experience of being in pain, or suffering through pain. The ability to have such qualitative, subjective experiences is called qualitative (or sometimes phenomenal) consciousness. We tend to assume that most humans are conscious, and that this is the primary reason why hurting them is wrong- indirect selfish reasons (like avoiding jail time or losing them as a friend and ally) are seen as being secondary to this primary fact: the evil of pain[1].

For this reason, discussions of pain taking place in unfamiliar creatures (which I'm using to refer to anything that isn't able to explicitly tell you how it feels - including humans with certain neurological conditions, as well as almost all non-human animals, and perhaps even stranger entities) are often intimately tied to the possibility of that creature being conscious. This occurs, for instance, when deciding whether a patient with Unresponsive Wakefulness Syndrome (formerly called vegetative state) should receive analgesia[2,3], or when debating the necessary precautions that should be taken when fishing or slaughtering chickens. If it can be demonstrated that something doesn't meet our requirements for consciousness, suddenly we have free rein to treat that thing as more of an object than a person[4]. If consciousness is suspected, on the other hand, we become much more cautious with our treatment of the entity.

Tuesday, October 1, 2013

2013 Neuroethics Scholar, Jen Sarrett: Autism and the Communication of 'Risk'

The Neuroethics Scholar Program

By Jennifer C. Sarrett
This project is done through The Neuroethics Scholar Program.

As defined in the new Diagnostic and Statistical Manual of Mental Disorders (DSM-5), Autism Spectrum Disorder (ASD) is diagnosed in individuals who show differences in social communication—such as a reliance on non-verbal communication techniques or difficulties interpreting social signals—and specific behavioral patterns—such as repetitive vocal or motor behaviors or intense interests in specific items. These characteristics must be evident before the age of 3 in order to qualify for an ASD diagnosis [1]. While there are diagnostic tools that are able to reliably diagnose ASD by the age of 2, most reports show that, on average, children are not diagnosed until school age. These rates vary by several factors, including race and urbanicity [2]. For many professionals, this delay in diagnosis is concerning because, as the Centers for Disease Control and Prevention [3] and most other autism professionals stress, early identification and diagnosis leads to early intervention, which leads to better outcomes for many children and families.

Finding innovative ways to identify autism earlier in life is a goal for many researchers, including a group of professionals at the Marcus Autism Center [4]. Marcus is a well-known research and clinical facility in Atlanta that is affiliated with Emory University, Children’s Healthcare of Atlanta, and Autism Speaks. In 2010, renowned autism researcher Dr. Ami Klin became the new director of Marcus and brought much of his research and clinical team from the Yale Child Study Center along with him. This team has been working on cutting-edge technologies to identify autism-associated traits in infants and toddlers. In particular, they are using eye-tracking technologies and vocalization software because, as their previous research has shown, very young children who are eventually diagnosed with autism show differences in their looking and vocalization patterns.

Tuesday, September 24, 2013

Intelligence Testing: Accurate or Extremely Biased?

By Emily Young

In the early 1900s, psychologist Charles Spearman noticed that children who did well in one subject in school were likely to do well in other subjects as well, and those who did poorly in one subject were likely to do poorly across all subjects. He concluded that there is a common factor, g, that correlates with performance across different tests (Spearman 1904). The g factor captures the variance in testing performance that is shared across tests (the tendency for an individual's scores to rise or fall together) and is sometimes called “general intelligence”.

Later on, psychologist Raymond Cattell determined that there are two subsets of g, called fluid intelligence (denoted Gf) and crystallized intelligence (denoted Gc). Fluid intelligence is defined as abstract reasoning or logic; it is an individual’s ability to solve a novel problem or puzzle. Crystallized intelligence is more knowledge-based and is defined as the ability to use one’s learned skills, knowledge, and experience (Cattell 1987). It is important to note that while crystallized intelligence relies on knowledge, it is not a measure of knowledge but rather a measure of the ability to use one’s knowledge.
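To give a concrete, if simplified, sense of where g comes from, the Python sketch below simulates test scores that share one latent ability and extracts the first principal component as a rough stand-in for the common factor; the scores, loadings, and method are illustrative assumptions, not Spearman's or Cattell's actual procedures.

# Minimal sketch (simulated scores): the positive correlations Spearman noticed
# across subjects, and a first principal component as a crude proxy for g.
import numpy as np

rng = np.random.default_rng(1)
n_students = 200

# Scores driven partly by one shared latent ability plus test-specific noise
ability = rng.normal(size=n_students)
scores = np.column_stack([
    0.8 * ability + rng.normal(scale=0.6, size=n_students),  # "math"
    0.7 * ability + rng.normal(scale=0.7, size=n_students),  # "reading"
    0.6 * ability + rng.normal(scale=0.8, size=n_students),  # "spatial"
])

# All pairwise correlations come out positive, as Spearman observed in school subjects
print(np.round(np.corrcoef(scores, rowvar=False), 2))

# First principal component of the standardized scores as a rough g estimate
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
pc1 = eigvecs[:, -1] * np.sign(eigvecs[:, -1].sum())  # largest eigenvalue, consistent sign
g_estimate = z @ pc1
print(f"correlation with the simulated ability: {np.corrcoef(g_estimate, ability)[0, 1]:.2f}")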

Tuesday, September 17, 2013

Neuroscience and Philosophy of Mind: The Relevance for Neuroethics

By Rabbi Ira Bedzow, MA

Rabbi Ira Bedzow is a 2013 recipient of the Emory Center for Ethics Neuroethics Travel Award. He is the project director for Moral Education research project for the TAG Institute, and is currently pursuing his PhD in Religion at Emory University.

Philosophy of mind examines the nature of the mind, mental functions, and consciousness, and their relationship to the body, i.e. the brain. Most contemporary philosophers of mind adopt a physicalist position, meaning that the mind is not something separate from the body. Nevertheless, they disagree as to whether mental states could eventually be explained by physical descriptions (reductionism) or whether they will always require their own vocabulary (non-reductionism). With the sophistication of neuroscience and the predominance of the physicalist position, it may seem that philosophy of mind is losing relevance, not only for those who are reductionist in their view of the relationship between the mind and the brain but for non-reductionists as well. For example, William Vallicella recently answered the question, "Is Philosophy of Mind Relevant to the Practice of Neuroscience?" in the following way:
Off the top of my 'head,' it seems to me that […] it should make no difference at all to the practicing neuroscientist what philosophy of mind he accepts.
Vallicella’s answer betrays an inclination towards neurocentrism (the view that human behavior can best be explained by looking solely or primarily at the brain). According to Vallicella, neuroscience would not be affected by any philosophy of mind, since one can always find a way to make philosophical premises correspond to biological findings (with a little work). Relevance is equated with correspondence, and philosophy of mind must fit into the findings of neuroscience. If it doesn't, there is no contradiction; rather, philosophy becomes irrelevant.

Tuesday, September 10, 2013

The Drug Made Me Do It: An Examination of the Prozac Defense

The plot of a recent Hollywood thriller, Side Effects, revolves around many pressing legal and ethical questions surrounding the use of anti-depressant medications. The movie explores the life of a supposedly depressed woman—Emily Taylor—who seeks treatment from her psychiatrist. Emily’s doctor prescribes her an anti-depressant—Ablixa. Emily then proceeds to murder her husband in cold blood while under the influence of the drug. The movie seeks to explore the culpability of this depressed woman in a legal sense. During the trial, the psychiatrist argues that neither he nor Emily Taylor is responsible; rather, Emily Taylor was simply “a hopeless victim of circumstance and biology.” Is it possible that a drug could be responsible for one’s actions as argued by the psychiatrist in the movie? The answer is not clear. Nonetheless, the possibility that someone could escape criminal punishment due to a certain anti-depressant represents a serious ethical quandary that should be examined.

Tuesday, September 3, 2013

The Effect of Theoretical Ethics on Actual Behavior: Implications for Neuroethics

Neil Levy
By Neil Levy, PhD

Neil Levy is the Deputy Director of the Oxford Centre for Neuroethics, Head of Neuroethics at Florey Neuroscience Institutes, University of Melbourne, and a member of the AJOB Neuroscience Editorial Board. His research examines moral responsibility and free will.

Might doing ethics be harmful to your moral health? One would expect just the opposite: the deeper you think about ethics, the more you read and the larger the number of cases you consider, the more expertise you acquire. Bioethicists and neuroethicists are moral experts, one might think. That’s why it is appropriate for media organizations to ask us for our opinion, or for hospitals and research institutions to ask us to serve on institutional review boards.

Tuesday, August 27, 2013

Report from the Society for Disability Studies: Bringing Ethics, Bioethics, and Disability Studies Together

By Jennifer C. Sarrett, MEd, MA

Jennifer Sarrett is a 2013 recipient of the Emory Center for Ethics Neuroethics Travel Award. She is also a doctoral candidate at Emory University’s Graduate Institute of Liberal Arts working on her dissertation, which compares parental and professional experiences of autism in Atlanta, GA and Kerala, India, as well as the ethical issues that arise when engaging in international, autism-related work.

From June 26 - 29, 2013, the Society for Disability Studies (SDS) held its annual conference in Orlando, Florida. SDS is the primary scholarly association for the field of Disability Studies, an academic field exploring the meanings and implications of normativity, disability, and community. As with other identity-based fields of study, including Women’s Studies, Queer Studies, and African-American Studies, the Society for Disability Studies thinks about difference and works to expose and eradicate stigma and inequality related to people who identify as disabled. This particular field of identity-based work is closely related to Bio- and Neuroethics, as differences in minds and bodies present medical and scientific concerns to physicians, researchers, and scholars.

At SDS this year, I presented a paper titled “The Ethics of Studying Autism Across Cultures,” which is based on my research fieldwork. My dissertation looks at how culture influences parental and professional experiences of autism in Atlanta, GA and Kerala, India with the aim of developing guidelines for future scholars, interventionists, or advocates embarking on international work on autism and related disabilities. Because of many of the ethical issues I came across in my studies and research, my work extends to thinking about autism within current models of human rights and critically examining contemporary and historical ways of talking about and treating people on the autism spectrum. 

Tuesday, August 20, 2013

Perceptions of Animals

Dr. Frans de Waal
By Frans de Waal, Ph.D.

Frans de Waal is the Charles Howard Candler Professor of Primate Behavior at Emory University and the Director of the Living Links Center at the Yerkes National Primate Research Center. He is also a member of the United States National Academy of Sciences and the Royal Netherlands Academy of Sciences and a member of the AJOB Neuroscience editorial board. His research focuses on primate social behavior, including conflict resolution, cooperation, inequality aversion, and food-sharing. 

At a recent workshop on "Beastly Morality" (April 5, 2013, Emory Ethics Center), which drew participants from all over the country, I asked an innocent question. We had about sixty scholars presenting or listening to academic papers on the human-animal relationship or the place of animals in literature, and I asked how many of them worked with animals on a daily basis. The answer: no one.

It was a naive question, because if these same academics did work with animals on a daily basis, they would probably be writing on something totally different, such as the behavior of animals, their treatment by us, or their intelligence. That's what I do, being a scientist. We rarely write about anything that cannot be observed or measured, and so we assume it must be the same for everybody else. But if one's focus is how Thomas Aquinas viewed animals, the definition of personhood, or the moral status of animals in Medieval Japan -- all of which were topics at the workshop -- first-hand knowledge of animals is hardly required.

Tuesday, August 13, 2013

(Hypothetical) Crimes Against Neural Art

We would expect that if there were any moral outrage to be had over the treatment of cultured neural tissue, it would occur in an art gallery. Something about an art gallery sensitizes us to the well-being of critters we might not usually care about - as in the case of Garnet Hertz's Cockroach Controlled Mobile Robot (a three-wheeled robot about half the size of R2D2, driven by a Madagascar hissing cockroach) - and to cry out over events that we might otherwise willfully ignore or even accept as routine - as in Guillermo Vargas's infamous “You Are What You Read” (where a starving dog was taken off the street and brought into a gallery) [1]. Instead, when neural tissue is given a robotic body and placed on display (sometimes remotely) in an art gallery, most responses seem to focus on the ambiguous nature of the works. Artist Stephane Dumas wrote, referring to MEArt (a drawing robot controlled by a culture of rat brain cells), that “the public can experience the drawing activity and at the same time sense the presence of its remote initiator, the brain [2],” implying a felt mental presence associated with the biological components of the work. However, Dr. Stuart Bunt, one of the scientists who worked on Fish and Chips (a precursor to MEArt that used tissue taken from fish rather than rats), wrote that “many viewers of Fish and Chips embodied it with impossible sentience and feared it unnecessarily [3],” indicating that the attributed mental life (and the implied moral obligations towards it) was an illusion constructed by the framing of the piece. This contradiction between the audience's and the creators' interpretations of these pieces is reflected in Dumas's assertion that embodied neural bioart (here referring to Silent Barrage, which featured a distributed robotic body that audience members could walk through) “is a work in progress that raises more questions about the relationship between neural mechanisms and creative consciousness than it answers [2].” This ambiguity is even praised by artist Paul Vanouse, who states that “MEArt's creators have cleverly designed their thought-provoking apparatus to maximize cognitive dissonance [4],” while Emma McRae describes MEArt as an example of one of “an infinite multiplicity of agencies [5]” that don't fit into well-established categories, and which humans must learn to share the world with [6].

Tuesday, August 6, 2013

Intervening in the brain: with what benefit?

By Hannah Maslen, DPhil and Julian Savulescu, PhD

Hannah Maslen is based at the Oxford Martin School, University of Oxford

Julian Savulescu is Uehiro Professor of Practical Ethics at the University of Oxford, Fellow of St Cross College, Oxford and the Director of the Oxford Uehiro Centre for Practical Ethics. He is also a member of the AJOB Neuroscience editorial board.

Novel neurotechnologies
Last week, the Nuffield Council on Bioethics released its report entitled Novel neurotechnologies: intervening in the brain. The aim of the report is to provide a reflective assessment of the ethical and social issues raised by the development and use of new brain intervention technologies. The technologies that the report examines include transcranial brain stimulation, deep brain stimulation, brain-computer interfaces and neural stem cell therapies. Having constructed and defended an ethical framework for navigating the ethical and social concerns raised by novel neurotechnologies, the report proceeds to discuss 1) the care of the patients and participants undergoing interventions, 2) what makes research and innovation in neurotechnologies responsible research and innovation, and 3) how novel neurotechnologies should be regulated.

Tuesday, July 30, 2013

In Sickness and in Health - What Jewish Law Can Say about Psychology and Psychiatry

By Rabbi Ira Bedzow, MA


Rabbi Ira Bedzow
Rabbi Ira Bedzow is a 2013 recipient of the Emory Center for Ethics Neuroethics Travel Award. He is the project director for Moral Education research project for the TAG Institute, and is currently pursuing his PhD in Religion at Emory University.

While it is obvious that the term "insanity" expresses the value judgments of a society's legal system, psychology and psychiatry also accept social mores as a guideline for determining mental illness and health, even when their practitioners deny doing so. For example, according to the DSM-5, mental illness is diagnosed by dysfunctional behavior (though some psychiatrists are pushing for a biological categorization of mental illness), and diagnosis thus assumes social or cultural norms by which to interpret behavior in order to determine whether it is dysfunctional or not. Because insanity and mental illness are both predicated on social norms, they are by definition determined by society's ethical posture.

Tuesday, July 23, 2013

About the Physiological Society of Japan Ethics Symposium

By Tamami Fukushi, Ph.D

Tamami Fukushi is a Senior Research Scientist at the Platform for the Realization of Regenerative Medicine at the Foundation for Biomedical Research and Innovation in Kobe, Japan and a member of the AJOB Neuroscience editorial board. Her research focuses on areas in neuroethics, neurophysiology, and the regulation and ethics of stem cell research.

At every annual meeting since 2003, the Physiological Society of Japan has scheduled a research ethics symposium, usually dealing with animal experiments and research misconduct. One purpose of the symposia has been to raise audience awareness of current ethical issues in neuroscience research. In addition, the symposia have sought to encourage the audience to take action on ethical practices in their daily research activities.

This year, the society took up ethical issues in neuroscience. The symposium was organized by Dr. Kiyoshi Kurata, the society’s Chief of Research Ethics Committee, and Dr. Atsushi Iriki, the Editor-in-Chief of Neuroscience Research, which is published as the official journal of the Japan Neuroscience Society.

Tuesday, July 16, 2013

Robots: the Answer for Treating Children with Autism Spectrum Disorder?

By Guest Contributor Irina Lucaciu, Emory University  

A smile appears on Jack’s face as the robot he is playing with congratulates him for accomplishing a task. Aiden seems captivated by the moving arms of Nao, a robot that has become his new playmate. Thousands of miles away, in London, a copy of Nao sits in the middle of a circle of five boys no more than 10 years old, encouraging them to imitate his movements, touch his hands, and try to identify the feelings he is describing.

Nao
When asked how the robot makes him feel and why, one of the boys replies that he is happy because the robot feels happy too.

However, Nao and the other robots are not simply toys, and neither are Jack, Aiden, and the five British boys simply children at play. The boys have autism spectrum disorder, and Nao is acting as a treatment tool for improving their life experience and helping them develop socially relevant skills. The scenes described above are the results of robot-assisted therapy [1, 3, 9, 13].

What is autism spectrum disorder, and why would the use of robots benefit those who have it?

Tuesday, July 9, 2013

We’re All Mad Here

In the early 1970s, eight people checked themselves into psychiatric hospitals throughout the United States, complaining of hearing voices. They were all admitted, and during their hospitalizations they exhibited no unusual behavior and claimed to no longer be experiencing auditory hallucinations. After stays of between 7 and 52 days in the institutions, the patients were discharged and given diagnoses of either schizophrenia or bipolar disorder. None of these people had any mental illness; they had, in fact, falsified their symptoms as part of an experiment conducted by psychologist David Rosenhan (who was himself one of the “pseudopatients”).

The results of the study were published in a 1973 paper in Science titled “On being sane in insane places”. In the paper, Rosenhan argues that it is difficult to distinguish between “normality” and “abnormality” when it comes to mental health, and that, once applied, the label of a psychiatric diagnosis can be so strong that all of an individual’s actions are viewed in light of that label, especially in a place like a psychiatric hospital where patients are assumed to be “insane”. The study was seen as an eye-opening commentary on the American mental health system and also criticized for its methodology and conclusions [1,2].


The founders of The Icarus Project believe that, just like Icarus' wings, "madness" can lift people to great heights but also send them falling to their doom.
Thirty years later, the study is still cited in debates about the science and ethics of psychiatric diagnoses and treatments, often by those critical of the field. One interesting and controversial voice active in this debate is the mad pride movement [3]. In my previous post, I discussed the neurodiversity movement’s views on autism. Mad pride (which has recently been discussed on this blog) takes a similar approach to issues of mental health. Like neurodiversity (and most movements and ideologies in general), mad pride encompasses a wide variety of beliefs and causes, but the primary one is to give a voice to people living with mental illness (although some in the movement dislike that term [4]) in the hopes of educating the public, creating patient-run communities and support networks, and pushing for reform in mental health systems.

Tuesday, July 2, 2013

A Creative Balance

When it comes to creativity, one might most readily think of children. The young, innocent imagination is a great conduit for idea generation. Or perhaps the term calls to mind an image of a prolific artist who could be well described by the term "eccentric." Generally speaking though, the thought of creativity may not be immediately associated with mental illness.

However, there are mental illnesses that may lend themselves to greater creativity, and some have believed that there are medicines which reduce creativity. Does our chemical attempt to monitor distraction and promote productive behavior tamper with who we are at an essential level? Do the aspiring artsy-fartsy need to have some form of ADHD, bipolar disorder, psychosis, frontotemporal dementia, temporal lobe epilepsy, or depression in order to succeed? Are these types of illnesses part of our personhood makeup, and do they consequently deserve to be embraced instead of adjusted? Perhaps a fine line can be drawn outlining what types of actions should be taken regarding mental health, creativity, and the value we place on humans in general.

Tuesday, June 25, 2013

Legal Pains

When a friend accidentally burns themselves on a stove-top, their pain is usually very obvious - cursing, gesturing wildly, and even the explicit verbal pronouncement of "I am in pain." It's also very clear from this display that their pain is viewed as a "bad" thing - they want it to stop, they will be more vigilant in the future to prevent it from happening again, and they very likely either want or even expect you to help out in these endeavors. Pain, while being a survival-affirming biological phenomenon, is (at least in this simple case) also inherently ethical.

We can then imagine that this same friend's nervous system might be altered (whether through mutation, injury, or pharmacological manipulation) to prevent them from feeling pain. While we might initially be shocked at such a turn of events, we could be convinced of such a change if our friend stopped responding to usually painful stimuli (such as our villain the stove-top) with the same clear "pained" reaction. So changing the nervous system clearly changes whether or not the individual can feel pain, leading us to believe that pain is something the brain does.

Pain is something the brain does- but how do we know when it is doing it?  Image from here.  
However, in addition to our verbal friend, there are an untold number of animals (including other types of humans), plants, virions, rocks, alien beings, artificial intelligences, puppets, and tissue cultures whose internal states aren't as readily apparent. Whether it is that they can't talk (as in the case of the bulk of that list) or we suspect that their speech might not be truthful (as in the case of artificial intelligences or puppets), we must rely on other, perhaps subtler methods of inferring their internal state. Holding off on the very important question of how we should evaluate a creature's pain, let us instead focus on how we currently determine whether a strange being is in pain or not. To simplify the problem significantly, let's focus on how this process occurs in the present-day United States. While even in this limited scope we see substantial debate over what counts as pain in non-human animals, there is at least one place where some sort of official, if temporary, compromise is forced into existence: animal cruelty law. Can US animal cruelty laws provide a formal definition for how Americans think of pain?

Tuesday, June 18, 2013

Can Human Brain Tissue Make Mice Smarter? Emory Neuroethics Journal Club Review

What makes humans smart? This was the primary question posed in the final Journal Club of the Spring 2013 semester. Led by Riley Zeller-Townson, the club discussed Han et al. (2013), a paper describing the enhancement of learning in mice after grafting human glial progenitor cells into their brains. Riley began by explaining the paper and the work leading up to it. Most of the roles of glial cells involve supporting and protecting neurons, such as contributing to synaptic plasticity, myelinating axons, and maintaining the blood-brain barrier (Barres, 2003). This study focuses on one subtype of glia, called astrocytes, cells that provide nutrients to neurons (Tsacopoulos et al., 1996).

Neurons (shown on left) possess both axons and dendrites and are shaped differently than glial cells (Source).  The glial cell shown on the right is an astrocyte, which is more “star” shaped due to its many branched processes (Han et al 2013).
While people generally think of neurons as being the important type of brain cell, research is beginning to show that the merits of glial cells were previously underestimated. Interestingly, post-mortem analysis of Albert Einstein’s brain showed that he had more glia than the average person (Diamond et al., 1985). In the same vein, previous studies have shown that primate glia are larger, more complex, and faster than those of mice (Colombo, 1996; Oberheim et al., 2009). Therefore, is it possible that glial cells are the root of intelligence?

Tuesday, June 11, 2013

How do neuroscientists integrate their knowledge of the brain with their religious and spiritual beliefs?

By Kim Lang
Graduate Student, Neuroscience
Emory University 
This post was written as part of the Contemporary Issues in Neuroethics course 


As scientists, we’re generally a skeptical bunch (I’ll leave speculation of whether that is a cause and/or effect of a career in science for the Comments section).  While 95% of the American public believe in a deity or higher power (83% believe in God and 12% believe in a higher power) [1], only 51% of surveyed scientists believe the same (33% believe in God and 18% believe in a universal spirit or higher power) (Figure 1). [2]


According to surveys, this discrepancy is nothing new. In 1914, sociologist James H. Leuba found that 42% of polled US scientists believed in God while 58% did not. [1,3] In 1996, Larry Witham and Edward Larson repeated Leuba’s survey and found that 40% of scientists believe in a personal God while 45% do not [4]. While the wording of the questions can be critiqued [3], the overall trend remains and is fairly constant across different scientific fields. According to the 2009 Pew Research Center survey, 51% of scientists in biological and medical fields believe in God or a higher power, as well as 55% of those in chemistry, 50% of those in geosciences, and 43% of those in physics and astronomy (Figure 2). [2]


As informative as this survey is, those are frustratingly wide categories of science. With so much research into the neurological mechanisms of religious experiences and discussion about whether the brain is wired to “produce” or “perceive” God [5], I’m curious to know how the neuroscience community would respond to this survey (being part of the neuroscience community also biases my curiosity a bit). I’d also like to know how those neuroscientists that do believe in God or a higher power integrate this belief with their neuroscience knowledge (the atheist view seems pretty self-explanatory).

Friday, June 7, 2013

International Neuroethics Society Meeting on Nov 7-8, 2013 in San Diego!

The International Neuroethics Society announces its 5th Annual Meeting (a satellite of the Society for Neuroscience Meeting), November 7 & 8, in San Diego.



 Abstracts are due June 15, 2013.  For more information and the program see here.

 Listen to INS Member Molly Crockett cordially invite you here.

Bring your friends and family to the open-to-the-public program November 7 on Neurogaming: What’s Neuroscience and Ethics Got to Do with It? Register for the meeting on November 8 here.

The speaker lineup includes Barbara Sahakian & John Pickard, University of Cambridge; Julian Savulescu, University of Oxford; Patricia Churchland, University of California-San Diego; Molly Crockett, University of Zurich; Jens Clausen, University of Tübingen; Lisa Claydon, Bristol Law School, University of the West of England; Joe Fins & Niko Schiff, Weill Cornell Medical College; Holly Moore, Columbia University, New York State Psychiatric Institute; Mauricio Delgado, Rutgers University; Catherine Sebastian, Royal Holloway, University of London; J. David Jentsch, University of California-Los Angeles; and the Honorable Robert Trentacosta, Presiding Judge of San Diego Superior Court. See you in San Diego!

Tuesday, June 4, 2013

NEW OPENING! Graduate Editorial Intern for The American Journal of Bioethics Neuroscience

A unique opportunity for graduate students to get high-level editorial experience for the premier neuroethics journal and the official journal of the International Neuroethics Society. Interns will have access to an international community of renowned neuroethics scholars and innovation in neuroethics scholarship. 

As editorial intern, you will be responsible for attending biweekly editorial meetings and contributing intellectually to the editorial responsibilities of the journal; organizing and transcribing interviews of prominent neuroethicists for publication in the journal; publicizing the journal to the neuroscience community; and maintaining an internal organizational database. Innovation and initiative are valued, and there is some liberty to pursue projects of your own design within the scope of the journal’s mission. Work runs approximately 10-20 hours a week, depending on the editorial cycle.

Please contact jequeen@emory.edu for more information. 

Deadline for applications: June 28, 2013 

Eligibility: Must currently be a graduate student, from any discipline, with an interest in neuroethics and editorial work. Must be organized and capable of meeting deadlines. Web management skills an asset. Must be able to attend regular meetings at Emory University. 

How to apply: Send a 1-pg letter of interest, CV, and letter of recommendation to jequeen@emory.edu

American Journal of Bioethics Neuroscience
Emory University
1531 Dickey Drive
Atlanta, GA 30322
http://www.ajobneuroscience.com/


A Life With Others…In Your Head?

By Stepheni Uh
Undergraduate Neuroscience and Behavioral Biology Major
Emory University
This post was written as part of the Contemporary Issues in Neuroethics Course

Although decades have passed since the world first heard of Billy Milligan, his predicament and story still cause confusion and wonder. As the field of neuroscience expands, more light has been shed upon his condition: an extreme case of dissociative identity disorder (DID), formerly known as multiple personality disorder. Advancements in neuroscience (i.e., in research techniques) have led to the investigation of possible neurobiological correlates of the symptoms of DID – yet, due to the rarity of this disorder, a neurobiological basis for DID has not been established. Despite the lack of raw data, per se, neuroscience has fueled new perspectives regarding the nature of DID, such as those involving the ideas of culpability, personhood, and individuality.


Billy Milligan
Billy Milligan, whose birth name is William Stanley Milligan, had approximately 24 different personalities that fought to take over his body – Arthur, the intelligent Englishman; Philip, the Brooklyn criminal; David, the eight-year-old “keeper of pain”; Adalana, a lesbian; and everyone else, including the Teacher, who could fuse all of the personalities and help them develop [1,4]. Milligan was involved in robberies and other crimes before he was prosecuted for kidnapping and raping three women from the Ohio State University campus in October 1977 [4]. According to his psychiatric report, Adalana had taken over Milligan and raped the women out of her desire for affection. The other personalities, however, had no recollection of the incident [4]. Billy Milligan was eventually acquitted of his crimes by reason of insanity and sent to the Athens Mental Health Center to “recover.” Experts attempted to treat him by fusing all the personalities into one, which the Teacher had already established; they therefore attempted to have the Teacher take over his “consciousness,” something that had never happened before. Milligan was finally released in 1988 and then became free from supervision in 1991 [4]. As of today, no one knows what has happened to Billy Milligan, and many questions remain unanswered.

Tuesday, May 28, 2013

Let’s Put Our Heads Together and Think About This One: A Primer on Ethical Issues Surrounding Brain-to-Brain Interfacing

By John Trimper
Graduate Student, Psychology
Emory University
This post was written as part of the Contemporary Issues in Neuroethics course


Remember the precogs in Minority Report? The ones who could sync up their brains via the pale blue goo to see into the future?
The precogs from the movie Minority Report
Recent findings published in Scientific Reports (Pais-Vieira et al., 2013) suggest that the ability to sync up brains is no longer purely sci-fi fodder and has instead moved into the realm of laboratory reality. The relevant set of experiments, conducted primarily in the Nicolelis laboratory at Duke University, demonstrated that neural activity related to performance on a discrimination task could be recorded from one rat (“the encoder”) and transferred into a second rat’s brain (“the decoder”) via electrical stimulation. Provided the encoder rat was performing the task correctly, this brain-to-brain transfer of task-relevant information significantly enhanced the decoder’s ability to perform the task (see Figure 2 for a description of the task). That is, the decoder rat, which received no external cues as to which of two levers would provide a food reward, responded to the transferred signal as if it were a cue to choose the correct, food-rewarding lever. As a further proof of concept, the experimenters demonstrated that it wasn’t necessary for the rats to be hooked up to the same laboratory computer; in fact, it wasn’t even necessary for the rats to be on the same continent. Using the internet, the researchers were able to transfer information in real time from the brain of an encoder rat at Duke University to the brain of a decoder rat located in Brazil. Performance enhancements in this scenario were similar to those noted above (i.e., decoders chose the correct lever more often when brain-to-brain transfer was allowed).
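The logic of the paradigm is easy to sketch in code. Below is a minimal toy simulation (in Python) of the encoder/decoder setup: the rewarded lever is drawn at random on each trial, the encoder’s (usually correct) choice stands in for the recorded cortical activity, and the decoder’s choice is either at chance or biased toward the transmitted signal. Every function name and number here is my own illustrative assumption, not code or parameters from Pais-Vieira et al. (2013); the only point is to show why the decoder’s hit rate climbs above chance once the transfer is switched on.

```python
import random
from typing import Optional

# Toy simulation of the encoder/decoder lever task sketched above.
# All numbers (encoder accuracy, stimulation bias, trial count) are
# illustrative assumptions, not values from Pais-Vieira et al. (2013).

def encoder_choice(correct_lever: str, encoder_accuracy: float = 0.95) -> str:
    """The encoder rat performs the task; its (usually correct) choice
    stands in for the neural activity that gets recorded."""
    if random.random() < encoder_accuracy:
        return correct_lever
    return "left" if correct_lever == "right" else "right"

def decoder_choice(stim_signal: Optional[str], bias_strength: float = 0.6) -> str:
    """The decoder rat has no external cues. Without stimulation it chooses
    at chance; with stimulation its choice is biased toward the encoded lever."""
    if stim_signal is not None and random.random() < bias_strength:
        return stim_signal
    return random.choice(["left", "right"])

def run_session(n_trials: int = 10_000, transfer_on: bool = True) -> float:
    """Fraction of trials on which the decoder presses the rewarded lever."""
    hits = 0
    for _ in range(n_trials):
        rewarded = random.choice(["left", "right"])    # rewarded lever this trial
        encoded = encoder_choice(rewarded)             # encoder performs the task
        signal = encoded if transfer_on else None      # brain-to-brain stimulation (or none)
        if decoder_choice(signal) == rewarded:
            hits += 1
    return hits / n_trials

if __name__ == "__main__":
    print("decoder accuracy, transfer off:", run_session(transfer_on=False))  # ~0.50 (chance)
    print("decoder accuracy, transfer on: ", run_session(transfer_on=True))   # ~0.77 with these assumptions
```

With these assumed parameters the decoder lands near 77% correct instead of 50%, which mirrors the qualitative pattern described above: reliably better than chance, though far from perfect.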

Friday, May 24, 2013

Now Available! Neuroethics Journal Club Video Archives on YouTube

The Neuroethics Journal Club videos are now available on YouTube. Watch each discussion to learn about a variety of neuroethics issues, from treatments for pedophilia to neural plasticity in mice. In each video, one presenter introduces the article under discussion and then opens the floor to the audience.

Tuesday, May 21, 2013

The identification of risk for serious mental illnesses: Clinical and ethical challenges


By Elaine Walker, Ph.D., Sandy Goulding, M.P.H., M.A., Arthur Ryan, M.A., Carrie Holtzman, M.A., and Allison MacDonald, M.A.

Elaine Walker is Samuel Candler Dobbs Professor of Psychology and Neuroscience in the Department of Psychology at Emory University. She leads a research laboratory, funded by the National Institute of Mental Health, that studies risk factors for major mental illness. Her research focuses on child and adolescent development and the brain changes associated with adolescence.

The identification of risk factors for illness is receiving increased attention in all fields of medicine, especially cardiology, oncology, neurology and psychiatry.  There are three potential benefits to identifying risk factors. The first is to reduce morbidity by reducing risk exposure. The second is to enhance opportunities for targeting preventive treatment toward those who are most likely to benefit. Finally, the identification of risk factors can shed light on pathological mechanisms.

There are, of course, costs as well as benefits involved in this endeavor.  The benefits, in terms of reducing morbidity and mortality, are noncontroversial.  The costs, however, can be very controversial and they have generated discussion among ethicists. Foremost among the costs is the potential discomfort and distress that results from the identification of an individual as being at statistical risk for future illness.  There are also significant concerns about whether treatment should be initiated prior to the manifestation of symptoms that reach clinical threshold.  These concerns are especially salient in the field of psychiatry. In this post, we discuss current efforts to identify risk factors for serious mental illness and the ethical considerations they raise.

Tuesday, May 14, 2013

Dancing with the Devil

Hysteria usually calls to mind thoughts of the Salem Witch Trials and delirious frenzies from history. However, mass hysteria, or mass psychogenic illness, is not simply an improbable, incomprehensible madness of the past. It has occurred throughout history and into our current generation, taking form as dancing plagues, dissociative possession of nuns, and involuntary twitches of high school girls in New York. Is it something they all ate? Or maybe there is something in the water… How is it that anxiety manifests itself as a dance that spreads through populations?

Fear and distress terrorized populations in Medieval Europe and made them more prone to psychogenic illness.
Certainly it seems there must be more to the story than merely these common denominators, for fear, anguish, stress, and trauma are commonly faced and dealt with sans mass hysteria. But the other factors needed for the exact formula of mass hysteria are difficult to pinpoint.

Is it the perfect combination of despair, devastation, and distress that manifests itself as a psychosomatic reaction? Does it require a specific threshold of suggestion and susceptibility within our beliefs and cultural context? The panic and frenzy that overtook groups throughout history is a fascinating and frightening occurrence. Epidemics surged along the Rhine River, claiming hundreds of victims of the dancing plague [8]. This affliction of compulsive dancing ran rampant in regions where the population believed dancing to be some sort of sickness, or a curse that could be cast upon them. Once people formed the belief that they had caught the dancing disease, or had been cursed to dance, dance, dance, there was no stopping them. They would dance until their muscles were strained; they would even dance to their deaths. In 1374, a plague swept through Germany and France that drove thousands to dance in “agony for days or weeks, screaming of terrible visions and imploring priests and monks to save their souls.” In later outbreaks, people danced for as long as six months, some even dying after breaking “ribs or loins” [7].

Wednesday, May 8, 2013

Social and Physical Pain on Common Ground

By Guest Contributor Jacob Billings 
Neuroscience Graduate Student 
Emory University

Societal changes, when they occur, coincide with changing outlooks among the populace. Take, for example, the American Civil Rights movement of the 1960s. The motivation for economic and political enfranchisement of African Americans and women grew largely out of the changing identities of these groups during the mobilization of all of America’s resources in World War II. Notably, African Americans experienced naturally pleasant interactions with European whites during tours of duty in WWII [1]; upon returning to the US, they found it impossible to let American racism continue unchallenged. During that same period, women acquired expertise in a great variety of professions from which they had previously been excluded [2]. The expectation that women would return to a subordinate place in the household was immediately resisted.

In our modern age, the outlooks held by our friends and neighbors are being changed daily by new evidence from neuroscience. Using an arsenal of tools and techniques at colleges and hospitals around the world, including functional magnetic resonance imaging (fMRI) scanners that can peer into our brains as we think and dream [3], the science marshals each facet of lived experience in turn, holding fast to territory mapped onto the physical domains of the central nervous system. The ground acquired during this campaign is ground lost by ignorance and outmoded tradition.

How should our societies change as a result of new facets of evidence-based understanding, particularly when that evidence grounds lived experience directly to the material and functioning of our nervous systems?

Tuesday, April 30, 2013

A Social Account of Suffering

50,000 cultured brain cells sit in a petri dish. Through a combination of electronic sensors, software engineering, and robotic sculpture, the physiology of the cells interacts with the psychology of patrons at an art gallery [1]. From this transaction, judgments arise: the audience might report feelings of being watched, of play, or simply of remotely observing an oblivious 'seizure machine.' One particular type of audience member, the Animal Ethicist, might even wonder whether we should be worried that the culture of brain cells (as a former animal) might be in pain.

Brain cells, electrodes, and tiny Peter Singer (image from here). 
While in most cases it is fairly straightforward to determine that a human is in pain, when one starts to ask whether non-humans (or even humans with severe communication problems, such as locked-in patients) are in pain, it is common to turn to neuroscience for help. The idea is that while mental states (such as pain and suffering) can only be deduced from behavior if the behaviors are 'wired up correctly,' mental states are always (or non-contingently, to borrow the language of Dr. Martha Farah [2]) related to brain states. Thus, the tools of neuroscience can give us direct access to the amount of pain an organism is experiencing, bypassing a body that might hide this information from us (whether because of injury or because the body was never equipped with a human face). We are obligated to perform this scientific investigation, as we have an obligation to prevent pain and suffering.

Pain is something the brain does.  Nociception sends information about tissue damage (1) through the spinal cord, where such information can be modulated (2).  However, pain doesn't really become that nasty, unpleasant experience until it weasels its way into your limbic system (4).  Image retrieved from here.