Tuesday, July 28, 2015

Liberating brains from bodies by capturing them with brainets?

by Karen Rommelfanger

Miguel Nicolelis is dedicated to liberating the human brain from the physical constraints of a body.

Recently, brain-machine interface engineer extraordinaire Miguel Nicolelis connected nonhuman animal brains in a modern-day mind meld called the brainet. For those who don't already know him, Nicolelis is an innovator, dedicated to pushing the limits of what is possible with neurotechnology, and a media darling to boot.

One focus of Nicolelis' work has been developing neural prostheses whose function is mediated through wired or wirelessly transmitted electrical activity from arrays of electrodes implanted on the surfaces of nonhuman animal brains. One well-known experiment from the Nicolelis lab involved monkeys that learned to feed themselves marshmallows via a direct connection between electrodes implanted in their brains and a prosthetic arm. For extra flash, Nicolelis had a 12-lb monkey (based in a Duke laboratory) operate a 200-lb robot on a treadmill in Tokyo by transmitting its brain activity over an Internet connection. In a 2013 interview he waxes philosophical: “Our sense of self does not end at the end of the cells of our bodies, but it ends at the last layer of the electrons of the tool that we’re commanding with our brains.”
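To make the idea concrete, the heart of such a system is a decoder that turns recorded neural activity into movement commands. Below is a minimal, hypothetical sketch of that step, assuming a simple linear decoder and simulated spike counts; it illustrates the general decoding approach, not the Nicolelis lab's actual code or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 96-channel electrode array, 5,000 training samples
n_neurons, n_samples = 96, 5000

# A hidden "true" mapping from firing rates to 2-D arm velocity (vx, vy),
# used here only to simulate data
true_map = rng.normal(size=(n_neurons, 2))

# Simulated training set: spike counts per time bin and the noisy arm
# velocities they encode
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_map + rng.normal(scale=2.0, size=(n_samples, 2))

# "Train" the decoder: least-squares fit of a linear filter, in the
# spirit of classic population-decoding work
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Run" the decoder: each new window of spike counts becomes a movement
# command that could be streamed to a robotic limb
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
command = new_rates @ weights
print("decoded velocity command (vx, vy):", command.round(2))
```

Once brain activity has been reduced to a small vector like `command`, transmitting it over a network to a robot halfway around the world is, by comparison, the easy part.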


His work is also aimed at human applications. One recent media stunt involved a "mind-controlled robotic exoskeleton" donned by an individual paralyzed from the trunk down: 29-year-old Juliano Pinto kicked the ceremonial first ball at the 2014 World Cup via an electrode-studded cap on his head that transmitted recorded electrical activity from his brain to the robotic suit.

Tuesday, July 21, 2015

Bring back the asylum: A critical analysis of the call for a "return to 'modern' institutionalization methods"

By Cassandra Evans

Cassandra Evans is a Ph.D. student in Disability Studies at Stony Brook University. She studies mental disabilities and ethics surrounding treatment, services, and access for individuals with mental disabilities. She is currently examining the history of institutions in Suffolk County, Long Island (New York) and what shape the “way forward” from institutionalization will take in the new millennium.

This post is a shorter version of a talk Cassandra gave at the Society for Disability Studies’ national conference in Atlanta, Georgia, June 11, 2015.

In early June 2015, I visited Pilgrim Psychiatric Center in Brentwood, New York (Suffolk County, Long Island). As I drove onto the Pilgrim campus, I felt as if I could be entering any of the scores of other institutions around the country—the pictures I’ve seen all look so similar and convey the same eeriness: high-rise brick buildings with plain numbers on them, grass growing up all around, broken and barred windows, some areas with trash heaps on the grounds and graffiti on the walls. The names were different, but during their years of operation the treatments and results were similar—many individuals stayed longer than they ever wanted, many died, and few were “cured.”

This photo shows a brick high-rise institutional building with a gravel road leading away from its parking lot, green grass and fresh tire tracks nearby. Several cars are parked in front of the building at the bottom of this 10- or 12-story, double-winged ward. “Building 82” at Pilgrim Psychiatric Center in Brentwood, New York, is still home to many individuals with psychiatric disabilities. Though three out of four institutions in Suffolk County, Long Island, were closed and their residents deinstitutionalized, those with more severe disabilities, or who were older, ended up here.

Photo by Cassandra Evans

While there, I saw pictures and news clippings in the museum demonstrating that in the era when institutions were being built and filled—from the late 1800s and early 1900s until about the 1950s—the consensus was that these facilities and the treatments inside them were “state-of-the-art.” Text describing the 1938 LIFE photo essay by Alfred Eisenstaedt noted that the pictures were “showing the dark world of the insane and what scientists are doing to lead them back to the light of reason” (Long Island Psychiatric Museum, 2015). While that rhetoric was common then, I wonder whether similar ableist thinking, the same need to normalize, still prevails today, driving new calls to “bring back the asylum.”

It was a recent ethical argument on this topic in the Journal of the American Medical Association (JAMA) that prompted me to visit the Long Island Psychiatric Museum. The article, “Improving Long-term Psychiatric Care: Bring Back the Asylum,” by Sisti, Segal, and Emanuel (2015), made major waves in both the academic and lay literature. In it, the authors argue that because of “transinstitutionalization”—the failure of deinstitutionalization to guarantee appropriate placements for former residents of these institutions—the “way forward” for patients with severe psychiatric illness is a return to the asylum (Sisti et al., 2015).

Tuesday, July 14, 2015

The power of a name: Controversies and changes in defining mental illness

by Carlie Hoffman

The purposes of naming are to help categorize the world in which we live and to group similar things together. However, who decides which name is correct? Is a child who often cannot pay attention to his classwork “absent-minded,” or experiencing attention deficit hyperactivity disorder? Is a person whose moods often swing from one extreme to the other simply “moody,” or living with bipolar disorder? Naming a lived experience a “mental illness” can change the social realities of those who receive the diagnosis, not only altering self-perception but also influencing the perceptions, and triggering the biases, of others, often in a detrimental manner. So, who has the power to determine how such a label is assigned, and what happens if someone is given the wrong one?

The power affiliated with naming has caused the diagnosis of mental disorders to be fraught with controversy. Mental illnesses are defined by the Diagnostic and Statistical Manual of Mental Disorders (DSM), which has been deemed the “bible” of mental health. According to Dr. Thomas Insel, director of the National Institute of Mental Health (NIMH), the goals of the DSM are to create a common language for describing mental illness and to ensure that mental health care providers use the same terms in the same ways. Thus, when patients visit a psychiatrist in search of a name for the symptoms they are experiencing, that name is assigned with the aid of the DSM.

One controversy affecting the diagnosis of mental disorders is the growing concern with medicalization of the “normal” human experience. Medicalization is the process of defining select human experiences or conditions, typically ones that were once considered normal, as medical conditions warranting professional medical attention. Critics of medicalization, particularly the medicalization of experiences associated with cognitive and emotional function, suggest it can lead to over-diagnosis of mental disorders in individuals who are coping with stressors in a typical fashion [5, 11, 13]. A series of controversial changes made to the newest edition of the DSM, the DSM-5, has provided a foothold for those concerned with medicalization. The addition of premenstrual dysphoric disorder and the elimination of the bereavement exclusion from the criteria for major depressive disorder have increased apprehension that typical premenstrual mood and behavioral changes and the normal grieving process could be classified as mental disorders [7, 13, 14].

Tuesday, July 7, 2015

Charles Bonnet syndrome, musical ears, and normal hallucinations

by Jonah Queen

In a previous post on this blog, I wrote about the Mad Pride movement, which advocates for the rights of, and the end of stigma against, those diagnosed with psychiatric disorders. I discussed how the lack of a clear distinction between “normal” and “abnormal” psychology even leads some activists to think of these conditions as extreme emotional or sensory experiences rather than illnesses. Mad Pride advocates see a trend of increasing medicalization within psychiatry, arguing that feelings and behaviors are too readily classified as pathological. But this concern with over-medicalization is not unique to the Mad Pride movement; it is expressed by a wide range of individuals, including some within the mental health establishment. There is one area, however, where the field of mental health seems to be moving in the opposite direction: hallucinations. The DSM-5, which has been criticized for overly broad definitions of psychiatric disorders, restricts the diagnostic criteria for schizophrenia so that hearing voices, with no additional symptoms, is no longer sufficient for a diagnosis.

The cover of the report in which Charles Bonnet first described the condition which would be named after him (from demneuropsy.com.br)

This change is due to current research that shows hallucinations are not always a sign of psychosis and are also surprisingly common (according to some sources, ten percent of the population occasionally hears voices). Doctors, researchers, and patient advocacy groups are working to spread this knowledge and to overcome the belief among the general population that experiencing hallucinations makes someone “crazy.”

A hallucination is defined as a perceptual experience that does not come from an external source. Hallucinations can be caused by a variety of disorders (whether psychiatric, neurological, or somatic) and can occur in healthy individuals in response to psychotropic drugs (hallucinogens such as LSD and psilocybin) or stress (including sleep deprivation). But many other common phenomena can also be described as hallucinations, though they might not be thought of as such because of their mundane and unobtrusive nature. These include experiences many of us have had: hearing your name being called, hearing a phone ringing, or feeling your cellphone vibrating in your pocket, even though none of these things is actually occurring. Sometimes these experiences are caused by our brains misinterpreting real sounds. Hearing a “phantom” phone ring can happen in response to faint background noise in a frequency range similar to that of the ring, since it is a sound many of us are (even unconsciously) always listening for.

Tuesday, June 30, 2015

New neuro models for the interdisciplinary pursuit of understanding addiction

by Katie Givens Kime

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Katie Givens Kime is a doctoral student in Religion, with foci in practical theology, psychoanalysis, and neuroethics, and her research investigates the religious and spiritual aspects of addiction recovery methods.  

A few years ago, Owen Flanagan, a highly respected and accomplished philosopher at Duke University, surprised everyone when he stood up to speak at the Society for Philosophy and Psychology. A garden-variety academic presentation it was not. In “What Is It Like to Be an Addict?” Flanagan revealed to 150 of his esteemed colleagues that he had been addicted to various narcotics and to alcohol for many, many years. Not so long ago, every gruesome morning looked like this:

I would come to around 6:15 a.m., swearing that yesterday was the very last time...I’d pace, drink a cup of coffee, and try to hold to my terrified resolve.  But by 6:56—every time, failsafe, I’d be in my car, arriving at the BP station...at 7 a.m. sharp I’d gather my four or five 16-ounce bottles of Heineken, hold their cold wet balm to my breast, put them down on the counter only long enough to be scanned....I guzzled one beer in the car.  Car cranking, BP, a beer can’s gaseous earnestness—like Pavlov’s dogs, when these co-occur, Owen is off, juiced...the second beer was usually finished by the time I pulled back up to the house, the house on whose concrete porch I now spent most conscious, awake, time drinking, wanting to die.  But afraid to die.  When you’re dead you can’t use.  The desire to live was not winning the battle over death.  The overwhelming need – the pathological, unstoppable – need to use, was. (Flanagan, 2011, p. 77) 

Research on addiction is no small niche of medical science; it is an enormous enterprise. This seems appropriate, since addiction (including all types of substance abuse) is among the top public health crises in the industrialized West. The human suffering and the public (and private) expense wrought by addiction are immense. (See data here, here, and here.)

To that end, two accomplished researchers recently guest lectured here in Atlanta, representing a few dynamic edges of such research.  Dr. Mark Gold lectured for Emory University’s Psychiatry Grand Rounds on "Evolution of Addiction Neurobiology and Treatment Over the Past 40 Years,” and Dr. Chandra Sripada lectured for the Neurophilosophy Forum at Georgia State University on "Addiction, Fallibility, and Responsibility.”

Tuesday, June 23, 2015

Selfhood and ethics: Who am I and why does it matter?

by Keenan Davis

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Keenan is a graduate student in Bioethics, whose work focuses on the use of virtue ethics and natural law to evaluate novel biotechnologies. He will be pursuing a PhD in the Graduate Division of Religion in the fall.

What should I be doing with my life? Many approach this timeless question by first considering another: Who am I? For a wide range of thinkers from Plato to Dr. Phil, we can only know what to do with ourselves when we truly know ourselves. Who we are determines and constrains how we ought to behave. For example, because my parents caused me to exist, I should behave toward them with a level of gratitude and love. Perhaps through this cause-and-effect dynamic, as a result of being their son, I should treat them respectfully. We will return to this example at the conclusion of our exploration.

Historically, the question of selfhood was assessed in terms of an afterlife, with thinkers seeking to resolve what happens to us when we die. If, as Plato claimed, a person is nothing more than his soul, "a thing immortal," then he will survive physical death. Indeed, perhaps one should look forward to the separation of the soul from material constraints. How we ought to behave, then, is for the sake of existence after and beyond this world, a position shared by many adherents of the Abrahamic religions. On the other hand, if we are no more than our bodies, then we do not persist after death and have no reason to orient our behavior toward post-mortem expectations. Such is the position of Lucretius and the Epicureans, who conclude that our practical task is instead to flourish within a strictly material context. Our behavior should be for the sake of this world. For both Lucretius and Plato, the metaphysical substance of self is what mattered foremost.

John Locke
As part of the 17th-century Enlightenment, John Locke shifted the focus away from the substance of self and more explicitly addressed the issue of selfhood with an eye to its normative consequences. Specifically, he believed the self to be based entirely on memory and consciousness, regardless of the relationship between body and soul. By defining personhood as continuous self-identification through memory, Locke aimed to establish psychological criteria for moral agency and responsibility. Only if one is responsible for particular actions ought one be liable to judgment, reward, or punishment. Despite his emphasis on the psychological, as opposed to the biological or spiritual, Locke's definition of self still follows the cause-and-effect pattern of is then ought: who I am determines how I should behave.



Using thought experiments like the famous Ship of Theseus conundrum, philosopher Trenton Merricks of the University of Virginia undermines this line of thought by suggesting that there is no metaphysical answer to the question of who we are. There simply are no necessary and sufficient criteria—psychological, bodily, or otherwise—of identity over time for any object. Lest we take this conclusion too far, Merricks explains that it does not mean that persons and objects lack essential properties or evade description: "Among my essential properties are, I think, being a person and failing to be a cat or hatbox." His assessment just means that not all explanations or identifications involving characteristics need to be stated in terms of absolute proof. Allowing a modest concession to unavoidable skepticism, we need not (nor do we ever) demonstrate infallibly that "the tree in my yard today is the same tree that was in my yard yesterday" to warrant that belief. We can still be warranted in our beliefs regarding who we are without proving them absolutely certain.

Tuesday, June 16, 2015

Changing the Way We Think

by David Michaels

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. David is a student at Emory University working on his Master's degree in Bioethics. After completing his graduate studies he will be attending medical school in Texas.  

Have you ever wondered what it would be like to have the ability to read minds? If you're like me, you've daydreamed about possessing this superpower. It's easy to imagine all of the fascinating ways you could exploit this gift to your liking. But after a while this myopic perspective is turned on its head when we imagine our own thoughts being read. Quickly, almost instantaneously, we conclude with absolute certainty, "Nope, absolutely not. The power to read minds is a bad idea..." Some thoughts are probably best left alone in that mysterious, impenetrable fortress of privacy: our mind.

However, recent breakthroughs in neuroscience may challenge the notion that our mind is impervious to infiltration. Did you know that we may have the ability in the near future to record our dreams so that we can watch them later? Scientists have been working on developing technology that translates brain activity (measured in an fMRI machine) into visible images, allowing us to "see" our thoughts. Although this technology currently only utilizes real-time brain activity and cannot produce images from stored thoughts (i.e., memories), it nevertheless introduces the possibility that people will one day be able to "see" our thoughts, and maybe "read" them too.
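To give a feel for how "seeing" thoughts works in principle, here is a small, hypothetical sketch: it simulates voxel responses to images with an invented linear encoding model, learns a ridge-regression decoder from brain activity back to pixel space, and then "reconstructs" a new image from brain data alone. All sizes, noise levels, and the encoding model are made up for illustration; this shows the general decoding idea, not the method of any specific study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions: 500 voxels, 10x10-pixel "images," 800 training trials
n_voxels, n_pixels, n_trials = 500, 100, 800

# A hidden encoding model mapping images to voxel responses,
# used here only to simulate brain data
encoding = rng.normal(size=(n_pixels, n_voxels)) / np.sqrt(n_pixels)

# Simulated training set: images shown to a subject and the noisy
# voxel responses they evoke
images = rng.random((n_trials, n_pixels))
voxels = images @ encoding + rng.normal(scale=0.1, size=(n_trials, n_voxels))

# Learn the decoder: ridge regression from voxel space back to pixel space
lam = 1.0
gram = voxels.T @ voxels + lam * np.eye(n_voxels)
decoder = np.linalg.solve(gram, voxels.T @ images)

# "Read" a new brain volume by projecting it through the decoder
test_image = rng.random((1, n_pixels))
test_voxels = test_image @ encoding
reconstruction = test_voxels @ decoder

# How well does the reconstruction match what the subject "saw"?
r = np.corrcoef(test_image.ravel(), reconstruction.ravel())[0, 1]
print(f"pixelwise correlation: {r:.3f}")
```

The crucial limitation, as noted above, is that the decoder only works on brain activity evoked while a stimulus is being viewed; nothing in this pipeline reaches stored memories.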

This is just one of many controversies over emerging "neurotechnological lie detection" that Sarah Stoller and Dr. Paul Root Wolpe discuss in a 2007 paper. They explore the question of whether the government has the right to invade our minds in order to obtain evidence that can be used in a court of law. Neuroscience has, for the first time in history, allowed researchers to bypass the peripheral nervous system and gather data directly from the brain (Wolpe et al., 2005). Although Stoller and Wolpe focus on the legality of these technologies and whether they violate our Fifth Amendment rights, I want to explore whether adopting technologies that unveil the privacy of the mind will change the way we think and the way we live.

Tuesday, June 9, 2015

The Ambiguity of "Neurotheology" and its Developing Purpose

by Shaunesse' Jacobs

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Shaunesse' is a dual master's student in Theological Studies and Bioethics at Emory, and her research interests lie in end-of-life care and religious practices surrounding death and dying.

Are religion and spirituality authentic belief systems that have thrived for millennia because of their truth? Or are they simply constructs of the brain to help humanity cope with the unknown? With the advancement of science, can religion and science work together to understand humanity? What do religion and science have to say collectively that has not been said individually? These questions continue to be asked with each scientific advancement, and even more so now that neurotheology is beginning to develop as a sub-discipline of neuroscience. Neurotheology is generally classified as a branch of neuroscience seeking to understand how religious experience functions within the brain. The field has recently taken off and continues to grow thanks to the research of Andrew Newberg and Mark Robert Waldman, but its aims were first pursued by James Ashbrook.

For Ashbrook, the goal of neurotheology is to question "and explore theology from a neurological perspective, thus helping us to understand the human urge for religion and religious myths." The two definitions seem very similar, but one implies that neurotheology is subordinate to theology while the other presents it as subordinate to neuroscience. This ambiguity is muddled further by Newberg in his work Principles of Neurotheology, where he supports the notion that competing and open-ended definitions for terms such as “religion,” “theology,” “spirituality,” and “neuroscience” are acceptable. While promoting open-ended definitions, Newberg nevertheless offers starter definitions as a basis for terms in this emerging field: “religion” as a particular system of faith and worship; “theology” as the study of God and God’s relation to the world; “spirituality” as the search for independent or transcendent meaning; and “neuroscience” as the study of how the nervous system develops, its structure, and what it does.


Tuesday, June 2, 2015

23andMe: The Ethics of Genetic Testing for Neurodegenerative Diseases

by Liana Meffert

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Liana is a senior at Emory University majoring in Neuroscience and Behavioral Biology and Creative Writing (poetry). She is currently applying to Public Health graduate schools and considering a future in medicine. In her free time she enjoys running, reading, and her research on PTSD at Grady Memorial Hospital.
23andMe logo 

The face of genetic testing and counseling is in the midst of a major overhaul. Historically, a patient had to demonstrate several risk factors, including familial and medical health history or early symptoms, in order to be tested for the likelihood of developing a neurodegenerative disease. Now, with direct-to-consumer services like 23andMe, the public has, for the first time, unrestricted and unregulated access to estimates of the relative probability of developing certain neurodegenerative diseases.

So why is finding out you may develop a neurodegenerative disease in later years different from learning you’re at high risk for breast cancer? Neurodegenerative diseases are unique in that they essentially alter one’s concept of “self.” Being told you may succumb to cancer at some point in your life is a very different scenario from being told your memories will slowly deteriorate, or that the way you relate to your loved ones, or even the very things you enjoy, may change. For the first time in history, the potential for these drastic changes to your “future self” is available at the click of a button.

Tuesday, May 26, 2015

Disease or Diversity: Learning from Autism

by Jillybeth Burgado

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Jillybeth is a senior undergraduate double majoring in neuroscience and behavioral biology and religion. She hopes to pursue a PhD in neuroscience after working as a research assistant after graduation.

The idea that variation in behaviors arises through natural differences in our genome was popularized in the 1990s and termed “neurodiversity.” Led in large part by autism spectrum disorder (autism) activists, this movement challenged established notions of autism as a disease to be eradicated, championing instead the acceptance of a wide array of neural differences in the population. Rejecting terms such as “normal,” proponents of neurodiversity questioned the common messaging and goals of research organizations (arguing, e.g., that autism is not something that needs to be eradicated or “cured”). In this post, I briefly summarize the neuroethical concerns raised by ground-breaking neuroscience research, with particular focus on autism diagnostic research. I will then introduce a less well-known movement, Mad Pride, and discuss how we can apply concepts and lessons from the autism and neurodiversity movements to understand and evaluate the claims of those involved with Mad Pride.