Tuesday, June 30, 2015

New neuro models for the interdisciplinary pursuit of understanding addiction

by Katie Givens Kime

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Katie Givens Kime is a doctoral student in Religion, with foci in practical theology, psychoanalysis, and neuroethics, and her research investigates the religious and spiritual aspects of addiction recovery methods.  

A few years ago, a highly respected and accomplished philosopher at Duke University, Owen Flanagan, surprised everyone when he stood up to speak at a meeting of the Society for Philosophy and Psychology. A garden-variety academic presentation it was not. In "What Is It Like to Be an Addict?" Flanagan revealed to 150 of his esteemed colleagues that he had been addicted to various narcotics and to alcohol for many, many years. Not so long ago, every gruesome morning looked like this:

I would come to around 6:15 a.m., swearing that yesterday was the very last time...I’d pace, drink a cup of coffee, and try to hold to my terrified resolve.  But by 6:56—every time, failsafe, I’d be in my car, arriving at the BP station...at 7 a.m. sharp I’d gather my four or five 16-ounce bottles of Heineken, hold their cold wet balm to my breast, put them down on the counter only long enough to be scanned....I guzzled one beer in the car.  Car cranking, BP, a beer can’s gaseous earnestness—like Pavlov’s dogs, when these co-occur, Owen is off, juiced...the second beer was usually finished by the time I pulled back up to the house, the house on whose concrete porch I now spent most conscious, awake, time drinking, wanting to die.  But afraid to die.  When you’re dead you can’t use.  The desire to live was not winning the battle over death.  The overwhelming need – the pathological, unstoppable – need to use, was. (Flanagan, 2011, p. 77) 

Research on addiction is no small niche of medical science. It's an enormous enterprise. This seems appropriate, since addiction (including all types of substance abuse) is among the top public health crises in the industrialized West. The human suffering and the public (and private) expense wrought by addiction are immense.

Fittingly, two accomplished researchers recently guest lectured here in Atlanta, representing some of the dynamic edges of such research. Dr. Mark Gold lectured for Emory University's Psychiatry Grand Rounds on "Evolution of Addiction Neurobiology and Treatment Over the Past 40 Years," and Dr. Chandra Sripada lectured for the Neurophilosophy Forum at Georgia State University on "Addiction, Fallibility, and Responsibility."

Tuesday, June 23, 2015

Selfhood and ethics: Who am I and why does it matter?

by Keenan Davis

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Keenan is a graduate student in Bioethics, whose work focuses on the use of virtue ethics and natural law to evaluate novel biotechnologies. He will be pursuing a PhD in the Graduate Division of Religion in the fall.

What should I be doing with my life? Many approach this timeless question by first considering another: Who am I? For a wide range of thinkers from Plato to Dr. Phil, we can only know what to do with ourselves when we truly know ourselves. Who we are determines and constrains how we ought to behave. For example, because my parents caused me to exist, I should behave toward them with gratitude and love: as a result of being their son, I ought to treat them respectfully. We will return to this example at the conclusion of our exploration.

Historically, the question of selfhood was assessed in terms of an afterlife, with thinkers seeking to resolve what happens to us when we die. If, as Plato claimed, a person is nothing more than his soul, "a thing immortal," then he will survive physical death. Indeed, perhaps one should look forward to the separation of the soul from material constraints. How we ought to behave, then, is for the sake of existence after and beyond this world, a position shared by many adherents of the Abrahamic religions. On the other hand, if we are no more than our bodies, then we do not persist after death and have no reason to orient our behavior toward post-mortem expectations. Such is the position of Lucretius and the Epicureans, who conclude that our practical task is instead to flourish within a strictly material context. Our behavior should be for the sake of this world. For both Lucretius and Plato, the metaphysical substance of the self is what mattered foremost.

As part of the 17th-century Enlightenment, John Locke shifted attention away from the substance of the self and addressed the issue of selfhood more explicitly with an eye to its normative consequences. He believed the self to be based entirely on memory and consciousness, regardless of the relationship between body and soul. By defining personhood as continuous self-identification through memory, Locke aimed to establish psychological criteria for moral agency and responsibility: only if one is responsible for particular actions ought one be liable to judgment, reward, or punishment. Despite his emphasis on the psychological, as opposed to the biological or spiritual, Locke's definition of self still follows the cause-and-effect pattern of is then ought: who I am determines how I should behave.

Using thought experiments like the famous Ship of Theseus conundrum, philosopher Trenton Merricks of the University of Virginia undermines this line of thought by suggesting that there is no metaphysical answer to the question of who we are. There simply are no necessary and sufficient criteria—psychological, bodily, or otherwise—of identity over time for any object. Lest we take this conclusion too far, Merricks explains that it does not mean that persons and objects lack essential properties or evade description: "Among my essential properties are, I think, being a person and failing to be a cat or hatbox." His point is rather that not every explanation or identification needs to be stated in terms of absolute proof. Allowing a modest concession to unavoidable skepticism, we need not (nor do we ever) demonstrate infallibly that "the tree in my yard today is the same tree that was in my yard yesterday" in order to warrant that belief. We can still be warranted in our beliefs about who we are without proving them with absolute certainty.

Tuesday, June 16, 2015

Changing the Way We Think

by David Michaels

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. David is a student at Emory University working on his Master's degree in Bioethics. After completing his graduate studies he will be attending medical school in Texas.  

Have you ever wondered what it would be like to have the ability to read minds? If you're like me, you've daydreamed about possessing this superpower. It's easy to imagine all of the fascinating ways you could exploit this gift to your liking. But the fantasy is turned on its head the moment we imagine our own thoughts being read. Quickly, almost instantaneously, we conclude with absolute certainty: "Nope, absolutely not - the power to read minds is a bad idea..." Some thoughts are probably best left alone in the mysterious, impenetrable fortress of privacy that is our mind.

However, recent breakthroughs in neuroscience may challenge the notion that the mind is impervious to infiltration. Did you know that in the near future we may have the ability to record our dreams so that we can watch them later? Scientists have been working on technology that translates brain activity (measured in an fMRI machine) into visible images, allowing us to "see" our thoughts. Although this technology currently works only on real-time brain activity and cannot produce images from stored thoughts (i.e., memories), it nevertheless raises the possibility that people will one day be able to "see" our thoughts - and maybe "read" them, too.

This is just one of many controversies over emerging 'neurotechnological lie detection' that Sarah Stoller and Dr. Paul Root Wolpe discuss in a 2007 paper. They explore whether the government has the right to invade our minds in order to obtain evidence that can be used in a court of law. Neuroscience has, for the first time in history, allowed researchers to bypass the peripheral nervous system and gather data directly from the brain (Wolpe et al., 2005). Although Stoller and Wolpe focus on the legality of these technologies and whether they violate our Fifth Amendment rights, I want to explore whether adopting technologies that unveil the privacy of the mind will change the way we think and the way we live.

Tuesday, June 9, 2015

The Ambiguity of "Neurotheology" and Its Developing Purpose

by Shaunesse' Jacobs

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Shaunesse' is a dual masters student in Theological Studies and Bioethics at Emory and her research interests lie in end-of-life care and religious practices surrounding death and dying.

Are religion and spirituality authentic belief systems that have thrived for millennia because of their truth? Or are they simply constructs of the brain to help humanity cope with the unknown? With the advancement of science, can religion and science work together to understand humanity? What do religion and science have to say collectively that has not been said individually? These questions continue to be asked with each scientific advancement, and even more so now that neurotheology is beginning to develop as a sub-discipline of neuroscience. Neurotheology is generally classified as a branch of neuroscience seeking to understand how religious experience functions within the brain. The field has recently taken off and continues to grow thanks to the research of Andrew Newberg and Mark Robert Waldman, but its aims were first pursued by James Ashbrook.

For Ashbrook, the goal of neurotheology was "to question and explore theology from a neurological perspective, thus helping us to understand the human urge for religion and religious myths." This definition and the general classification above seem very similar, but one implies that neurotheology is subordinate to theology, while the other presents neurotheology as subordinate to neuroscience. The ambiguity becomes more muddled still in Newberg's Principles of Neurotheology, where he argues that competing, open-ended definitions of terms such as "religion," "theology," "spirituality," and "neuroscience" are acceptable. Even while promoting open-ended definitions, Newberg suggests starter definitions as a basis for the emerging field: "religion" as a particular system of faith and worship; "theology" as the study of God and God's relation to the world; "spirituality" as the search for independent or transcendent meaning; and "neuroscience" as the study of how the nervous system develops, its structure, and what it does.


Tuesday, June 2, 2015

23andMe: The Ethics of Genetic Testing for Neurodegenerative Diseases

by Liana Meffert

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Liana is a senior at Emory University majoring in Neuroscience and Behavioral Biology and Creative Writing (poetry). She is currently applying to Public Health graduate schools and considering a future in medicine. In her free time she enjoys running, reading, and her research on PTSD at Grady Memorial Hospital.

The face of genetic testing and counseling is in the midst of a major overhaul. Historically, a patient had to demonstrate several risk factors, such as family medical history or early symptoms, in order to be tested for the likelihood of developing a neurodegenerative disease. Now, with direct-to-consumer services like 23andMe, the public has unrestricted and unregulated access to the relative probability of developing certain neurodegenerative diseases for the first time.

So why is finding out that you may develop a neurodegenerative disease in later years different from learning you're at high risk for breast cancer? Neurodegenerative diseases are unique in that they essentially alter one's concept of "self." Being told you may succumb to cancer at some point in your life is a very different scenario from being told that your memories will slowly deteriorate, or that the way you relate to your loved ones, or even the very things you enjoy, may change. For the first time in history, knowledge of these drastic potential changes to your "future self" is available at the click of a button.