Tuesday, December 22, 2015

The freedom to become an addict: The ethical implications of addiction vaccines

by Tabitha Moses 

Tabitha Moses, M.S., is Administrative and Research Coordinator at Lehman College, CUNY, as well as a Research Affiliate at the National Core for Neuroethics at the University of British Columbia. Tabitha earned her BA in Cognitive Science and Philosophy and MS in Biotechnology from The Johns Hopkins University. She has conducted research in the areas of addiction, mental illness, and emerging neurotechnologies. She hopes to continue her education through a joint MD/PhD in Neuroscience while maintaining a focus on neuroethics.

The introduction of “addiction vaccines” has brought with it a belief that we have the potential to prevent addiction before a person has ever tried a drug. Proponents of addiction vaccines hold that they will:
  1. prevent children from becoming addicted to drugs in the future, 
  2. allow addicts to easily and safely stop using drugs, and 
  3. potentially lower the social and economic costs of addiction for society at large.
However, it is critical to be aware of the limitations and risks - both ethical and physical - of introducing these vaccines into mainstream medical care.

Tuesday, December 15, 2015

Combating neurohype

by Mo Costandi

Mo Costandi trained as a developmental neurobiologist and now works as a freelance writer based in London. His work has appeared in Nature, Science, and Scientific American, among other publications. He writes the Neurophilosophy blog, hosted by The Guardian, and is the author of 50 Human Brain Ideas You Really Need To Know, published by Quercus in 2013, and Neuroplasticity, forthcoming from MIT Press. Costandi also sits on the Board of Directors of the International Neuroethics Society.

In 2010, Judy Illes, president-elect of the International Neuroethics Society, argued that neuroscientists need to communicate their research to the general public more effectively. Five years on, that message is still pertinent - perhaps even more so.

Tuesday, December 8, 2015

Getting aHead: ethical issues facing human head transplants

By Ryan Purcell

Gummy bear head transplant, courtesy of flickr user Ella Phillips
In a widely circulated Boston Globe editorial this summer, Steven Pinker told bioethicists to “get out of the way” of scientific progress. There is abundant human suffering in the world today, he said, and the last thing we need is a bunch of hand-wringing to slow down efforts to attenuate or even eliminate it. The prospect of head transplantation, however, has the potential to make us all a bit more appreciative of our local bioethicists. Even if there were no technical issues (of which there are, of course, plenty), coming to terms with the muddier personal and societal issues inherent in a procedure such as this could take quite a while. Nevertheless, Dr. Sergio Canavero is not planning to wait around and wants to perform a human head transplant by the end of 2017. Are we ready?

Tuesday, December 1, 2015

Don’t miss our Special Issue of AJOB Neuroscience: The Social Brain

By Katie Strong, PhD

If you haven’t already, be sure to read the 6.3 Issue of AJOB Neuroscience, our special issue on The Social Brain guest edited by Dr. Jean Decety. The issue centers on the biological, neuroscientific, and clinical evidence for human social cognition, along with the philosophical and ethical arguments for modifying morality and social emotions and behaviors, such as empathy, trust, and cooperativity.

The first target article, “Empathy, Justice, and Moral Behavior” by Jean Decety and Jason M. Cowell, argues that despite the importance of empathy for driving our social lives, forging necessary social bonds, and making complex decisions, empathy alone is not enough for moral resolutions and judgments. While empathy underpins cooperativity and the formation of social bonds, it also evolved to promote bias and in-group social preferences. The target article provides evidence that empathy does not always lead to moral decisions and often favors in-group members over out-group members. Because decision making can be biased to favor relatives or a single individual over many people, reasoning must accompany empathy. “Empathy alone is powerless in the face of rationalization and denial. But reasoning and empathy can achieve great things,” the authors state at the conclusion of the paper.

Tuesday, November 24, 2015

Widening the use of deep brain stimulation: Ethical considerations in research on DBS to treat Anorexia Nervosa

by Carolyn Plunkett


Carolyn Plunkett is a Ph.D. Candidate in the Philosophy Department at The Graduate Center of City University of New York. She is also an Ethics Fellow in The Bioethics Program at the Icahn School of Medicine at Mount Sinai, and a Research Associate in the Division of Medical Ethics at NYU Langone Medical Center. Carolyn will defend her dissertation in spring 2016, and, beginning July 2016, will be a Rudin Post-Doctoral Fellow in the Divisions of Medical Ethics and Medical Humanities at NYU Langone Medical Center. 

This post is part of a series that recaps and offers perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.

Karen Rommelfanger, founding editor of The Neuroethics Blog, heard a talk I gave on deep brain stimulation (DBS) at Brain Matters! 3 in 2012. Three years later, she heard a brief synopsis of a paper I presented a few weeks ago at the International Neuroethics Society Annual Meeting. Afterward, she came up to me and said, “Wow! Your views have changed!” I had gone from being wary about using DBS in adults, much less minors, to defending its use in teens with anorexia nervosa. She asked me to write about this transition for this blog, and present my recent research.

Tuesday, November 17, 2015

Do you have a mouse brain? The ethical imperative to use non-human primates in neuroscience research

by Carlie Hoffman

Much of today’s neuroscience research investigating human brain diseases and disorders utilizes animal models. Animals ranging from flies to rodents to non-human primates are routinely used to model various disorders, with mice the most commonly utilized. Scientists employ these animal models to approximate human conditions and disorders in an accessible manner, with the ultimate purpose of applying the findings derived in the animal back to the human brain.

Rhesus macaques, a species of NHP often used in research.
The use of animals in research has been the source of much debate, with people either supporting or objecting to their use, and objections arising from animal rights activists, proponents of critical neuroscience such as Nikolas Rose and Joelle Abi-Rached, and others. A main focus of this debate has been the use of non-human primates (NHP) in research. The cognitive functions and behaviors of NHPs are more closely related to those seen in humans than are those of rodents, making primates the closest approximation of human brain functioning in both normal and disease states. Though some say NHP research is essential, others call for scaling it down or even completely eliminating it. Strides have already been made towards the reduction and removal of NHPs from experimental research, as displayed by the substantial justification required to perform experiments utilizing them, the increasing efforts going towards developing alternative non-animal models (including the Human Brain Project’s goal to create a computer model of the human brain), and the recent reduction of the use of chimpanzees in research [2, 6]. A case was even brought to the New York Supreme Court earlier this year to grant personhood status to two research chimpanzees.

Monday, November 9, 2015

Why defining death leaves me cold

by John Banja, PhD

*Editor's note: In case you missed our annual Zombies and Zombethics (TM) Symposium, entitled "Really, Most Sincerely Dead. Zombies, Vampires and Ghosts. Oh my!", you can watch our opening keynote by Dr. Paul Root Wolpe by clicking on the image below. We recommend starting at 9:54 min.

Image: Meinhardt Raabe

Two weeks ago, I attended a panel session on brain death at the annual conference of the American Society for Bioethics and Humanities. Forgive the bad pun, but the experience left me cold and …lifeless(?). The panel consisted of three scholars revisiting the more-than-decade-old conversation on defining death. Despite a standing-room-only crowd, there was utterly nothing new. Rather, we heard a recitation of the very familiar categories that have historically figured in the “What does it mean to be dead?” debate, e.g., the irreversible cessation of cardio-respiratory activity, the Harvard Brain Death criteria, the somatic integration account, the 2008 Presidential Commission’s “loss of the drive to breathe,” and so on. I walked out thinking that we could come back next year, and the year after that, and the year after that, and get no closer to resolving what it means to be dead.

Tuesday, November 3, 2015

Shrewder speculation: the challenge of doing anticipatory ethics well

by Dr. Hannah Maslen 

Hannah Maslen is a Research Fellow in Ethics at the Oxford Martin School and the Oxford Uehiro Centre for Practical Ethics. She currently works on the Oxford Martin Programme on Mind and Machine, where she examines the ethical, legal, and social implications of various brain intervention and interface technologies, from brain stimulation devices to virtual reality. 

This post is part of a series that recaps and offers perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.

In its Gray Matters report, the United States Presidential Commission for the Study of Bioethical Issues underscored the importance of integrating ethics and neuroscience early and throughout the research endeavor. In particular, the Commission declared: 

"As we anticipate personal and societal implications of using such technologies, ethical considerations must be further deliberated.  
Executed well, ethics integration is an iterative and reflective process that enhances both scientific and ethical rigor." 

What is required to execute ethics integration well? How can philosophers make sure that their work has a constructive role to play in shaping research and policy-making?

Tuesday, October 27, 2015

Is football safe for brains?

by Dr. L. Syd M Johnson

Dr. Johnson is Assistant Professor of Philosophy & Bioethics in the Department of Humanities at Michigan Technological University. Her work in neuroethics focuses on disorders of consciousness and sport-related neurotrauma. She has published several articles on concussions in youth football and hockey, as well as on the ethics of return-to-play protocols in youth and professional football.

This post is the first of several that will recap and offer perspectives on the conversations and debates that took place at the recent 2015 International Neuroethics Society meeting.

At the International Neuroethics Society annual meeting in Chicago this month, Nita Farahany and a panel from the Football Players Health Study at Harvard University (FPHS) headlined the public talk “Is professional football safe? Can it be made safer?” The panel declined to provide direct answers to these important questions, but the short answers are “No” and “Not by much,” respectively.

Tuesday, October 20, 2015

Technologies of the extended mind: Implications for privacy of thought

by Peter Reiner, PhD


Dr. Reiner is Professor and co-founder of the National Core for Neuroethics at the University of British Columbia. He began his academic career studying the cellular and molecular physiology of the brain, and in 1998 became President and CEO of Active Pass Pharmaceuticals, a drug discovery company that he founded to tackle the scourge of Alzheimer's disease. Upon returning to academic life in 2004, he refocused his scholarly work in the area of neuroethics. He is also an AJOB Neuroscience board member.

Louis Brandeis in his law office, 1890.
In 1890, Samuel Warren and his law partner Louis Brandeis published what has become one of the most influential essays in the history of US law. Entitled The Right to Privacy [1], the article is notable for outlining the legal principles that protect privacy of thought. But it is not just their suggestions about privacy that are illuminating – it is their insight into the ways that law has changed over historical time scales that makes the paper such a classic. In very early times, they write, “the law gave a remedy only for physical interference with life and property...[and] liberty meant freedom from actual restraint.” Over time, as society began to recognize the value of the inner life of individuals, the right to life came to mean the right to enjoy life; protection of corporeal property expanded to include the products of the mind, such as literature and art, trademarks and copyrights. In a passage that resonates remarkably well with the modern experience, they point out that the time was nigh for the law to respond to changes in technology.

Tuesday, October 13, 2015

The Neuroethics Blog Reader hot off the presses!

It is my pleasure to present you with our first edition of The Neuroethics Blog reader. This reader includes some of the most popular posts on the site and highlights our junior talent.

While the blog showcases cutting-edge debates in neuroethics, it also serves as a mechanism for mentoring junior scholars and students and providing them with exciting opportunities to have their pieces featured alongside established scholars in the field. In addition, the blog allows for community building, inviting scholars from multiple disciplines to participate. Our contributors have included individuals at various levels of education from fields such as law, neuroscience, engineering, psychology, English, medicine, philosophy, women’s studies, and religion, to name a few. Each blog post is a collaborative process, read and edited numerous times by the editorial leadership in partnership with the author.

We aim to continue to mentor and deliver quality posts that serve to cultivate not only our neuroethics academic community, but also members of the public who may be cultivating their own interests in neuroethics. Whether for direct applications in your profession or simply to understand the world in which we live, we hope the blog will help you navigate the implications of new neurotechnologies and explore what is knowable about the human brain.

At this time, I'd like to thank our amazing editorial team including Lindsey Grubbs (Managing Editor), Carlie Hoffman (Editor of this reader), Ryan Purcell, and Katie Strong. I'd also like to highlight our previous Managing Editors Dr. Julia Haas and Julia Marshall who have since graduated and are continuing their scholarship in neuroethics, as well as Jonah Queen who was there from the very beginning. Stay tuned for more great things from this group along with all of our talented contributors.

Thank you for taking the time to embark on this journey with us and please enjoy this reader!

P.S. If you are lucky enough to find yourself at the International Neuroethics Society conference this Oct 15-16, we will have limited printed copies available. Just look for folks wearing the "Ask Me About AJOB Neuroscience" buttons.




Tuesday, October 6, 2015

Your Brain on Movies: Implications for National Security

by Lindsey Grubbs

An intellectually diverse and opinionated crowd gathered for the most recent Neuroethics and Neuroscience in the News journal club at Emory University—“Your brain on movies: Implications for national security.” The discussion was one of the liveliest I've seen in the years I've been attending these events, which is perhaps not surprising: the talk touched on high-profile issues like neuromarketing (which is controversial enough that it has been banned in France since 2011) and military funding for neuroscience.

The seminar was led by Dr. Eric Schumacher, Associate Professor of Psychology at Georgia Tech, director of the Georgia State University/Georgia Tech Center for Advanced Brain Imaging, and principal investigator of CoNTRoL—Cognitive Neuroscience at Tech Research Laboratory. Currently, the lab investigates task-oriented cognition, as well as the relationship between film narratives and “transportation” (colloquially, the sense of “getting lost” in a story), which is a complex cognitive puzzle involving attention, memory, and emotion.

Cary Grant chased by an airplane in North by Northwest,
courtesy of Flickr user Insomnia Cured Here.
Schumacher presented his recent article, “Neural evidence that suspense narrows attentional focus,” published in Neuroscience. Subjects in the study were placed in an MRI scanner and shown clips from suspenseful films including Alien, Blood Simple, License to Kill, and three Hitchcock films: North by Northwest, Marnie, and The Man Who Knew Too Much (I think I enrolled in the wrong studies to pay for college). The scans revealed that as suspense in the films increased, viewers' attentional focus narrowed onto the film.

Tuesday, September 29, 2015

Overexposed: The role of environmental toxicants on your brain

By Carlie Hoffman

It is often said that we are products of our environment: who we are is shaped by the things, people, and situations with which we surround ourselves. However, whatever we may like to think, we are not in control of every facet of our environment. In fact, we are unknowingly and involuntarily exposed each day to dozens of man-made environmental chemicals, called toxicants, that can negatively alter our bodies and even our very brain matter. In essence, we are becoming literal products of our environment.

Synthetic chemicals and toxicants are ubiquitous within our surroundings. While some toxicants come from obvious sources, like cigarette smoke and car exhaust, other sources of exposure are more subtle. For instance, electrical equipment (like computers and cell phones), beauty products (like makeup and shampoo), mattresses, and furniture all contain flame retardants, chemicals used to reduce flammability [3, 13]. Bisphenol A (BPA) and phthalates, chemicals used to harden plastics, can also be found in dental sealants, cigarette filters, soda bottles, and the linings of canned foods [4, 8, 12]. Additionally, dichlorodiphenyltrichloroethane (DDT), a pesticide commonly used in the mid-1900s to combat agricultural pests, malaria-carrying mosquitoes, and lice, was banned in the US in 1972 and yet is still present within both the environment and human tissues [12].

Pesticides not only harm insects, but certain doses can also have harmful effects on the human body.

Tuesday, September 15, 2015

Unintentional discrimination in clinical research: Why the small decisions matter

by Arthur T. Ryan, M.A. and Elaine F. Walker, Ph.D.

Arthur Ryan is a graduate student in clinical psychology at Emory University. His research focuses on understanding the etiology and neuropathology underlying severe mental illness.

Elaine Walker is a Professor of Psychology and Neuroscience in the Department of Psychology at Emory University and is the Director of the Development and Mental Health Research Program, which is supported by the National Institute of Mental Health. Her research is focused on child and adolescent development and the brain changes that are associated with adolescence. She is also a member of the AJOB Neuroscience editorial board.

Arthur Ryan, M.A.
Over the past several decades, there has been a significant effort to minimize bias against individuals based on ethnicity and other demographic factors through the creation of seemingly impartial and objective criteria across a host of domains. For example, when the United States Federal Sentencing Guidelines were created in the 1980s, one of their primary goals was to alleviate “...unwarranted disparity among offenders with similar characteristics convicted of similar criminal conduct” [1]. Unfortunately, even well-intentioned efforts such as this one can still have a disparate negative impact upon historically marginalized groups, as seen in the well-documented disproportionate sentencing of black individuals due to differing rules governing offenses committed with crack vs. powdered cocaine [2]. Concerns about such inadvertent bias are not limited to the legal domain. Agencies that fund clinical investigations are paying greater attention to demographic representativeness and access to participation in health-related research.

Ethics and suicide: Are we paying attention to the important issues?

by Victoria Saigle and Eric Racine, Ph.D.

Eric Racine, Ph.D.

Victoria Saigle is a graduate student at the Institut de recherches cliniques de Montréal's Neuroethics Research Unit. She is completing her MSc in Experimental Medicine at McGill University through the Biomedical Ethics Unit.

Dr. Eric Racine is the director of the Neuroethics Research Unit at the Institut de recherches cliniques de Montréal and holds academic appointments in the Department of Medicine and the Department of Social and Preventive Medicine at Université de Montréal and in the Department of Neurology and Neurosurgery, the Department of Medicine, and the Biomedical Ethics Unit at McGill University. He is also a member of the AJOB Neuroscience Editorial Board.

Discussing suicide can be difficult in clinical, public, and academic settings because many people have strong intuitions about whether, when, and for whom voluntary death is appropriate. However, discussions about suicide are largely absent from bioethics scholarship. Considering that suicide is among the ten most common causes of death worldwide and the second leading cause of death for individuals aged 15-29 (World Health Organization, 2014), it is surprising that more attention is not devoted to this topic.

Victoria Saigle
Ethical dilemmas related to suicide intersect with important questions in research ethics, clinical ethics, and public health ethics. However, we discovered in recent work that the majority of ethics scholarship on voluntary death focuses either entirely on physician-assisted dying (PAD – a term we are using here to describe many different acts in which a physician helps to hasten death at a patient’s request) or consists of philosophical arguments about the acceptability or rationality of suicide. Though interesting, these topics do little to address the challenges and lived experiences of suicidal individuals, their families, suicide researchers, or health professionals. Below, we will delineate aspects of suicide that deserve more attention.

Tuesday, September 8, 2015

Is trauma in our genes? Ethical implications of epigenetic findings

by Neil Levy

Neil Levy is professor of philosophy at Macquarie University, Sydney and deputy director of the Oxford Centre for Neuroethics. He is the author of 7 books, including Neuroethics (2007) and Consciousness and Moral Responsibility (2014), and edits the journal Neuroethics. He is also a member of the AJOB Neuroscience board.

A recent study by Rachel Yehuda et al. in Biological Psychiatry provided further evidence for the transmission of acquired characteristics, by showing that Holocaust survivors passed certain acquired epigenetic markers to their children. The idea that acquired characteristics can be inherited is (roughly) equivalent to the doctrine of Lamarckism, and was long considered a heresy in biology. [Editor's note: see also Ryan Purcell's 2014 post for this blog on the relationship between Lamarckism and epigenetics.] According to the Darwinian orthodoxy, traits change because randomly occurring mutations confer a relative fitness advantage on some organisms, not because organisms change their behaviour and that change then comes to be encoded in the genes. But the orthodoxy has long been shattered. Scientists now recognize that the story is a lot more complex than that.

Tuesday, September 1, 2015

Brain devices: Navigating collaborations between industry, government, and researchers

by Paul J. Ford, PhD

Dr. Ford is Director of the NeuroEthics Program at the Cleveland Clinic. He is an active clinical ethicist, and teaches ethics to medical students, residents, and fellows. His publications have appeared in Science, The Hastings Center Report, Neurology, Neuromodulation, and Journal of Medical Ethics. He is also a board member for AJOB Neuroscience.

This spring (June 3-4, 2015), as part of the BRAIN Initiative, the National Institutes of Health (NIH) convened an eclectic group of individuals in hopes of encouraging more investigator-initiated studies of currently approved neuromodulation and neurorecording devices for new indications (agenda, session videos, and program goals available here). The participants, both on the program and in the audience, included representatives from industry, research groups, universities, and governmental agencies. I was delighted to participate in the workshop and was impressed by the number of sincerely interested parties across the spectrum of roles. Within these conversations it was apparent that the parties shared many values and goals while facing complex challenges in protecting particular interests. The workshop beautifully highlighted the complexities of interactions among varied stakeholders.

Tuesday, August 25, 2015

Self/less and transplanting (ID)entities

by Karen Rommelfanger

I recently sat on a panel discussion for an early screening of the movie Self/less. I'm quoted (mostly correctly) with my name (mostly) spelled correctly here.

In Self/less, an aging business tycoon with a terminal illness (played by Ben Kingsley) pays to "shed" his skin for a new, younger, fitter body (played by Ryan Reynolds). See trailer above.

The film, despite its futuristic premise, revisits the well-worn theme of the Faustian bargain - a deal with the devil - ultimately conveying the message that the costs of trying to cheat death, even for the rich, are too high. The title of the movie implies that, for the greater good, the selfless thing to do is simply to die as nature intended.

While the film would surely be categorized as science fiction, there are entrepreneurs quite dedicated to making such a possibility a reality.

Tuesday, August 18, 2015

Why I teach with an English professor

by Krish Sathian, MD, PhD

Dr. Sathian is Professor of Neurology, Rehabilitation Medicine, and Psychology at Emory University, and directs the Neurorehabilitation Program in the Department of Neurology. The recipient of Emory’s 2001 Albert Levy senior faculty award for excellence in scientific research, he is Executive Director of the Atlanta VAMC Rehabilitation R&D Center for Visual and Neurocognitive Rehabilitation and immediate Past President of the American Society of Neurorehabilitation.

Editor's note: The following post is the second of a pair of essays about interdisciplinary teaching we will feature on the blog. Please see its companion piece from last week, Dr. Laura Otis's "Why I teach with a neurologist." It is often said that academic fields are becoming increasingly siloed as specializations become more and more detailed and jargon-filled with each new peer-reviewed paper. The classes co-taught by Professors Otis and Sathian were unique interdisciplinary spaces where students across traditional disciplinary divides were able to wrestle with topics shared by the humanities and sciences: perception, imagination, and art. Is this kind of interdisciplinary inquiry a necessary counterbalance to the siloing of the disciplines? Or could it even be seen as part of the ethical practice of science? Might having more such classes improve the scientific literacy of those in the humanities, and keep scientists in touch with the depth of expertise that other fields can contribute (as I have argued in an earlier post)? Should we begin to find ways to institutionalize more of this type of work into the higher education system, or provide more movement between the disciplines? Or is interdisciplinarity merely a fad? Readers: what do you think?

I consider myself very fortunate to work both as a clinical neurologist in academia, and as a neuroscientist investigating fundamental questions about the brain that may in time have an impact on how we treat people with neurological disorders. My own research over many years has concentrated on studies of perception, but I recently began to study how the brain handles metaphor.

Tuesday, August 11, 2015

Why I teach with a neurologist

by Laura Otis, PhD

Dr. Otis is a Professor of English at Emory University. Although she ultimately obtained a PhD in Comparative Literature and now teaches English literature, she holds a BS in Molecular Biophysics and Biochemistry and an MA in Neuroscience, and she worked in research labs for years. She was awarded a MacArthur fellowship for creativity in 2000 and is currently working as a visiting scholar at the Max Planck Institute for Human Development in Berlin.

Editor's note: The following post is the first of a pair of short essays about interdisciplinary teaching that will be featured on the blog. Stay tuned next week for Dr. Krish Sathian's "Why I teach with an English professor." It is often said that academic fields are becoming increasingly segregated as specializations develop more jargon and become more detailed with each new peer-reviewed paper. However, the classes co-taught by Professors Otis and Sathian are unique interdisciplinary spaces where students across traditional disciplinary divides are able to wrestle with topics shared by the humanities and sciences: perception, imagination, and art. Is this kind of interdisciplinary inquiry a necessary counterbalance to the segregation of the disciplines? Or even part of the ethical practice of science? Might having more classes like this improve the scientific literacy of those in the humanities, and keep scientists in touch with the depth of expertise that other fields can contribute (as I have argued in an earlier post)? Should we begin to find ways to institutionalize more of this type of work into the higher education system, or provide more movement between the disciplines? Or is interdisciplinarity merely a fad and a buzzword? Readers: what do you think? 

In teaching, there are few things worse than realizing you’ve told your students something wrong. The jolt may come a year, five years down the line, but you can’t issue a retraction. They’ve dispersed to medical schools, where they’re now propagating your error. It’s been thirty years since I studied Neuroscience at UCSF, and a few things have changed since then. The human genome has been sequenced. Scientists analyze data on computers. I try to keep abreast of what’s happening, but this is hard while teaching Victorian literature. In this climate of near-worship for Neuroscience, I worry that I could say anything about the brain, and people would believe me. With a neurologist in the room, this can’t happen.

Tuesday, August 4, 2015

Meeting ethological needs: Conflicting data on orca longevity in captivity

by Frans de Waal

Editor's note: Frans de Waal, PhD, is the Charles Howard Candler Professor of Primate Behavior at Emory University and the Director of the Living Links Center at the Yerkes National Primate Research Center. He is also a member of the United States National Academy of Sciences and the Royal Netherlands Academy of Sciences and a member of the AJOB Neuroscience editorial board. His research focuses on primate social behavior, including conflict resolution, cooperation, inequality aversion, and food-sharing.

de Waal, a leading primatologist, makes an argument here for thinking seriously about the captivity of certain animals such as orcas. Of course, the orca also has a sophisticated mammalian brain. Is the defining criterion of our responsibility to other animals their ethological needs, as de Waal suggests, or is it their cognitive function? What do you think?

There is so much to-do about orcas (killer whales) in captivity, with a drumbeat of voices against humans keeping this species, that it was about time we got some data on longevity. Longevity is not the only measure to consider with regard to the ethics of keeping these fascinating animals, but since it is claimed that orcas in human care live short, stressful lives, there is a need to know the truth.

Source: flickr.com

Tuesday, July 28, 2015

Liberating brains from bodies by capturing them with brainets?

by Karen Rommelfanger

Miguel Nicolelis is dedicated to liberating the human brain from the physical constraints of a body.

Recently, brain-machine interface engineer extraordinaire Miguel Nicolelis connected nonhuman animal brains in a modern-day mind meld called the brainet. For those who don't already know him, Nicolelis is an innovator, dedicated to pushing the limits of what is possible with neurotechnology, and a media darling to boot.

One focus of Nicolelis' work has been developing neural prostheses whose function is mediated through wired or wirelessly transmitted electrical activity from arrays of electrodes implanted on the surfaces of nonhuman animal brains. One well-known experiment from the Nicolelis lab involved monkeys that learned to feed themselves a marshmallow, or even operate a robot on a treadmill, via a direct connection between electrodes implanted in their brains and a prosthetic arm. For extra flash, Nicolelis had a 12-lb monkey (based out of a Duke laboratory) operate a 200-lb robot on a treadmill in Tokyo by transmitting its brain activity through an Internet connection. In this same 2013 interview he waxes philosophical, “Our sense of self does not end at the end of the cells of our bodies, but it ends at the last layer of the electrons of the tool that we’re commanding with our brains.”


His work has intended applications for humans. One recent media stunt involved a "mind-controlled robotic exoskeleton" donned by an individual who was paralyzed from the trunk down. Twenty-nine-year-old Juliano Pinto kicked off the first ball at the 2014 World Cup through an electrode-studded cap on his head that transmitted recorded electrical activity from his brain to a robotic suit.

Tuesday, July 21, 2015

Bring back the asylum: A critical analysis of the call for a "return to 'modern' institutionalization methods"

By Cassandra Evans

Cassandra Evans is a Ph.D. student in Disability Studies at Stony Brook University. She studies mental disabilities and ethics surrounding treatment, services, and access for individuals with mental disabilities. She is currently examining the history of institutions in Suffolk County, Long Island (New York) and what shape the “way forward” from institutionalization will take in the new millennium.

This post is a shorter version of a talk Cassandra gave at the Society for Disability Studies’ national conference in Atlanta, Georgia, June 11, 2015.

In early June 2015, I visited Pilgrim Psychiatric Center in Brentwood, New York (Suffolk County, Long Island). As I drove onto the Pilgrim campus, I felt as if I could be entering any of the scores of other institutions around the country—the pictures I’ve seen all look so similar and convey the same eeriness: high-rise brick buildings with plain numbers on them, grass growing up all around, broken and barred windows, some areas with trash heaps on the grounds and graffiti on the walls. The names were different, but during their official operations, the treatments and results were similar—many individuals stayed longer than they ever wanted, many died, and few were “cured.”

This photo shows a brick high-rise institutional building with a gravel road leading away from its parking lot, green grass and fresh tire tracks nearby. Toward the front of the building, several cars are parked outside the bottom floor of this 10- or 12-story, double-winged ward. “Building 82” at Pilgrim Psychiatric Center in Brentwood, New York, is still home to many individuals with psychiatric disabilities. Though three out of four institutions in Suffolk County, Long Island were closed and their residents deinstitutionalized, others with more severe disabilities or who were more geriatric ended up here.

Photo by Cassandra Evans

Tuesday, July 14, 2015

The power of a name: Controversies and changes in defining mental illness

by Carlie Hoffman

The purposes of naming are to help categorize the world in which we live and to aid in grouping similar things together. However, who decides which name is the correct one? Is a child who often cannot pay attention to his classwork “absent-minded,” or experiencing attention deficit hyperactivity disorder? Is a person whose moods often swing from one extreme to the other simply “moody,” or living with bipolar disorder? Naming a lived experience a “mental illness” can change the social realities of those who receive the diagnosis, not only altering self-perception but also influencing the perceptions and triggering the biases of others—often in a detrimental manner. So, who has the power to determine how such a label is assigned, and what happens if someone is given the wrong one?

The power affiliated with naming has caused the diagnosis of mental disorders to be fraught with controversy. Mental illnesses are defined by the Diagnostic and Statistical Manual of Mental Disorders (DSM), which has been deemed the “bible” of mental health. According to Dr. Thomas Insel, the director of the National Institute of Mental Health (NIMH), the goals of the DSM are to create a common language for describing mental illness and to ensure that mental health care providers use the same terms in the same ways. Thus, when patients visit a psychiatrist in search of a name that will define the symptoms they are experiencing, this name is assigned with the aid of the DSM.

One controversy affecting the diagnosis of mental disorders is the growing concern with medicalization of the “normal” human experience. Medicalization is the process of defining select human experiences or conditions, typically ones that were once considered normal, as medical conditions that warrant professional medical attention. Some level critiques against medicalization, particularly the medicalization of experiences associated with cognitive and emotional function, suggesting that it can lead to over-diagnosis of mental disorders in individuals who are coping with stressors in a typical fashion [5, 11, 13]. A series of controversial changes made to the newest edition of the DSM, the DSM-5, has provided a foothold for those concerned with medicalization. The addition of premenstrual dysphoric disorder and the elimination of the bereavement exclusion from the criteria for major depressive disorder have increased apprehension that typical premenstrual mood and behavioral changes and the normal grieving process could be classified as mental disorders [7, 13, 14].

Tuesday, July 7, 2015

Charles Bonnet syndrome, musical ears, and normal hallucinations

by Jonah Queen

In a previous post on this blog, I wrote about the Mad Pride movement, which advocates for the rights of, and the end of stigma against, those diagnosed with psychiatric disorders. I discussed how the lack of a clear distinction between “normal” and “abnormal” psychology even leads some activists to think of these conditions as extreme emotional or sensory experiences rather than illnesses. Mad Pride advocates see a trend of increasing medicalization within psychiatry, arguing that feelings and behaviors are too readily classified as pathological. But this concern with over-medicalization is not unique to the Mad Pride movement; it is expressed by a wide range of individuals, including those within the mental health establishment. There is one area, however, where the field of mental health seems to be moving in the opposite direction: hallucinations. The DSM-5, which has been criticized for overly broad definitions of psychiatric disorders, restricts the diagnostic criteria for schizophrenia so that hearing voices (with no additional symptoms) is no longer sufficient for a diagnosis.

The cover of the report in which Charles Bonnet first described the condition which would be named after him (from demneuropsy.com.br)

Tuesday, June 30, 2015

New neuro models for the interdisciplinary pursuit of understanding addiction

by Katie Givens Kime

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Katie Givens Kime is a doctoral student in Religion, with foci in practical theology, psychoanalysis, and neuroethics, and her research investigates the religious and spiritual aspects of addiction recovery methods.  

A few years ago, a highly respected and accomplished philosopher at Duke University, Owen Flanagan, surprised everyone when he stood up to speak at a meeting of the Society for Philosophy and Psychology. A garden-variety academic presentation it was not. In “What Is It Like to Be An Addict?” Flanagan revealed to 150 of his esteemed colleagues that he had been addicted to various narcotics and to alcohol for many, many years. Not so long ago, every gruesome morning looked like this:

I would come to around 6:15 a.m., swearing that yesterday was the very last time...I’d pace, drink a cup of coffee, and try to hold to my terrified resolve.  But by 6:56—every time, failsafe, I’d be in my car, arriving at the BP station...at 7 a.m. sharp I’d gather my four or five 16-ounce bottles of Heineken, hold their cold wet balm to my breast, put them down on the counter only long enough to be scanned....I guzzled one beer in the car.  Car cranking, BP, a beer can’s gaseous earnestness—like Pavlov’s dogs, when these co-occur, Owen is off, juiced...the second beer was usually finished by the time I pulled back up to the house, the house on whose concrete porch I now spent most conscious, awake, time drinking, wanting to die.  But afraid to die.  When you’re dead you can’t use.  The desire to live was not winning the battle over death.  The overwhelming need – the pathological, unstoppable – need to use, was. (Flanagan, 2011, p. 77) 

Research on addiction is no small niche of medical science. It’s an enormous enterprise. This seems appropriate, since addiction (including all types of substance abuse) is among the top public health crises in the industrialized West. The human suffering and the public (and private) expense wrought by addiction are immense. (See data here, here, and here.)

To that end, two accomplished researchers recently guest lectured here in Atlanta, representing a few dynamic edges of such research.  Dr. Mark Gold lectured for Emory University’s Psychiatry Grand Rounds on "Evolution of Addiction Neurobiology and Treatment Over the Past 40 Years,” and Dr. Chandra Sripada lectured for the Neurophilosophy Forum at Georgia State University on "Addiction, Fallibility, and Responsibility.”

Tuesday, June 23, 2015

Selfhood and ethics: Who am I and why does it matter?

by Keenan Davis

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Keenan is a graduate student in Bioethics, whose work focuses on the use of virtue ethics and natural law to evaluate novel biotechnologies. He will be pursuing a PhD in the Graduate Division of Religion in the fall.

What should I be doing with my life? Many approach this timeless question by first considering another: Who am I? For a wide range of thinkers from Plato to Dr. Phil, we can only know what to do with ourselves when we truly know ourselves. Who we are determines and constrains how we ought to behave. For example, because my parents caused me to exist, I should behave towards them with a level of gratitude and love. Perhaps through a cause-and-effect dynamic, as a result of being their son, I should treat them respectfully. We will return to this example at the conclusion of our exploration.

Historically, the question of selfhood was assessed in terms of an afterlife, seeking to resolve what happens to us when we die. If, as Plato claimed, a person is nothing more than his soul, "a thing immortal," then he will survive physical death. Indeed, perhaps one should look forward to the separation of the soul from material constraints. How we ought to behave then is for the sake of existence after and beyond this world, a position shared by many adherents to Abrahamic religion. On the other hand, if we are no more than our bodies, then we do not persist after death and have no reason to orient our behavior toward post-mortem expectations. Such is the position of Lucretius and the Epicureans who conclude that our practical task is instead to flourish within a strictly material context. Our behavior should be for the sake of this world. For both Lucretius and Plato, the metaphysical substance of self is what mattered foremost.

John Locke
As part of the 17th century Enlightenment, John Locke changed the focus from the substance of self and more explicitly addressed the issue of selfhood with an eye to its normative consequences. For instance, he believed the self to be based entirely on memory and consciousness, regardless of the relationship between body and soul. By defining personhood as continuous self-identification through memory, Locke aimed to establish psychological criteria for moral agency and responsibility. Only if one is responsible for particular actions ought he be liable for judgment, reward, or punishment. Despite his emphasis on the psychological, as opposed to the biological or spiritual, Locke's definition of self still follows the cause-and-effect pattern of is then ought: who I am determines how I should behave.



Using thought experiments like the famous Ship of Theseus conundrum, philosopher Trenton Merricks of the University of Virginia undermines this line of thought by suggesting that there is no metaphysical answer to the question of who we are. There simply are no necessary and sufficient criteria—psychological, bodily, or otherwise—of identity over time for any object. Lest we take this conclusion too far, Merricks explains that it does not mean that persons and objects lack essential properties or evade description: "Among my essential properties are, I think, being a person and failing to be a cat or hatbox." His assessment just means that not all explanations or identifications involving characteristics need to be stated in terms of absolute proof. Allowing a modest concession to unavoidable skepticism, we need not (nor do we ever) demonstrate infallibly that "the tree in my yard today is the same tree that was in my yard yesterday" to warrant that belief. We can still be warranted in our beliefs regarding who we are without proving them absolutely certain.

Tuesday, June 16, 2015

Changing the Way We Think

by David Michaels

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. David is a student at Emory University working on his Master's degree in Bioethics. After completing his graduate studies he will be attending medical school in Texas.  

Have you ever wondered what it would be like to have the ability to read minds? If you're like me, you've daydreamed about possessing this superpower. It's easy to imagine all of the fascinating ways you could exploit this gift to your liking. But after a while this myopic perspective is turned on its head when we imagine our own thoughts being read. Quickly, almost instantaneously, we conclude with absolute certainty, "Nope, absolutely not - the power to read minds is a bad idea..." Some thoughts are probably best left alone in the mysterious, impenetrable fortress of privacy - our mind.

However, recent breakthroughs in neuroscience may challenge the notion that our mind is impervious to infiltration. Did you know that we may have the ability in the near future to record our dreams so that we can watch them later? Scientists have been working on developing technology that translates brain activity (measured in an fMRI machine) to visible images, allowing us to "see" our thoughts. Although this technology currently only utilizes real-time brain activity and cannot produce images from stored thoughts (i.e. memories), it nevertheless introduces the possibility that people will be able to "see" our thoughts - and maybe "read" them too - in the future.

This is just one of many controversies over emerging neurotechnological lie detection that Sarah Stoller and Dr. Paul Root Wolpe discuss in a 2007 paper. They explore the question of whether or not the government has the right to invade our minds in order to obtain evidence that can be used in a court of law. Neuroscience has, for the first time in history, allowed researchers to bypass the peripheral nervous system and gather data directly from the brain (Wolpe et al. 2005). Although Stoller and Wolpe focus on the legality of these technologies and whether or not they violate our Fifth Amendment rights, I want to explore whether adopting technologies that unveil the privacy of the mind will change the way we think and the way that we live.

Tuesday, June 9, 2015

The Ambiguity of "Neurotheology" and its Developing Purpose

by Shaunesse' Jacobs

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Shaunesse' is a dual masters student in Theological Studies and Bioethics at Emory and her research interests lie in end-of-life care and religious practices surrounding death and dying.

Are religion and spirituality authentic belief systems that have thrived for millennia because of their truth? Or are they simply constructs of the brain to help humanity cope with the unknown? With the advancement of science, can religion and science work together to understand humanity? What do religion and science have to say collectively that has not been said individually? These questions continue to be asked with each scientific advancement, and even more so now that neurotheology is beginning to develop as a sub-discipline of neuroscience. Neurotheology is generally classified as a branch of neuroscience seeking to understand how religious experience functions within the brain. The field has recently taken off and continues to grow thanks to the research of Andrew Newberg and Mark Robert Waldman, but its aims were first pursued by James Ashbrook.

For Ashbrook, the goal of neurotheology is to "question and explore theology from a neurological perspective, thus helping us to understand the human urge for religion and religious myths." These definitions seem very similar, but one implies that neurotheology is subordinate to theology while the other presents neurotheology as subordinate to neuroscience. This ambiguity is further muddled by Newberg in his work Principles of Neurotheology, where he supports the notion that competing and open-ended definitions for terms such as “religion,” “theology,” “spirituality,” and “neuroscience” are acceptable. In promoting open-ended definitions, Newberg nevertheless suggested starter definitions as a basis for terms in this emerging field, such as “religion” as a particular system of faith and worship; “theology” as the study of God and God’s relation to the world; “spirituality” as the search for independent or transcendent meaning; and “neuroscience” as the study of how the nervous system develops, its structure, and what it does.

from wbur

Tuesday, June 2, 2015

23andMe: The Ethics of Genetic Testing for Neurodegenerative Diseases

by Liana Meffert

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Liana is a senior at Emory University majoring in Neuroscience and Behavioral Biology and Creative Writing (poetry). She is currently applying to Public Health graduate schools and considering a future in medicine. In her free time she enjoys running, reading, and her research on PTSD at Grady Memorial Hospital.
23andMe logo 

The face of genetic testing and counseling is in the midst of a major overhaul. Historically, a patient had to demonstrate several risk factors, such as familial and medical health history or early symptoms, in order to be tested for the likelihood of developing a neurodegenerative disease. Now, for the first time, the public has unrestricted and unregulated access to the relative probability of developing certain neurodegenerative diseases.

So why is finding out you may develop a neurodegenerative disease in later years different from learning you’re at high risk for breast cancer? Neurodegenerative diseases are unique in that they essentially alter one’s concept of “self.” Being told you may succumb to cancer at some point in your life is a much different scenario than being told your memories will slowly deteriorate, or that the way you relate to your loved ones, or even the very things you enjoy, may change. For the first time in history, the potential for these drastic changes to your “future self” is available at the click of a button.

Tuesday, May 26, 2015

Disease or Diversity: Learning from Autism

by Jillybeth Burgado

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Jillybeth is a senior undergraduate double majoring in neuroscience and behavioral biology and religion. She hopes to pursue a PhD in neuroscience after working as a research assistant after graduation.

Chipmunka Publishing 
The idea that variation in behaviors arises through natural differences in our genome was popularized in the 1990s and termed “neurodiversity.” Led in large part by autism spectrum disorder (autism) activists, this movement challenged the established notions of autism as a disease that needed to be eradicated, championing the acceptance of a wide array of neural differences in the population. Rejecting terms such as “normal,” proponents of neurodiversity questioned common messaging and goals of research organizations (e.g. autism is not something that needs to be eradicated or “cured”). In this post, I briefly summarize the neuroethical concerns of ground-breaking neuroscience research, with particular focus on autism diagnostic research. I will then introduce a less well-known movement, Mad Pride, and discuss how we can apply some of the concepts and lessons from the autism and neurodiversity movements to understand and evaluate the claims of those involved with Mad Pride.

Tuesday, May 19, 2015

Forget the Map; Trust Your Brain: The Role Neuroscience Plays in Free Will

by Fuad Haddad

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics. Fuad is an undergraduate junior at Emory studying neuroscience and behavioral biology and ethics. He currently performs research at Yerkes National Primate Research Center under Dr. Larry Young, studying the relationship between single nucleotide polymorphisms and pair bonding. His other research interest is the relationship between oxytocin and allogrooming as a model of empathy.

Lizzie laughs as we drive down Briarcliff. “What do you mean an adventure?” she chuckles at me. I have a propensity to get lost for fun, an unhealthy and interesting habit. We approach a stop light. “Left, right, straight – pick one!” I say. As we arrive at a consensus, we journey onward until we reach a green highway sign that signals the exit to Athens. Her smile gives her motive away; I think, “Sorry Emory, but I’m going to be a Bull Dog today.” 

Take a moment to fast-forward four months. On a September afternoon, I sit in the same car, with the same girl, leaving from the same place. “Left, right, straight?!” I ask again. Like before, we haphazardly trek through the jungle of northeast Atlanta. In the midst of yet another game of “where can we get lost now?” a peculiar phenomenon occurs. I slam on the brakes and the car comes to a halt. Almost instantaneously we both realize that this seemingly random series of choices has led us back to the same green sign and, even more interestingly, along the same path.
I guess Robert Frost isn’t going to Athens. From southeastroads.com

Tuesday, May 12, 2015

Is Multilingualism a Form of Cognitive Enhancement?

The following post is part of a special series emerging from Contemporary Issues in Neuroethics, a graduate-level course out of Emory University’s Center for Ethics.

People often ask me what language I dream in. I usually tell them that I dream in both languages – Romanian and English – and that it depends on the content of the dream and on the people featured in it. I associate emotional states with my native Romanian, while organized, sequential thinking is easier in English. Most of the time, I am not even aware of the identity of the language I produce and hear in my dreams.

Leaving the mysterious dimension of dreams behind, how does the multilingual brain navigate the world? Faced with an information-dense environment, it is able to switch its language of appraisal as the moment requires. Consider the increasingly large group of bilingual English-speaking Hispanics in the United States. Most of them use English in their academic and work environments, then effortlessly switch to Spanish when talking to family members and other Spanish speakers. They also retrieve autobiographical memories in the original language of encoding without losing any more details than a monolingual individual would. Given a context, multilingual individuals are able to adjust to the linguistic requirements of the situation. The multilingual brain is, therefore, an adaptable brain.

This leads us to the next point of inquiry: how does speaking several languages sculpt the brain? The Brain and Language Laboratory for Neuroimaging, led by Dr. Laura-Ann Petitto, has been investigating the differential activity in monolingual and bilingual brains during comprehension tasks, and has found that bilinguals show increased activation in the left inferior frontal cortex, an area associated with semantic processing and behavior inhibition. Another group, led by Dr. Jubin Abutalebi, has found that the brain of bilinguals recruits more areas when processing language than the brain of monolinguals. Finally, a recent study replicated the finding that learning a second language early in life changes the structure of white matter in the brain. This study is of particular interest because it also suggests that learning a second language later in life, and using it concurrently with the first, has the same effects on the brain.
Image by Harriet Russell. From www.nytimes.com