Tuesday, March 3, 2015

Diversity in Neuroethics: it’s more important than you might think

By Nicholas Fitz and Roland Nadler**

Nick is a Graduate Research Assistant at the National Core for Neuroethics at the University of British Columbia.

Roland is a third-year J.D. student at Stanford Law School and previously worked as a Graduate Research Assistant at the National Core for Neuroethics at the University of British Columbia.

**equal contribution

The second decade of neuroethics is now well underway. Much like the human brain itself, some of its developmental “critical periods” have run out, but many others remain open. How will we use these remaining opportunities to shape the field?

Junior participants in these spaces should take the initiative to engage with unresolved questions about the nature and structure of neuroethics as a discipline. After all, those of us at the beginning of our careers have a particularly significant stake in the answers to those questions, with most of our academic and professional lives still ahead of us. As we work to integrate society’s growing technological power with best ethical practices and societal values, we must ask: whose practices, whose values?

Last year, in a bid to foster this discussion, we offered three visions for diversity in neuroethics. In that article, we devoted much attention to diversity along intellectual, disciplinary, and political lines.

Today, we offer a few more thoughts on the importance of diversity in neuroethics in the more familiar sense of having a wide array of identities and backgrounds represented in the field.

Chiefly, we hope to convince you that robust identity diversity is beneficial — indeed, crucial — to neuroethics. The field simply could not provide the kinds of insights that it promises if its practitioners were a homogenous group of people speaking comfortably from positions of social power and privilege.

Tuesday, February 24, 2015

Neuroimaging in the Courtroom

If just any picture is worth a thousand words, then how much weight should we ascribe to a picture of our own brain? Neuroimaging can be quite compelling, especially when presented in the media as evidence for neuroscientific findings. Many researchers have pointed out, though, that the general public may be too entranced by fMRI images highlighting which parts of the brain are activated in response to certain stimuli, such as your iPhone, high-fat foods, or even Twitter. Neuro-realism is the idea that attaching a brain scan to a scientific finding suddenly makes the conclusion more credible, and examples of this have populated the media and the scientific literature1. But where does this theory of “neuro-seduction” really stem from, and is there even ample evidence to support it? For the first journal club of the new semester, Emory undergraduate student and AJOB Neuroscience Editorial Intern Julia Marshall, along with Emory professor Scott Lilienfeld, discussed the role that neuroimaging plays in the courtroom, and whether brain scans have the potential to help or hurt those convicted of crimes in light of neuro-realism, neuro-seduction, and neuroredundancy.

from Scientific American blog

Recently, an article by Martha Farah and Cayce Hook2 took a critical look at the two studies most frequently cited as evidence for neuro-realism and discussed why this theory has persisted despite its lack of evidence. The first study, by McCabe and Castel3, analyzed whether people consider scientific findings more believable when accompanied by functional brain images; the collected data suggested that scientific reasoning in research descriptions made more sense to participants when a brain image was provided as evidence. However, Farah and Hook point out that these brain images genuinely are more informative than a bar graph or topographic map, so participants arguably should find them more compelling. The second paper often cited in relation to neuro-realism is a study by Weisberg et al.4, which asked participants to judge whether an explanation for a psychological phenomenon, which did or did not include irrelevant neuroscientific rationale, was good or bad. Participants who were not neuroscience experts were more likely to rate a bad explanation favorably when it was accompanied by neuroscience data. This study, however, did not include images, and even the authors of the paper concede that people may respond in a similar fashion to information from specialties outside of neuroscience and psychology; there could be a general fascination with science that makes poor explanations appear reasonable. Farah and Hook also highlight a number of experiments5–7 that have been unable to replicate the findings from these two studies, further casting doubt on neuro-realism.

Tuesday, February 17, 2015

Exchanging 'Reasons' for 'Values'

Julia Haas is a McDonnell Postdoctoral Fellow in the Philosophy-Neuroscience-Psychology program at Washington University in St. Louis. Her research focuses on decision-making.

Over the past two decades, computational and neurobiological research has had a big impact on the field of economics, bringing into existence a new and prominent interdisciplinary field of inquiry, ‘neuroeconomics.’ The guiding tenet of neuroeconomics has been that by combining both theoretical and empirical tools from neuroscience, psychology and economics, the resulting synthesis could provide valuable insights into all three of its parent disciplines (Glimcher 2009). And although some economists have resisted the influence of neuroscience research (Gul and Pesendorfer 2008), neuroeconomics has by all measures thrived as a theoretical endeavor, and proven itself as a discipline capable of marshaling substantial institutional and financial resources.

For example, theories from economics and psychology have already begun to restructure our neurobiological understanding of decision-making, and a number of recent neurobiological findings are beginning to suggest constraints on theoretical models of choice developed in both economic and psychological domains. Similarly, a study by the Eigenfactor project at the University of Washington showed that while there were no citations from either of these disciplines to the other in 1997, by 2010, there were 195 citations from economics journals to neuroscience journals, and 74 citations from neuroscience journals to economics journals.

Disciplinary cross-pollination 
This interdisciplinary partnership has caught the attention of the National Institutes of Health, which finances 21 current research projects with "neuroeconomics" in their descriptions, to the tune of $7.6 million. The agency gives out many more millions for other neurobiology work related to decision-making: Caltech got $9 million this month to establish a center in this field. The National Science Foundation has backed eight neuroeconomics projects with $3.5 million in research money.

Neuroeconomics: A Role Model for the Neuroscience of Ethics 

Neuroeconomics has thus been one of the most significant and astute beneficiaries of computational and neuroscientific research on decision-making. By contrast, the discipline of philosophy has fallen behind. Although many insights from computational and decision neuroscience are directly relevant to philosophical discussions about deliberation and choice, the vast majority of them have fallen by the philosophical wayside. This is not to say that philosophy has ignored neuroscience: that would not be true at all. Beginning with the publication of Patricia Churchland’s Neurophilosophy in 1986, both neurophilosophy and the philosophy of neuroscience have become active research areas across philosophy departments. But many of these neuroscientific contributions have focused on issues pertaining to traditional metaphysics (such as consciousness and free will) and epistemology (such as perception and representation). The implications of computational and decision neuroscience for philosophical theories of decision-making and practical reasoning, by contrast, have yet to be realized.

Tuesday, February 10, 2015

Obama’s BRAIN and Free Will

By Eddy Nahmias, PhD

Eddy Nahmias is professor in the Philosophy Department and the Neuroscience Institute at Georgia State University. He is also a member of the AJOB Neuroscience editorial board.

On April 2, 2013, President Barack Obama announced the BRAIN Initiative, a 10-year, $3 billion research effort to map all of the neurons and connections in the human brain. The BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative is modeled on the Human Genome Project, which successfully sequenced the entire DNA code of the human genome in 2003. Our brains, with 100 trillion neuronal connections, are immensely more complicated than our DNA, so the BRAIN Initiative has a much higher mountain to climb.

But let’s suppose that, finally, during the next Clinton presidency, the BRAIN Initiative is completed… that is, the presidency of Charlotte Clinton, Bill and Hillary’s grandchild. In fact, suppose that eventually neuroimaging technology advances to the point that people’s brains can be mapped fully enough to allow real-time computations of all of their occurrent brain activity. Neuroscientists can then use this information to predict with 100% accuracy every single decision a person will make, even before the person is consciously aware of their decision. Suppose that a woman named Jill agrees to wear the lightweight BrainCap™ for a month. The neuroscientists are able to detect the activity that causes her thoughts and decisions and use it to predict all of Jill’s thoughts and decisions, even before she is aware of them. They predict, for instance, how she will vote in an election. They even predict her attempts to trick them by changing her mind at the last second.


Question: Do you think it is possible for such technology to exist in the future (the “near” future of Charlotte Clinton’s presidency or perhaps a more distant future)? And if such technology did exist, what would it tell us about whether we have free will?

Tuesday, February 3, 2015

When the Hype Doesn’t Pan Out: On Sharing the Highs-and-Lows of Research with the Public

By Jared Cooney Horvath

Jared Cooney Horvath is a PhD student at the University of Melbourne in Australia studying Cognitive Psychology / Neuroscience.

Fifteen years ago, a group of German researchers decided to revive the ancient practice of using electricity to effect physiologic change in the human body. Using modern equipment and safety measures, this group reported that they were able to alternately up- and down-regulate neuronal firing patterns in the brain simply by sending a weak electric current between two electrodes placed on the scalp1.
tDCS electrode placement

Today, this technique is called Transcranial Direct Current Stimulation (tDCS), and over 1,400 scientific articles (calculated by combining de-duplicated articles from a joint PubMed, ISI Web of Science, and Google Scholar search using the keywords “Transcranial Direct Current Stimulation”: October 15, 2014) have been published suggesting that passing an arguably innocuous amount of electricity through the brain of a healthy individual can improve his/her memory, learning, attention, inhibitory control, linguistic function, etc. In parallel with these findings (and often fueled by the researchers themselves), the public hype surrounding tDCS has grown to impressive proportions: in the last year alone, stories about this device and its ability to improve cognition and behavior have appeared in popular news outlets ranging from the BBC2 to Wired3 to The Wall Street Journal4.

Tuesday, January 27, 2015

Neuroscience in the Courtroom: An Attempt for Clarity

*Editor’s note: You can catch a lengthier discussion of this topic at our Jan 29th session of Neuroscience and Neuroethics in the News.

When people think about functional magnetic resonance imaging (fMRI) and the courtroom, many often think of mind reading or colorful images of psychopathic brains. Portable fMRI machines capable of reading our personal thoughts pop into our heads and arouse a fear that one day a neuroscientist could reasonably discern our deepest secrets through a brain scan. Despite recent scholarship that suggests a world filled with covert fMRI lie detection devices is far away (if ever attainable), I think further attention should be paid to how people think about neuroscience and interpret scientific information that draws on brain-laden language, particularly in the courtroom (Farah, Hutchinson, Phelps, & Wagner, 2014). This topic is of special interest to me as it is the focus of my undergraduate research thesis. I also think it should be relevant to neuroscientists, ethicists, and journalists as well because the way in which people interpret and understand aspects of the brain and human behavior is perhaps a consequence of how such information is portrayed to the public.

Photo from Ali, Lifshitz, & Raz, 2014

Tuesday, January 20, 2015

“Believe the children”? Childhood memory, amnesia, and its implications for law

How reliable are childhood memories? Are small children capable of serving as reliable witnesses in the courtroom? Are memories that adults recall from preschool years accurate? These questions are not only important to basic brain science and to understanding our own autobiographies, but also have important implications for the legal system. At the final Neuroscience, Ethics and the News journal club of the 2014 Fall semester, Emory psychologist Robyn Fivush led a discussion on memory development, childhood amnesia, and the implications of neuroscience and psychology research for how children form and recall memories.

This journal club discussion was inspired by a recent NPR story that explored the phenomenon of childhood amnesia. Why is it that most of us cannot form long-term memories as infants, at least in the same way that we can as adults? This fundamental question has fascinated many researchers, and psychologists and neuroscientists today are tackling it in innovative ways. Even adult memory of the recent past is not nearly as reliable as most people (and jurors) believe1, and while 2-year-old children can report long-term memories from several months prior2, adults typically cannot recall memories from before age 3.5. The emergence of autobiographical memory may arise from the realization of the self (~2 years) and the acquisition of language skills, but it seems to happen gradually. Childhood amnesia may actually be the result of a slow conversion to recalling self-experienced episodes rather than just the events themselves.3

Via medimoon.com

However, the general public has been shown to have a rather poor understanding of memory,1 perhaps due to “common sense” beliefs and cultural traditions. These common-sense and cultural notions are deep-seated and may have more influence in our society than the latest research, especially if those findings are not effectively communicated to the public. In fact, there is significant disagreement between memory experts and judges, jurors, and law enforcement on the reliability of childhood memories recalled by adults.4 For example, nearly 70% of experts surveyed agreed that “Memories people recover from their own childhood are often false or distorted in some way”, but only about 30% of jurors thought that statement was true.3

Tuesday, January 13, 2015

Ethical Issues in Neurosurgery: A Special Issue of Virtual Mentor

This month the American Medical Association's journal Virtual Mentor published a series of articles about the ethical issues pertaining to neurosurgery. The articles include discussions of deep brain stimulation in early-stage Parkinson's disease, simulation and neurosurgical teaching tools, and integrating ethics into science education. The special issue also features two members of the American Journal of Bioethics Neuroscience editorial team: editor-in-chief Dr. Paul Root Wolpe and editor Dr. John Banja. The issue was guest edited by Jordan Amadio, a neurosurgical resident at Emory University. Click here to view the special issue.

Tuesday, January 6, 2015

Neuroscience and Human Rights

Last month, I had the privilege of attending the International Neuroethics Society Meeting in Washington, DC, made possible by a travel award from the Emory Neuroethics Program. This year's meeting featured panelists from diverse backgrounds: government, neuroscience, ethics, law, engineering, public health, and others. Each participant and attendee offered a unique perspective on topical issues in neuroethics.

As I listened to many thought-provoking presentations and discussions, a question kept arising in my mind: to what extent should scientists engage with issues of social justice when their research findings support changes in public policy? As a "war on science" continues to be waged by members of the U.S. Congress (see Senator Coburn's 2014 "Wastebook," and the recent NPR Science Friday response by targeted scientists) and the American public lags in scientific literacy (an NSF report this year found that 1 in 4 Americans thinks the sun orbits the Earth), this question carries a particular sense of urgency. Isn't science supposed to support human flourishing and maximize our well-being, as the American Association for the Advancement of Science puts it, "for the benefit of all people"? How accountable should scientists be for ensuring that this actually happens, beyond the scope of their laboratories?

My reflections on these questions were ignited by a fascinating example of how neuroscience can inform policy, provided by Katy de Kogel of the Dutch Ministry of Justice. Dr. de Kogel spoke of recent shifts in Dutch criminal law that reflect neuroscientific consensus: the neural substrates that support decision-making are not fully "online" in the developing, adolescent brain. In contrast to the United States legal code, which specifies that individuals above the age of 18 be prosecuted as adults, thus barring them from legal protections offered to minors, Dutch courts have incorporated scientific understanding of neurodevelopment into their criminal code by raising the age at which individuals may be tried as minors from 18 to 22 years of age. Criminological research findings support this change, as minors housed in adult detention centers tend to have higher rates of recidivism than those detained in juvenile centers. In my view, this is a refreshing and somewhat unexpected example of how society can benefit from advancements in neuroscience. We often think of science producing technological or medical innovations that improve our lives, rather than ancillary benefits like this that are impossible to foresee at the outset of a project.

Katy de Kogel of the Dutch Ministry of Justice (Courtesy of Dr. Gillian Hue)

Tuesday, December 23, 2014

The 2014 International Neuroethics Society Annual Meeting

By Mallory Bowers

On November 14, the International Neuroethics Society convened for its annual meeting at the AAAS building in Washington, D.C. I had the pleasure of attending and presenting at INS through the generous support of the Emory Neuroethics Program. The society is an interdisciplinary group of scholars - including lawyers, clinicians, researchers, and policy makers - and the 2014 agenda reflected this diversity in expertise.

The conference opened with a short talk by Chaka Fattah, the U.S. representative for Pennsylvania’s 2nd congressional district. As a Philadelphia native, I was excited to learn that Congressman Fattah was an architect of the Fattah Neuroscience Initiative, which was an impetus for developing the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.

Courtesy of Gillian Hue

Discussion of the BRAIN Initiative continued through the following panels, “The BRAIN Initiative & the Human Brain Project: an Ethical Focus” and “The Future of Neuroscience Research & Ethical Implications”. Panelist Stephen Hauser spoke about the Presidential Commission for the Study of Bioethical Issues, while Henry Markram discussed the Human Brain Project, the Europe-based research collaboration to establish innovative neurotechnologies and develop a more thorough understanding of the human brain. Representatives of several scientific funding institutions (Dr. Tom Insel, Director of the National Institute of Mental Health; Dr. George Koob, Director of the National Institute on Alcohol Abuse and Alcoholism; and Dr. Geoff Ling of the Defense Advanced Research Projects Agency) discussed the progress of neuroscience research while emphasizing the need for continued advancement. Although the morning panels were interesting (as a behavioral neuroscientist, seeing Dr. Tom Insel was quite thrilling), I was left with the impression that the scientific “establishment” was only beginning to scratch the surface of the neuroethical implications of the research being conducted by scientists like myself. I wondered whether any of the morning panelists attended the later sessions, which tackled more hard-hitting neuroethical issues, such as “Neuroscience in the Courts” and “Neuroscience and Human Rights”.