Tuesday, February 24, 2015

Neuroimaging in the Courtroom

If any picture is worth a thousand words, then how much weight should we ascribe to a picture of our own brain? Neuroimaging can be quite compelling, especially when presented in the media as evidence for neuroscientific findings. Many researchers have pointed out, though, that the general public may be too entranced by fMRI images highlighting which parts of the brain are activated in response to certain stimuli, such as your iPhone, high-fat foods, or even Twitter. Neuro-realism is the idea that attaching a brain scan to a scientific finding suddenly makes the conclusion more credible, and examples of this have populated the media and the scientific literature [1]. But where does this theory of "neuro-seduction" really stem from, and is there even ample evidence to support it? For the first journal club of the new semester, Emory undergraduate student and AJOB Neuroscience Editorial Intern Julia Marshall, along with Emory professor Scott Lilienfeld, discussed the role that neuroimaging plays in the courtroom, and whether brain scans have the potential to help or hurt those convicted of crimes in light of neuro-realism, neuro-seduction, and neuroredundancy.

From the Scientific American blog

Recently, an article by Martha Farah and Cayce Hook [2] took a critical look at the two studies most frequently cited as evidence for neuro-realism and discussed why this theory has continued to persist despite its lack of evidence. The first study, by McCabe and Castel [3], analyzed whether people consider scientific findings more believable when accompanied by functional brain images; the collected data suggested that the scientific reasoning in research descriptions made more sense to participants when a brain image was provided as evidence. However, Farah and Hook point out that these brain images are actually more informative than a bar graph or topographic map, so participants arguably should find them more compelling. The second paper often cited in relation to neuro-realism is a study by Weisberg et al. [4], which asked participants to judge whether an explanation for a psychological phenomenon, which did or did not include irrelevant neuroscientific rationale, was good or bad. Participants who were not neuroscience experts were more likely to rate a bad explanation favorably when it was accompanied by neuroscience data. This study, however, did not include images, and even the authors of the paper admit that people may respond in a similar fashion to information from specialties outside of neuroscience and psychology; there could be a general fascination with science that makes poor explanations appear reasonable. Farah and Hook also highlight a number of experiments [5–7] that have been unable to replicate the findings from these two studies, further casting doubt on neuro-realism.

Tuesday, February 17, 2015

Exchanging 'Reasons' for 'Values'

Julia Haas is a McDonnell Postdoctoral Fellow in the Philosophy-Neuroscience-Psychology program at Washington University in St. Louis. Her research focuses on decision-making.

Over the past two decades, computational and neurobiological research has had a substantial impact on the field of economics, bringing into existence a new and prominent interdisciplinary field of inquiry, 'neuroeconomics.' The guiding tenet of neuroeconomics has been that by combining theoretical and empirical tools from neuroscience, psychology, and economics, the resulting synthesis could provide valuable insights into all three of its parent disciplines (Glimcher 2009). And although some economists have resisted the influence of neuroscience research (Gul and Pesendorfer 2008), neuroeconomics has by all measures thrived as a theoretical endeavor, and proven itself as a discipline capable of marshaling substantial institutional and financial resources.

For example, theories from economics and psychology have already begun to restructure our neurobiological understanding of decision-making, and a number of recent neurobiological findings are beginning to suggest constraints on theoretical models of choice developed in both economic and psychological domains. Similarly, a study by the Eigenfactor project at the University of Washington showed that while there were no citations from either of these disciplines to the other in 1997, by 2010, there were 195 citations from economics journals to neuroscience journals, and 74 citations from neuroscience journals to economics journals.

Disciplinary cross-pollination 
This interdisciplinary partnership has caught the attention of the National Institutes of Health, which finances 21 current research projects with "neuroeconomics" in their descriptions, to the tune of $7.6 million. The agency gives out many more millions for other neurobiology work related to decision-making: Caltech got $9 million this month to establish a center in this field. The National Science Foundation has backed eight neuroeconomics projects with $3.5 million in research money.

Neuroeconomics: A Role Model for the Neuroscience of Ethics 

Neuroeconomics has thus been one of the most significant and astute beneficiaries of computational and neuroscientific research on decision-making. By contrast, the discipline of philosophy has fallen behind. Although many insights from computational and decision neuroscience are directly relevant to philosophical discussions about deliberation and choice, the vast majority of them have fallen by the philosophical wayside. This is not to say that philosophy has ignored neuroscience: that is not at all the case. Beginning with the publication of Patricia Churchland's Neurophilosophy in 1986, both neurophilosophy and the philosophy of neuroscience have become active research areas across philosophy departments. But many of these neuroscientific contributions have focused on issues pertaining to traditional metaphysics (such as consciousness and free will) and epistemology (such as perception and representation). By contrast, the implications of computational and decision neuroscience for philosophical theories of decision-making and practical reasoning have yet to be realized.

Tuesday, February 10, 2015

Obama’s BRAIN and Free Will

By Eddy Nahmias, PhD

Eddy Nahmias is professor in the Philosophy Department and the Neuroscience Institute at Georgia State University. He is also a member of the AJOB Neuroscience editorial board.

On April 2, 2013, President Barack Obama announced the BRAIN Initiative, a 10-year, $3 billion research effort to map all of the neurons and connections in the human brain. The BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative is modeled on the Human Genome Project, which successfully sequenced the entire DNA code of the human genome in 2003. Our brains, with 100 trillion neuronal connections, are immensely more complicated than our DNA, so the BRAIN Initiative has a much higher mountain to climb.

But let's suppose that, finally, during the next Clinton presidency, the BRAIN Initiative is completed… that is, the presidency of Charlotte Clinton, Bill and Hillary's grandchild. In fact, suppose that eventually neuroimaging technology advances to the point that people's brains can be mapped fully enough to allow real-time computations of all of their occurrent brain activity. Neuroscientists can then use this information to predict with 100% accuracy every single decision a person will make, even before the person is consciously aware of their decision. Suppose that a woman named Jill agrees to wear the lightweight BrainCap™ for a month. The neuroscientists are able to detect the activity that causes her thoughts and decisions and use it to predict all of Jill's thoughts and decisions, even before she is aware of them. They predict, for instance, how she will vote in an election. They even predict her attempts to trick them by changing her mind at the last second.

From interbilgisayar.com

Question: Do you think it is possible for such technology to exist in the future (the “near” future of Charlotte Clinton’s presidency or perhaps a more distant future)? And if such technology did exist, what would it tell us about whether we have free will?

Tuesday, February 3, 2015

When the Hype Doesn't Pan Out: On Sharing the Highs and Lows of Research with the Public

By Jared Cooney Horvath

Jared Cooney Horvath is a PhD student at the University of Melbourne in Australia studying Cognitive Psychology / Neuroscience.

Fifteen years ago, a group of German researchers decided to revive the ancient practice of using electricity to effect physiologic change in the human body. Using modern equipment and safety measures, this group reported that they were able to alternately up- and down-regulate neuronal firing patterns in the brain simply by sending a weak electric current between two electrodes placed on the scalp [1].
tDCS electrode placement

Today, this technique is called Transcranial Direct Current Stimulation (tDCS), and over 1,400 scientific articles (calculated by combining the de-duplicated results of a joint PubMed, ISI Web of Science, and Google Scholar search using the keywords "Transcranial Direct Current Stimulation": October 15, 2014) have been published suggesting that passing an arguably innocuous amount of electricity through the brain of a healthy individual can improve his or her memory, learning, attention, inhibitory control, linguistic function, and more. In parallel with these findings (often fueled by the researchers themselves), the public hype surrounding tDCS has grown to impressive proportions: in the last year alone, stories about this device and its ability to improve cognition and behavior have appeared in popular news outlets ranging from the BBC [2] to Wired [3] to The Wall Street Journal [4].