fMRI and the Hype Cycle: Scientists Lead the Way, Even When Wrong

By Nathan Ahlgrim

Image courtesy of the NIH on Flickr
I trust science. As a one-time scientist and current teacher, I’m a happy man if my students leave my class with a critical eye and trust in the scientific method. Getting there is a struggle, which should come as no surprise to anyone who has ever been at their wit’s end trying to get their mom, their neighbor, or their Senator to just understand. Understand the science, and follow its recommendations! Writing this at the beginning of 2021, I cannot help but draw the obvious connection with the disastrous politicization of safety protocols around COVID-19. 

The trouble comes when, after spending hours of class time demonstrating the utility of the scientific method, I bring up the Replication Crisis. First described in the early 2010s, the Replication Crisis was a reckoning in Psychology. If a supposedly well-constructed psychology study can “prove” the existence of ESP (Bem, 2011), why should psychology research be trusted? 

The conflict between the scientific method and the way science is practiced in the real world can be as disorienting as the nitty-gritty science itself. “Expert opinions” change, however slowly, as new and more replicable evidence arises. Public opinion catches up more slowly still. It is in that time-lag between the scientific community and the public adjusting their beliefs that I see scientist-snobbery flourish. Unfortunately, I have also seen such snobbery draw a line in the sand between scientists and “everyone else,” decreasing public trust in science even further. 

I am, of course, glossing over the entirely different category of political- and belief-based objections to scientific consensus, such as the consensus that climate change is driven by human activity or that the alignment of the planets doesn’t actually sway your life (my apologies to all you Scorpios out there). That is a wholly different conversation for a different time. 

The intersection of these two challenges, public attitudes toward science and the replicability of research, underscores the benefit of a little scientific humility. 

#

When functional magnetic resonance imaging (fMRI) technology burst onto the scene in 1991, the resulting hype promised foolproof lie-detection, analysis of mental capacity, and other keys to human psychology. Its relevance to the courtroom led many in the neuroscience and legal communities to worry that judges and jurors would be “dazzled by neurobabble” (Weisberg et al., 2008). In effect, brain scans would be such convincing evidence of a defendant’s mental state that the average person couldn’t help but agree with whatever a picture of the brain told them to think. Without the proper scientific training, a brain scan with any visible abnormality would be all the defense needed for a verdict of Not Guilty by Reason of Insanity (NGRI), or so was the fear. Although the scan was only one part of the defense’s evidence, John Hinckley Jr. was found NGRI for his attempted assassination of President Reagan in part because of a pathological brain scan. If that could happen with the relatively rudimentary technology of a CT (computerized tomography) scan, what new doors would open with the introduction of MRI evidence? Innocent people could be convicted, murderers could walk free, all because a brain scan can be more persuasive than any other form of evidence. 

That was the fear, and early research backed it up (e.g., McCabe & Castel, 2008; Gurley & Marcus, 2008). Then, like so much headline-grabbing science before it, repeated failures to replicate the neurodazzle of neurobabble threw the entire conversation into question (e.g., Roskies et al., 2013; Hook & Farah, 2013). Was our fear of neuroimage supremacy much ado about nothing? The rise and fall of neurobabble hysteria offers neuroscientists an often overlooked lesson in public perception: it can change. 

#

Too often, practicing scientists view themselves as wholly separate from the non-scientific community. The lay public, the normies, whatever they are called, are often assumed to think fundamentally differently than those who live and breathe the scientific method. That belief rests on a number of assumptions, and one of the most insidious is that scientists are steadfast disciples of the scientific method. 

Follow the data, and ye shall be tenured. 

Image courtesy of Wikimedia Commons

A handful of replication failures would surely still exist if such ardent adherents dominated the field, but they would likely never grow into a crisis. Sadly, science is practiced by scientists, i.e. humans. And those humans need to lock down funding! Everything from confirmation bias (which may be partly subconscious) to reporting biases and self-selection bias plagues the industry and contributes to hype cycles within research communities. 

fMRI research may be distinctively prone to unsupported hype because the apparent elegance of the results is conjured by ever-more complex analysis pathways. 

Brain glow red! Brain active! 

But the elegance is just that: apparent. As straightforward as fMRI images look to the lay public (or, in the spirit of full disclosure, to neuroscientists not versed in neuroimaging), the statistical wizardry that transmutes magnetic fluctuations into pretty pictures can lead to some galling false positives. Protecting against these false positives can be as deceptively simple as honoring the lessons of Statistics 101: you have to correct for multiple comparisons, especially when the average human fMRI scan spits out more than 100,000 voxels. Exactly how to do that is where it gets complicated. Call it a failure of the team doing the research or a failure of the peer-review process, but many labs in top-tier universities might fail that Statistics 101 assignment. If fMRI scans can breathe new life into a dead salmon using the statistical methods found in many hard-hitting fMRI papers (Bennett et al., 2009), something is undeniably fishy in the neuroimaging world. 
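The dead-salmon problem really is just Statistics 101 in action: test 100,000 voxels at p < 0.05 each, and roughly 5,000 of them will “light up” by chance alone. A minimal, purely illustrative sketch (simulated noise standing in for real fMRI data, with Bonferroni as the simplest of many possible corrections) makes the point:

```python
import random

random.seed(0)

# Hypothetical illustration: p-values for 100,000 voxels in which NOTHING is
# actually happening -- under the null hypothesis, p-values are uniform on [0, 1].
n_voxels = 100_000
p_values = [random.random() for _ in range(n_voxels)]

alpha = 0.05

# Uncorrected: test every voxel at alpha = 0.05.
# With pure noise, we expect ~5% of voxels (about 5,000) to pass.
uncorrected_hits = sum(p < alpha for p in p_values)

# Bonferroni correction: divide alpha by the number of comparisons,
# so the chance of even ONE false positive across all voxels stays near 5%.
bonferroni_hits = sum(p < alpha / n_voxels for p in p_values)

print(f"False positives without correction: {uncorrected_hits}")  # around 5,000
print(f"False positives with Bonferroni:    {bonferroni_hits}")
```

Bonferroni is famously conservative for fMRI, which is why the field developed cluster-based and false-discovery-rate alternatives, but the sketch shows why skipping correction altogether turns noise into “activation.”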

Voodoo fish were not the last of fMRI’s troubles. Subsequent publications demonstrated that research teams regularly failed to report their analysis pathways in their publications, making an exact replication all but impossible (Carp, 2012). 

What followed was not a repudiation of fMRI as a technique, but the scientific community certainly sobered its predictions and promises with the technology. We now know that fMRI is a tool, just not a magical one. It demonstrates correlations between brain activity and behavior on a group level, but its ability to identify a single falsehood in one person may never meet a tolerable level of accuracy (Farah et al., 2014).

Gartner's Hype Cycle
Image courtesy of Wikimedia Commons
Have neuroscientists reached the Plateau of Productivity in Gartner’s hype cycle? It would be tempting to say the undead salmon plunged us into the Trough of Disillusionment, but publications using fMRI have shown no signs of slowing since 1991. Regardless of how good a fit the hype cycle is for fMRI research, the aftereffects of a hype cycle are palpable: many broken promises and many unreplicable studies. 

Scientists have since seen the data, adjusted their conclusions, and moved on. As they should. Those same scientists have read research about the lay public blindly following whatever pretty brain picture is put in front of them and wring their hands in worry or roll their eyes in scorn. It is all too easy to forget what we in our ivory tower have blindly followed in the pursuit of knowledge (and publications).  

The lay public takes shortcuts. They have to—how else could they make a judgment about a brain scan without dropping six years on a PhD? 

Of course, scientists also take shortcuts. We’re just better at hiding them. What neuroscientist has four years to drop earning a degree in advanced analytics when they just want to analyze some fMRI data? It’s an understandable shortcut, but that’s what opens the door for inaccurate plug-and-play analysis packages to throw the conclusions of hundreds of papers into question.  

All this means is that the line between practicing neuroscientists and the lay public is a false distinction. If scientists can be led astray by hype and then walk back toward healthy skepticism, is it so outlandish to acknowledge that non-scientists could do the same? 

I am not claiming that the original papers warning of the “seductive allure” of neurobabble were free of methodological flaws. Many of the subsequent replications addressed these flaws by better controlling the essential factor of neurobabble. Are brain images the key? Can tables, graphs, or even a textbox spouting neuroscience boost believability? In general, more recent research suggests that neuroimages hold no unique power over people’s beliefs (Hook & Farah, 2013), although neuroscience as a field may be uniquely influential (Fernandez-Duque et al., 2015). 

Far from outing a complete replication failure, I believe more recent research is also capturing the public’s progression through something like a hype cycle. Public perception was swayed by enthusiastic scientists who got carried away by their glimmering new toy (as well as some less-than-scrupulous ones; Rusconi & Mitchener-Nissen, 2013). It took some time after scientists calmed down for public perception to follow suit, but it would be unfair to expect non-neuroscientists to change their beliefs in the same timeframe as the neuroscientists who gossip over the latest hard-hitting Nature paper like T. Swift just dropped another album. 

Scientists must recognize that public perception of the more basic sciences can shift just as it does for politicized topics like access to contraception, health care, and climate change. Not only will this recognition force some humility into the scientists who think themselves immune from hype, but it will encourage a more equitable conversation between those within and outside the ivory tower. Any shift to bring scientists closer to alignment with non-scientists has the added benefit of making scientists seem like just another person (which they are), and hopefully, an informed person who deserves to be listened to. 

After all, if I trust science and expect others to do the same, I should lead by example and take the humbling step of tossing my outdated beliefs into the bin.  


References 

  1. Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407–425. https://doi.org/10.1037/a0021524 
  2. Bennett, C., Miller, M., & Wolford, G. (2009). Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: an argument for multiple comparisons correction. NeuroImage, 47, S125. https://doi.org/10.1016/s1053-8119(09)71202-9 
  3. Carp J. (2012). On the plurality of (methodological) worlds: estimating the analytic flexibility of FMRI experiments. Frontiers in Neuroscience, 6, 149. https://doi.org/10.3389/fnins.2012.00149 
  4. Gurley, J. R., & Marcus, D. K. (2008). The effects of neuroimaging and brain injury on insanity defenses. Behavioral Sciences & the Law, 26(1), 85–97. https://doi.org/10.1002/bsl.797 
  5. Farah, M. J., & Hook, C. J. (2013). The Seductive Allure of “Seductive Allure.” Perspectives on Psychological Science, 8(1), 88–90. https://doi.org/10.1177/1745691612469035 
  6. Farah, M. J., Hutchinson, J. B., Phelps, E. A., & Wagner, A. D. (2014). Functional MRI-based lie detection: scientific and societal challenges. Nature Reviews. Neuroscience, 15(2), 123–131. https://doi.org/10.1038/nrn3665 
  7. Fernandez-Duque, D., Evans, J., Christian, C., & Hodges, S. D. (2015). Superfluous neuroscience information makes explanations of psychological phenomena more appealing. Journal of Cognitive Neuroscience, 27(5), 926–944. https://doi.org/10.1162/jocn_a_00750 
  8. Hook, C. J., & Farah, M. J. (2013). Look again: effects of brain images and mind-brain dualism on lay evaluations of research. Journal of Cognitive Neuroscience, 25(9), 1397–1405. https://doi.org/10.1162/jocn_a_00407 
  9. McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: the effect of brain images on judgments of scientific reasoning. Cognition, 107(1), 343–352. https://doi.org/10.1016/j.cognition.2007.07.017 
  10. Roskies, A. L., Schweitzer, N. J., & Saks, M. J. (2013). Neuroimages in court: less biasing than feared. Trends in Cognitive Sciences, 17(3), 99–101. https://doi.org/10.1016/j.tics.2013.01.008 
  11. Rusconi, E., & Mitchener-Nissen, T. (2013). Prospects of functional magnetic resonance imaging as lie detector. Frontiers in Human Neuroscience, 7, 594. https://doi.org/10.3389/fnhum.2013.00594 
  12. Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470–477. https://doi.org/10.1162/jocn.2008.20040 
______________

Nathan Ahlgrim is a Psychology Instructor at a community college in North Carolina. He was introduced to the field of Neuroethics while a graduate student at Emory University. While there, he served as part of The Neuroethics Blog editorial team under the mentorship of Dr. Karen Rommelfanger. He continues to write, teach, and talk about neuroscience and neuroethics to his students and anyone else who cares to listen.



Want to cite this post?

Ahlgrim, N. (2021). fMRI and the Hype Cycle: Scientists Lead the Way, Even When Wrong. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2021/03/fmri-and-hype-cycle-scientists-lead-way.html
