Tuesday, September 20, 2011

1 hot brain pic > 1k words?

Pretty pictures of brains with some parts lit up: Do they convince us that scientific results are real? Do they convince us more than text or bar graphs? McCabe and Castel ask these questions in their 2008 article "Seeing is Believing".

(The above is not an actual figure. It was mercilessly pirated by yours truly from a paper unrelated to this post.)

Last Wednesday, Dr. Karen Rommelfanger presented McCabe and Castel's paper at the first meeting of a new journal club hosted by the Neuroethics Program at the Emory Center for Ethics. Karen began by talking about how pervasive those pretty pictures of brains have become. Functional magnetic resonance imaging (fMRI) seems to be everywhere (a good introduction to how it works can be found here). Some companies, such as Cephos ("The science behind the truth") and NoLieMRI (who make up for their lack of a snappy slogan with their rhyming name), claim to use fMRI scanners as giant lie detectors, while other companies promise that they can use neuroimaging and related techniques to help with marketing.

To get the conversation going, Karen showed us a video on the use of fMRI for lie detection from Dateline NBC that's embedded in the front page of the Cephos website. The story featured Cephos client Ed Hook, who turned to the company to prove to his wife that he was no longer lying to her about how many times he had cheated on her. We found it hard not to laugh at some of the statements Cephos founder Dr. Steven Laken made to the couple after Hook's session in the scanner. For instance, he told Hook, "Our conclusion is that you were telling the truth ... on having only four affairs." Can fMRI really help this marriage? Then we thought about how we would have reacted to the story if we weren't graduate students, paid humble stipends to spend all day engaged in critical thinking, and we stopped laughing. As noted by the impeccably dressed Comparative Lit student at the journal club, this was one of the more telling lines from the reporter's voice-over: "...this new type of lie detector is considered more scientific than the old polygraph test, because it relies on computers, and not subjective humans, to ask the questions and determine the results."

Of course, computers are just as subjective as the humans who run them, but the results of an MRI scan suggest otherwise. There's something about scrolling through layers of your own brain that implies that massive number crunching has been done, beyond the reach of human bias. I'm not an expert, but I believe it's better to stay skeptical about fMRI lie detection. And I'm not the only one. Mallory Bowers, a Neuroscience grad student in the Ressler lab, pointed out during the journal club that the brain scans run by companies like Cephos are experiments with an n of 1. Others added that the results of these experiments aren't peer reviewed. Instead, they're reviewed by the people who run the companies, who have everything to gain from giving their customers whatever results the customers want. Orion Kiefer, an MD/PhD student in Neuroscience, countered this line of thought with the observation that any number of news shows must be dying to show that fMRI lie detection is just as unreliable as polygraph tests have been shown to be. Surely, if fMRI lie detection didn't work, some hard-nosed reporter or a contrarian member of a skeptics' society would have already put himself or herself on the patient's table and proved it, right? Just to be sure, maybe some Neuroscience Program students should take a field trip to No Lie MRI.

Whatever the quality of the science behind the brain scans, and whatever the results are used for, there remains the question of whether people are influenced by the way the data from these experiments is presented. Maybe companies like Cephos succeed, in part, because of how convincing the results of an fMRI scan seem. The journal club moved on to the paper by McCabe and Castel with these thoughts in mind. To test the idea that the way data from brain scans is presented can affect the credibility of the results, McCabe and Castel asked undergraduates at Colorado State University to read fictitious press-release-like articles reporting the results of brain imaging studies. Lo and behold, they found that an image of a brain scan accompanying the article increased scores on the statement, "The scientific reasoning in the article made sense". For the article "Watching TV is Related to Math Ability," the score crept from around 2.70 to about 2.85 -- a statistically significant difference -- when the article included a brain scan image rather than a bar graph or no image at all (on their questionnaire, 2.5 was halfway between "agree" and "disagree").

McCabe and Castel concluded that brain images "provide a physical basis for abstract cognitive processes" that "[appeal] to people's affinity for reductionistic explanations of cognitive phenomena." I didn't know people had that kind of affinity. I thought people still bristled at the thought of being reduced to the product of a pile of neurons. If I were going to pick any group of people that would be likely to agree that a study is scientifically sound because an article about it includes images of brain scans, I would pick some undergraduates (and I include my former undergraduate self in that blanket generalization). The problem of Psychology departments' dependence on undergrads for their results is well known, though, so let's not blame McCabe and Castel for it. Another issue was that their effect size was small, which the authors recognize, but I'm not sure how they could have improved that. They point to "pre-experimental exposure" to brain scan images, which could have influenced the subjects' responses. Clearly, we need to hurry up and clone us some Neanderthals, and get one of them into the MRI machine so we can test a population that hasn't been exposed to fMRI images.
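For readers who want to see why a statistically significant difference can still count as a "small" effect, here's a minimal sketch using Cohen's d. The means (~2.70 vs. ~2.85) come from the ratings described above; the standard deviation and group sizes are hypothetical placeholders, not values from McCabe and Castel's paper.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean2 - mean1) / pooled_sd

# Hypothetical: 50 readers per condition, SD of 0.6 on the 4-point scale
d = cohens_d(2.70, 2.85, 0.6, 0.6, 50, 50)
print(round(d, 2))  # prints 0.25 -- "small" by Cohen's rough benchmarks
```

With enough subjects, even an effect this modest clears the significance bar, which is exactly why the authors' caveat about effect size matters.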

Maybe more surprising than McCabe and Castel's results is their discussion of the implications. They seem to think that there could be some positives. For instance, increased awareness of cognitive neuroscience might result in more funding. They also observe that many have called for neuroscientists to be more involved with the dissemination of data. This paper is from 2008, and since then the need for neuroscientists to be able to translate their results to the public has only increased. The challenge is coming up with a "two-minute elevator talk" version of the caveats involved with interpreting fMRI data. How do you explain to your grandma why she should be skeptical about press releases claiming that fMRI can help political campaigns to target swing voters? We also agreed that there could be a bright side to all this belief in the power of the brain scan. It was pointed out that brain scans can offer physical evidence of mental disorders, and may help family members accept that their loved ones suffer from a legitimate mental condition, and in this way reduce the stigma.

The first meeting of the journal club was a success. Being graduate students, we all enjoyed the free food (pizza from Domino's—maybe a neuromarketer could tell us why we liked it so much). Many of the people at the first meeting were neuroscience graduate students (Emory NSP represent) and members of the Center for Ethics -- we would love to have more people from outside the field at future meetings. We did have one Comparative Literature grad student, who made the rest of us feel ashamed for not wearing a vest, as well as a librarian from Psychology, and the Director of the Center for Mind Brain and Culture, Dr. McCauley. In addition, there was a giant lazy susan in the middle of the conference room table that I won't mention again, as well as some important people whose names I don't know yet. Feel free to join us next month when medical ethicist and philosopher Dr. John Banja presents his paper "Virtue Essentialism, Prototypes, and the Moral Conservative Opposition to Enhancement Technologies: A Neuroethical Critique." Dr. Banja is an engaging speaker and it promises to be interesting. It would be even more interesting if you're of a morally conservative bent and come ready to debate. Prep for it by asking yourself if you are for or against enhancement technologies. I'll be there, enhanced by espresso (Karen would like me to remind you that the Center for Ethics provides free coffee for students and visitors), unless I forget to put a reminder in my Gmail calendar, or my iPhone dies, leaving me without access to Google Maps, which I depend on to find my way to the Center for Ethics (because of my hippocampal injury).

--David Nicholson
Graduate student, Sober lab
Emory Neuroscience Program

Want to cite this post?
Nicholson, D. (2011). 1 hot brain pic > 1k words? The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2011/09/1-hot-brain-pic-1k-words.html


Laura E. Mariani said...

Nicely done, David! I wanted to comment with a link to that Wired article about Cephos that I brought up during the discussion, where the scanner made a mistake in detecting a lie under controlled conditions: http://www.wired.com/medtech/health/magazine/16-06/mf_neurohacks?currentPage=all (Do a Ctrl+F for "Cephos" to skip to the relevant section.) It's from 2008, though, so they may have made significant improvements since then.

Also, the snappy dresser from Comp Lit is Michael Hessel-Mial.

David Nicholson said...

Thanks, Laura--I looked for the article (partly because I wanted to credit you as "neuro grad student in the Caspary lab and karaoke superstar Laura Mariani") but couldn't find it. It doesn't seem to be on Cephos' site, although they have posted Nature and Science news articles that don't exactly flatter them. Good point about improvements--you'd think they'd get more accurate after doing more scans. Then again, I have to wonder about those controlled conditions. If you "lie" in the scanner about something you "stole" because the experimenter told you to, does your brain activate the same way as when you lie about something you did or didn't do in real life?

The Neuroethics Program @ Emory University said...

Great post, David. We look forward to seeing everyone at Dr. Banja's talk on October 19, 2011 from 1-2pm in the Center for Ethics.

If you guys see David wandering around on that day, please send him to the following address.

1531 Dickey Drive
Atlanta, GA 30322

Cate Powell said...

I'm not a neuroethicist; my background is in religion and international affairs, and I found this article very accessible. I especially liked your comments on the fallibility of computers: "computers are just as subjective as the humans that run them." I would love to see a follow-up report on your field trip to No Lie MRI!