Likin' Laken, if he ain't fakin'


Last Friday, Emory held its third annual Neuroethics symposium, focusing this year on the use of fMRI for lie detection and the acceptance of fMRI data as evidence in the courtroom. The symposium featured talks from Stanford law professor Hank Greely, University of Pennsylvania psychiatrist Daniel Langleben, and the CEO of Cephos, Dr. Steven Laken. I wasn’t surprised to learn we’d invited the first two speakers: Greely wrote the seminal articles on law and neuroscience, and Langleben pioneered fMRI studies of lying. It did surprise me to see Laken on the list. His company Cephos is one of the few that have successfully marketed fMRI-based lie detection for the commercial sector. I kind of thought—maybe hoped—that an audience composed largely of neuroscientists would eat him alive.

[Image: Dr. Steven Laken, CEO of Cephos]
I'd read about Cephos when we'd discussed fMRI and lie detection at the Neuroethics journal club, and what I'd read had made me skeptical. Laken's credentials didn't impress me; he got his Ph.D. from Hopkins in cellular and molecular medicine, worked in industry for a few years, and then started the company. It seemed to me like you would want someone with a Ph.D. in neuroscience or physics to figure out how to detect lies with fMRI. His semi-slick bio blurb on the Cephos website, along with what struck me as overstated results on the company’s profile, did little to sway my admittedly biased opinion.

Something tells me Laken has faced skepticism before. The day of the symposium, the Neuroethics Program held a lunch for students to meet with the speakers. We went around the room introducing ourselves, and when it was Laken’s turn, he described himself as "the entertainment". Later in the lunch he launched into a sermonette, one I'm guessing he's delivered before. Laken told us that before starting Cephos he was “looking for a problem science hadn’t solved”. Lie detection struck him as important: juries are bad at detecting lies, and polygraphs don’t do that great a job either (according to a 2003 National Academies report). He added that 1-5% of people on death row are wrongly convicted, as DNA exonerations have shown. “I can’t let that happen,” he said.

I don't know if Laken actually believes what he says, but I find it hard to believe that a CEO who spent his time in grad school getting his name into major media outlets is purely motivated by the need for better lie detection. Don’t get me wrong. Of course, I think it’s great that he developed a blood test to screen for colorectal cancer, the test that got his name “on the nightly news of all four major television networks” (to quote his bio on the Cephos page). I’m not so naïve that I think it’s a sin against academia to seek a little publicity for all the hard work one puts into a discovery. Some of the best scientists are also great publicists. And as far as his credentials go, I also realize there are plenty of people who jump from one field to another after getting their Ph.D.

What really makes me suspicious is that Laken sells himself so well. Then again, maybe that’s just his job. As he asked the audience during his afternoon talk, which scientist would you, as a lawyer, pick: one that knows fMRI inside and out but puts people to sleep when he or she talks, or one that can communicate effectively with a jury? He has a point. I couldn’t help comparing his polished yet conversational talk to Langleben’s, which I struggled to follow at times. However, their talks did have one thing in common: both made me think fMRI-based lie detection works.

I’ll explain more below, but first I want to talk about why Dr. Laken might not be such a bad guy after all. During his talk Laken said, “I consider myself first and foremost a scientist.” He claimed that “being in a company is not incompatible with good scientific work.” Those statements activated my knee-jerk ivory-tower academic liberal reflex—I tend to feel that doing good scientific work is incompatible with being in a profit-driven corporation—but Laken had me listening. He went on to say “we’ve published everything [we’ve done]” and “we want to understand what’s going on in the brain [when people lie]”. He added that Cephos is “open access ... send me a drive, you have our data.” Wait. That sounds like sound scientific practice. Does this mean I have to stop making cynical comments about this guy? Then he pointed out that “replication is the key to science, but it’s the part that we [as a scientific community] don’t pay attention to.” Dammit. He’s right. As a by-product of being a company that carries out the same service over and over again, Cephos does in effect replicate its results. I guess I have to give Laken credit for doing what all scientists should: sharing data and replicating results. It’s not clear to me what anyone else could do with the data if the experiments aren’t well designed, but I feel better knowing I could look at it if I had the expertise.

I admit I don’t—I study songbirds. Actually, I study motor learning, using songbirds as a model system, and most of what I know about fMRI is what I read on blogs. And what I read makes me think that fMRI lie detection is not ready for prime time. Take, for example, two recent studies featured on the blog Neuroskeptic. Both increased their sample size, or as scientists refer to it, their “n”, far beyond the norm for an fMRI study, and found activations that previous work had missed. Recall that most fMRI studies average BOLD signal—the blood oxygenation levels that we assume are a proxy for brain activity—over several subjects performing some experimental task. By subtracting the BOLD signal recorded during a control task from the signal recorded during the experimental task, scientists find brain areas that the experimental task "activates". The group in the first study increased their n by considering scans from ~1300 subjects, while the second study did so by scanning three subjects 500 times. If you look in the comments on the Neuroskeptic blog post, you’ll find experts in the neuroimaging field arguing about what, if anything, these studies mean. You’ll also find random crazy people posting nonsense; such is the nature of blogs. My point stands: there is still a lot of debate about fMRI in the scientific community. That means it fails to meet one of the criteria set forth in the Daubert standard (general acceptance in the relevant scientific community), the rule of evidence that determines whether testimony from scientific experts—in this case, experts from lie detection companies—is admissible in court (at least in the states that apply the Daubert standard).
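To make that subtraction logic concrete, here's a minimal sketch in Python of what a group-level "subtraction" (contrast) analysis boils down to. The toy numbers, array shapes, and threshold are my own illustrative assumptions, not anything taken from the studies above:

```python
import numpy as np
from scipy import stats

# Toy data: BOLD signal for 20 subjects x 1000 voxels, recorded once during
# an experimental task (e.g., lying) and once during a control task.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 1000
bold_task = rng.normal(size=(n_subjects, n_voxels))
bold_control = rng.normal(size=(n_subjects, n_voxels))
bold_task[:, :10] += 0.8  # pretend 10 voxels really are more active during the task

# "Subtraction" analysis: compare task vs. control signal at each voxel,
# averaging over subjects, and keep voxels where the difference is reliable.
t_vals, p_vals = stats.ttest_rel(bold_task, bold_control, axis=0)
activated = np.where(p_vals < 0.001)[0]  # crude threshold, no multiple-comparisons correction
print("voxels 'activated' by the task:", activated)
```

Real analyses add motion correction, spatial normalization, hemodynamic modeling, and corrections for testing thousands of voxels at once, which is part of why increasing n can change what you find.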

In spite of those issues, Laken and Langleben are detecting something. Again, as Langleben explained, most fMRI studies average BOLD signal over several subjects. However, Langleben has tried to detect lying in single subjects. He showed us unpublished experiments in which he provided trained experts—two of the post-docs who work in his lab—with brain scans from single subjects. Each subject was scanned twice: once while lying and once while telling the truth. The post-docs had to classify the brain scans by whether the subjects were lying. In twenty-four out of twenty-eight cases, the post-docs were able to correctly classify lying and honest brains. These numbers are strikingly similar to those from Cephos, whose algorithm correctly identifies lying in twenty-eight out of thirty-one cases.
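Neither speaker walked us through how a single scan actually gets labeled, so purely as an illustration of what "classifying single subjects as lying or truthful" could look like algorithmically, here's a toy scikit-learn sketch. The simulated data, the linear SVM, and the cross-validation scheme are my assumptions, not Cephos's or Langleben's actual pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Toy stand-in for the real problem: one feature vector per scan
# (e.g., a flattened activation map), labeled 1 = lying, 0 = truthful.
rng = np.random.default_rng(1)
n_scans, n_features = 56, 500            # e.g., 28 subjects x 2 scans; sizes are made up
X = rng.normal(size=(n_scans, n_features))
y = np.tile([0, 1], n_scans // 2)        # alternating truth/lie labels
X[y == 1, :20] += 0.5                    # pretend lying shifts a handful of features

# Train a linear classifier and estimate its hit rate with cross-validation;
# the resulting accuracy is the analogue of "24 out of 28" or "28 out of 31".
clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, X, y, cv=7)
print(f"mean cross-validated accuracy: {scores.mean():.0%}")
```

A high accuracy on a toy dataset like this says nothing about real scans, of course; it only shows the mechanics of turning "brain data plus labels" into a hit rate.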

That’s an 85%-90% hit rate!

Shouldn’t we be freaking out? We can read minds.

Maybe these results aren’t so surprising. We can already read minds, as Hank Greely pointed out in his talk, as long as you define “reading minds” to mean “correlating BOLD signal with previously seen BOLD signals in other subjects or the same subject performing similar tasks.” We don’t have to understand how the brain actually works, as long as we can predict what its BOLD signal looks like during any given task.

To sum up: if anyone develops an fMRI machine that doesn’t take up an entire room and doesn’t sound like a dishwasher powered by the souls of unborn babies, then I'll start to worry about who can get their hands on this technology. In the meantime, I think the rest of the neuroimaging community should take Laken up on his offer to share data, and help him figure out what exactly it is that he, Langleben, and others that use fMRI to study lying are detecting.




Want to cite this post?
Nicholson, D. (2012). Likin' Laken, if he ain't fakin'. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/06/likin-laken-if-he-aint-fakin.html
