
Likin’ Laken, if he ain’t fakin’

Last Friday, Emory held its third annual Neuroethics symposium,
focusing this year on the use of fMRI for lie detection and the acceptance of fMRI data as evidence in the courtroom. The
symposium featured talks from Stanford law professor Hank Greely, University of
Pennsylvania psychiatrist Daniel Langleben, and the CEO of Cephos, Dr. Steven Laken. I wasn’t surprised
to learn we’d invited the first two speakers: Greely wrote the seminal articles on law and neuroscience, and Langleben
pioneered fMRI studies of lying. It did surprise me to see Laken on the list.
His company Cephos is one of the few that have successfully marketed fMRI-based
lie detection for the commercial sector. I kind of thought—maybe hoped—that an audience composed largely
of neuroscientists would eat him alive.
Dr. Steven Laken, CEO of Cephos

I’d read about Cephos when we’d discussed fMRI and lie detection at the Neuroethics journal club, and what I’d read had made me skeptical. Laken’s
credentials didn’t impress me; he got his Ph.D. from Hopkins in cellular and
molecular medicine, worked in industry for a few years, and then started the
company. It seemed to me like you would want someone with a Ph.D. in
neuroscience or physics to figure out how to detect lies with fMRI.
His semi-slick bio blurb on the Cephos website, along with what struck me as
overstated results on the company’s profile, did little to sway my admittedly
biased opinion.

Something tells me Laken has faced skepticism before. The day of
the symposium, the Neuroethics Program held a lunch for students to meet with
the speakers. We went around the room introducing ourselves, and when it was
Laken’s turn, he described himself as “the entertainment”. Later in the lunch he launched into a sermonette, one I’m guessing he’s delivered before. Laken told us that before
starting Cephos he was “looking for a problem science hadn’t solved”. Lie
detection struck him as important: juries are bad at detecting lies, and
polygraphs don’t do that great a job either (according to a 2003 National Academies Report). He added that 1-5% of people on death row are wrongly
convicted, as the DNA evidence shows. “I can’t let that happen,” he said.

I don’t know if Laken actually believes what he says, but I find it hard to believe
that a CEO who spent his time in grad school getting his name into major media
outlets is purely motivated by the need for better lie detection. Don’t get me
wrong. Of course I think it's great that he developed a blood-screening test for colorectal
cancer, the test that got his name "on the nightly news of all four major television
networks” (to quote his bio on the Cephos page). I’m not so naïve that I think
it’s a sin against academia to seek a little publicity for all the hard work one puts into a discovery. Some of the best scientists are also great publicists. And as far as his credentials go, I also realize there are plenty of people that jump
from one field to another after getting their Ph.D.

What really makes me suspicious is that Laken sells
himself so well. Then again, maybe that’s just his job. As he asked the audience during his
afternoon talk, which scientist would you, as a lawyer, pick: one that knows
fMRI inside and out but puts people to sleep when he or she talks, or one that
can communicate effectively with a jury? He has a point. I couldn’t help
comparing his polished yet conversational talk to Langleben’s, which I
struggled to follow at times. However, their talks did have one thing in common:
both made me think fMRI-based lie detection works.

I’ll explain more below, but first I want to talk about why Dr. Laken
might not be such a bad guy after all. During his talk Laken said, “I consider
myself first and foremost a scientist.” He claimed that “being in a company is
not incompatible with good scientific work.” Those statements activated my
knee-jerk ivory-tower academic liberal reflex—I tend to feel that doing good
scientific work is incompatible with being in a profit-driven corporation—but
Laken had me listening. He went on to say “we’ve published everything [we’ve
done]” and “we want to understand what’s going on in the brain [when people
lie]”. He added that Cephos is “open access … send me a drive, you have our
data.” Wait. That sounds like good scientific practice. Does this mean I have to stop making cynical comments about this guy? Then he pointed
out that “replication is the key to science, but it’s the part that we [as a
scientific community] don’t pay attention to.” Dammit. He’s right. Because Cephos is a company that performs the same service over and
over, it does in effect replicate its results. I guess I have to
give Laken credit for doing what all scientists should: sharing data and replicating results.
It’s not clear to me what anyone else could do with the data,
if the experiments are not well-designed, but I feel better knowing I could
look at their data if I had the expertise.

I admit I don’t—I study songbirds. Actually, I study motor
learning, using songbirds as a model system, but most of what I know about fMRI
is what I read from blogs. And what I read makes me think that fMRI
lie detection is not ready for prime time. Take, for example, two recent studies featured on the blog Neuroskeptic. Both increased their sample size, or
as scientists refer to it, their “n”,
far beyond the norm for an fMRI study, and found activations that previous work
has missed. Recall that most fMRI
studies average BOLD signal—the blood oxygenation levels that we assume are a
proxy for brain activity—over several subjects performing some experimental
task. By subtracting the BOLD signal during a control task from this activity, scientists
find brain areas that the experimental task “activates”. The group in the first study increased their n by considering scans from ~1300 subjects, while the second study did so by scanning
three subjects 500 times. If you
look in the comments on the Neuroskeptic blog post, you’ll find experts in the neuroimaging
field arguing about what, if anything, these studies mean. You’ll also find
random crazy people posting nonsense; such is the nature of blogs. My point
stands: there is still a lot of debate about fMRI in the scientific community.
That means it fails to meet one of the criteria set forth in the Daubert standard, the rule of evidence which determines whether testimony from
scientific experts—i.e. lie detection companies—is admissible in court (at
least in the states which apply the Daubert standard).
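To make the subtraction logic concrete, here's a toy sketch in Python. This is not how real neuroimaging pipelines work (actual studies use packages like SPM or FSL with proper statistical modeling, motion correction, and multiple-comparisons control); every number, shape, and threshold below is invented purely for illustration:

```python
import numpy as np

# Toy illustration of subtraction analysis: average BOLD signal across
# subjects for an experimental task and a control task, subtract, and
# flag voxels where the difference exceeds a threshold. All values here
# are made up; real analyses are far more involved.

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 1000

# Simulated per-subject, per-voxel BOLD signal for each task
control = rng.normal(loc=0.0, scale=1.0, size=(n_subjects, n_voxels))
experimental = control + rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
experimental[:, :50] += 1.5  # pretend the task "activates" the first 50 voxels

# Average over subjects, then subtract control from experimental
contrast = experimental.mean(axis=0) - control.mean(axis=0)

# Call a voxel "activated" if the contrast clears an arbitrary threshold
activated = np.flatnonzero(contrast > 1.0)
print(f"{activated.size} voxels flagged as activated")
```

Even in this cartoon version you can see where the debates come from: the answer depends entirely on the threshold you pick and on how much averaging you do, which is exactly what the large-n studies above were probing.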

In spite of those issues, Laken and Langleben are detecting something. Again, as Langleben
explained, most fMRI studies average BOLD signal over several subjects.
However, Langleben has tried to detect lying in single
subjects. He showed us unpublished experiments, in which he provided trained
experts—two of the post-docs that work in his lab—with brain scans from
single subjects. Each subject was scanned twice: once while lying and once while telling the truth.
The post-docs had to classify the brain scans by whether the subjects
were lying.  In twenty-four out of twenty-eight cases, the
post-docs were able to correctly classify lying and honest brains. These
numbers are strikingly similar to those from Cephos, who find that their
algorithm correctly identifies lying twenty-eight out of thirty-one times.

That’s an 85%-90% hit rate!
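For the curious, here's the arithmetic behind that range, plus a rough confidence interval. The interval is my own back-of-the-envelope addition (a normal approximation), not a figure from either talk:

```python
import math

# Check the hit rates quoted above, and estimate how much uncertainty
# a sample of 28 leaves via a rough 95% confidence interval.

langleben = 24 / 28   # post-docs' classification accuracy
cephos = 28 / 31      # Cephos's reported algorithmic accuracy

print(f"Langleben: {langleben:.1%}")  # 85.7%
print(f"Cephos:    {cephos:.1%}")     # 90.3%

# Normal-approximation 95% CI for the Langleben figure
n, p = 28, langleben
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Langleben 95% CI: roughly {p - half_width:.0%} to {p + half_width:.0%}")
```

With samples this small, the plausible range runs from the low seventies to nearly perfect, which is worth keeping in mind before anyone freaks out.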

Shouldn’t we be freaking out? We can read minds.

Maybe these results aren’t so surprising. We can already read minds, as Hank Greely pointed out in his talk, as long as you define “reading
minds” to mean “correlating BOLD signal with previously seen BOLD signals in other subjects or the same subject performing similar tasks.” We don’t have to understand how the brain actually works, as
long as we can predict what its BOLD signal looks like during any given task.

To sum up: if anyone develops an fMRI machine that doesn’t take up an entire
room and doesn’t sound like a dishwasher powered by the souls of unborn babies, then I’ll start to worry about who can get their hands on this technology. In the meantime, I think the rest of the neuroimaging community should take Laken up on his offer to share data, and help him figure out what
exactly it is that he, Langleben, and others that use fMRI to study lying are detecting.

Want to cite this post?

Nicholson, D. (2012). Likin’ Laken, if he ain’t fakin’. The Neuroethics Blog. Retrieved on , from

  1. Thanks for the post. One thing I find disturbing about the list of invitees is that there is no neuroradiologist who is considered an fMRI expert by his/her peers. I'm a neuroradiologist at Emory and would like to have the opinion of such an expert. A person who knows the physics, applicability, and imaging limitations of fMRI is essential to such a discussion.


  2. Hi Falgun,

    Thanks for your comment. The symposium concluded with a panel discussion and it would've been great to have some of your questions discussed. We were also grateful to have Dr. Carolyn Meltzer on the panel to voice her concerns as an experienced neuroradiologist. Too bad you couldn't attend, but take heart, we will have links on the blog to the videos of the entire symposium to share with you soon. Thanks for reading and stay tuned!


  3. I agree with David's specific concerns about Dr. Laken. Those aspects of his career and approach would have made me approach this very skeptically.

    But this part:

    " He claimed that “being in a company is not incompatible with good scientific work.” Those statements activated my knee-jerk ivory-tower academic liberal reflex—I tend to feel that doing good scientific work is incompatible with being in a profit-driven corporation—but Laken had me listening."

    made me realize how different the worlds of scientific and engineering research are. I've worked with neuroscientists at Emory, but either we never discussed this, or they've been infected after working too closely with Georgia Tech. I never sensed this "knee-jerk reflex."

    I suppose that, even in neuroengineering, we're always looking for practical and commercial applications of our work, and our work means more to us when it results in a practical application. Especially if we can benefit from that. 🙂


  4. Hey BubbaRich,

    I'm pretty sure my tongue-in-cheek comment about my academic liberal reflex doesn't represent the views of everyone who considers themselves more of a scientist than an engineer. And, like I said right after that, I actually think Laken's doing a lot of things right, like sharing data and reproducing results. If anything, I'd say he's much less profit-minded than, say, the companies that run clinical trials for big pharma (which is the kind of situation that gets me worried about science and money conflicting).

    Hope I didn't give you the wrong impression. Of course I want my research to have practical applications (and maybe even commercial–don't tell anyone I said that though, or they might take away my Democracy Now tote bag).


  5. Hi Falgun,

    I tried to respond earlier but for some reason it didn't show up. Thanks for taking the time to comment. Let me echo what Karen said: Dr. Meltzer was present and did take part in the panel discussion. As you'll see if you check out the video (once it's posted), Dr. Meltzer responded to a question that Dr. Mike Crutcher asked the panel about how confident they were in the results Laken, Langleben, and others are reporting. She replied that the main question is whether we know what the "gold standard" is, i.e., how do we define truth? You can easily think of plenty of questions that the person running the fMRI lie detector might want to ask which would have a less-than-honest answer. The first (admittedly lame) example that comes to mind is Clinton's statement that "there's nothing going on between" him and Monica Lewinsky during his impeachment hearings. (There was technically nothing going on between them at the time he was questioned. Ooh, they should have used fMRI during the impeachment!) In contrast, the subjects that Laken scans affirm and then deny some fact while they're in the scanner (for example, "I took the ring" or "I cheated on my husband") and then Cephos decides which statement was a lie based on brain activation. So Meltzer felt that, whether fMRI lie detection currently works or not, there are plenty of cases where interpretation of fMRI evidence will be muddled at best. Her answer to that question might not be as technical-minded as you or I would want, but she was definitely there, and she did talk about fMRI in general, as well. Check out the video if you get a chance.

