For $5,000, a computer will scan your brain several times while asking you a series of banal yes-or-no questions: Do you live in Texas? Is it 2004? It will also ask you one important question, such as: Did you burn down the shop? Or: Have you cheated on your spouse? Shortly thereafter, it will spit out two numbers. And the creators of the test insist that those two numbers will reveal whether you were lying when you answered the serious question.
This method of lie detection, which relies on brain scans rather than a racing heart, still hasn’t gained widespread support among mainstream neuroscientists or the legal community. But two companies, Cephos Corporation in Tyngsboro, Massachusetts, and No Lie MRI in San Diego, California, are already marketing it to clients, at a time when many experts worry about the technique’s accuracy in detecting real-life lies, as opposed to the fibs conjured up by study volunteers in experiments. And even if the test is reliable, experts question whether the results of this sort of mind reading should be admissible in court.
Steven Laken, president and chief executive of the eight-person start-up Cephos Corporation, says he believes this sort of brain data has a “strong possibility of being introduced as evidence” in the next couple of years. Cephos is shooting for a benchmark of 95 percent accuracy, he says. The company came close to that mark in a 2005 study published in the journal Biological Psychiatry, in which researchers asked 31 subjects to steal a ring or a watch and then lie about which object they took. Cephos detected the lies with 90 percent accuracy.
Laken says he views the test results as “forensic evidence,” and says that “like most pieces of forensic evidence, there is an error rate associated. It will be up to us to inform the jury that there are error rates and the computer makes mistakes. Then it’s up to the judge and jury to decide what value to give it.”
But many neuroscientists and legal scholars say the evidence isn’t ready for the courtroom. Judy Illes, a neuroethicist at the University of British Columbia in Vancouver, calls the companies “premature” and says “I don’t think we have the scientific evidence yet to be selling fMRI for the kind of applications they are supporting. . . . It’s a tall order to be able to sell results.”
Cephos’ test relies on functional magnetic resonance imaging (fMRI), a technology that measures changes in blood flow to different areas of the brain over time. Working neurons require more oxygen and thus more blood, just as working muscles do, so by tracking blood flow, fMRI shows which areas of the brain are most active at any particular moment.
Many fMRI studies have concluded that a few key areas of the brain are more active during deception than truth-telling. These include the anterior cingulate cortex, which is involved in attention and monitoring processes, and the left dorsolateral and right anterior prefrontal cortices, areas of executive function involved in working memory and behavioral control.
Cephos pays particular attention to blood flow in one thousand “voxels,” or three-dimensional pixels that each represent a small volume of brain tissue, within the brain regions associated with deception. (The entire brain can be represented by about ten thousand voxels.) The computer provides two values indicating how many of those thousand voxels are activated when a person admits or denies allegations.
These data are then checked against the activation seen when the person admits or denies innocuous questions, like the name of the state where he or she lives. By asking these neutral, verifiable questions, it’s possible to control against subjects who try to fool the test by lying about true details of their lives, explains Andrew Kozel, a neuroscientist and psychiatrist at the University of Texas Southwestern Medical Center who serves as an unpaid member of Cephos’ scientific advisory board. (Although Kozel accepts no salary from Cephos for his consultation and holds no stock or equity in the company, Cephos has partially funded several of his studies.)
Put simply, Kozel says, certain parts of the brain work harder to deceive, so testers see more oxygenated blood flow to those thousand voxels when subjects lie. “The idea is that people are generally in a baseline state, a truthful state,” he says. “But when we start to conceive and communicate a lie, that requires suppression of truth, the production of a lie and an increased monitoring. And that’s where we believe it requires more brain activation to lie—although exactly why producing a deceptive versus a truthful response results in increased brain work is not known.”
Neuroscientists tend to have doubts about the reliability of fMRI lie detection at this stage, but many agree that the technique is worth studying. Richard Haier, a neuroscientist who studies intelligence using fMRI at the University of California, Irvine, refers to the methods Cephos and No Lie MRI use as “rudimentary,” though he says the concept is valid. Even Cephos’ consultant, Kozel, acknowledges the technology’s uncertainties. “I would absolutely agree that this technology is in the early stages of development and more research is needed,” he says, but “that doesn’t mean it cannot have utility in very specific situations with clear caveats.”
Cephos appears to be moving slowly to commercialize its lie detection services. According to Laken, the chief executive, Cephos has performed fewer than 100 tests since starting earlier this year. He provided the details of two cases. One customer was fighting possible charges for violating a restraining order. The other had lost his driver’s license four years ago for drunk driving and wanted to prove to the court he hadn’t touched alcohol since. The first customer was not charged, and the second case is still pending, according to Laken. (No Lie MRI declined to provide information on the number of tests sold.)
Legal scholars are both interested and alarmed by fMRI’s potential use in detecting liars. Henry Greely, a bioscience ethicist at Stanford University, thinks the technology “has the potential to really screw up people’s lives.” Some companies, he says, are touting the accuracy of their tests based on unrealistic scenarios. Greely mentioned one commonly cited study in which researchers asked 26 right-handed male undergraduate volunteers to lie about holding either the five of clubs or seven of spades. “How similar is that to telling the cops ‘No, I wasn’t there’ during a crime?” he asked. The study to which he refers was published in the journal Human Brain Mapping in 2005.
If prosecutors try to get the results of fMRI lie detection tests admitted into court, they can expect a challenge based on the Constitution’s ban on self-incriminating testimony, according to Kenneth Foster, a bioengineer who is also associated with the University of Pennsylvania’s neuroethics program.
“Legally, is having a brain scan similar to a urine sample or similar to testifying?” Foster asks. If it’s considered testimony, then defendants could challenge it by citing their Fifth Amendment rights. “This kind of dilemma has to be solved soon because it’ll make a big difference in the way courts see the admissibility of this evidence.”
**This story was updated on 11/3/08 at 9:30 pm with additional information from Andrew Kozel that was received after our deadline.**