[Illustration: a 3D rendering of a lie detector, its metallic stylus tracing a red line across a paper chart.]

Letter to America: The Black Box that Wouldn’t Die

Wendy M. Grossman

It was in 1985 that I first read—in the LA Skeptics’ newsletter, LASER—that polygraphs were highly unreliable as a method of telling whether someone was lying. We popularly call them “lie detectors,” but even proponents admit this is not an accurate description. Polygraphs measure physical parameters such as heart and breathing rates, blood pressure, and perspiration, and the operator interprets these measurements to decide whether the person is telling the truth. Skeptical Inquirer has revisited this topic numerous times: in 1990, 2005, 2016, and 2017.

In thirty-five years, the science has not improved. Numerous studies find that if polygraphs work in the sense of identifying whether someone is telling the truth, it is because the subject believes they do. In a 2018 posting, the ACLU’s Jay Stanley called them enablers of racial bias and noted their continued use in hiring despite a 2003 National Academy of Sciences expert report that called the quality of the evidence for their validity “relatively low.” In a recent talk, Marion Oswald, Vice-Chancellor’s Senior Fellow in Law at Northumbria University, called polygraphs a “Mary Poppins thermometer.” Oswald has written several papers advocating against their further adoption.

In the LASER article, writer Al Seckel approved of the reluctance of other countries, such as the United Kingdom, to follow the United States’ lead in embracing polygraphs in government hiring and various other settings. Today, he would be disappointed. Without attracting much attention, the Offender Management Act 2007 created an option to use polygraph tests to monitor serious sex offenders on parole in England and Wales. In 2009, Ewout H. Meijer, Bruno Verschuere, Harald L. G. J. Merckelbach, and Geert Crombez reviewed the validity and utility of post-conviction sex offender testing and warned that the evidence for it is “weak, if not absent.”

Undeterred, the government went further: since 2014, polygraph tests can be made mandatory as part of an offender’s release conditions, and new uses are coming. In 2019, the Ministry of Justice launched a three-year pilot of mandatory polygraph tests for those convicted of domestic abuse and released under license (that is, with conditions attached). Now the government wants to add them in terrorist cases, a move polygraph expert Thomas Ormerod, a professor at the University of Sussex, told the Guardian, “… will, at best, be a waste of police resources.”

Under British rules, confessions made during a polygraph test cannot be used against defendants in criminal proceedings. This protection is weaker than it sounds: an operator who believes a subject is being deceptive can inform the police, who can then obtain a warrant and search the suspect’s home for more evidence. The practice is clearly spreading; a quick search finds current ads for polygraph operators to join police forces in Norfolk, Cumbria, Hertfordshire, South Yorkshire, and Greater Manchester.

In testimony Oswald presented to Parliament during the consultation on the Counter-Terrorism and Sentencing Bill 2019-21, she called polygraphs an “oppressive interrogation tool” and advocated against their adoption. “It sounds to me … [like] handing over the decision for public protection to a polygraph operator, because that person is telling you whether this individual is telling the truth. It’s all about interpretation.”

Oswald came to academia late, after years as a lawyer in private practice followed by in-house work for Apple, McAfee, and the U.K. government agency Ordnance Survey. As a result, she says, she tends to think first about the real-world applications of any new technology she encounters.

The work that first brought her to my attention was a talk she gave at the Royal Society in 2017 outlining her study of the Harm Assessment Risk Tool (HART), an AI system then being tested by Durham Constabulary to decide which offenders should be offered help and deferred prosecution. She isn’t sure what sparked her interest in polygraphs, but in a recent paper she compares the two. In both cases, she says, the purpose is “overcoming perceived human inadequacies and fixing societal issues” within criminal justice. The historical literature she found showed that polygraph proponents used legal journal articles to advocate for the technology’s acceptance in court. “It seemed similar to what we’re seeing now in the use of AI and machine learning, in that there’s a lot of overlap between academia and the industry.” However, “At least with AI and some machine learning tools there is a generally accepted statistical method behind it, but polygraph doesn’t have even that.”

In the paper, Oswald draws out this comparison. Both technologies claim to predict and categorize human behavior; promoters of both downplay the role of the human in the loop; both are effectively opaque in operation. Just last week as I write this, the U.K. government had to withdraw two algorithmic decision-making systems: one used to decide which visa applicants to fast-track, and one to predict the final exam grades (the ones that determine university admission) that secondary school students would have received if the exams hadn’t been canceled. In both cases, the algorithms turned out to be highly discriminatory and regressive.

In her view, “These things are becoming more attractive because people are more and more nervous about making these decisions about risky situations.” Apparently, too attractive for scientific evidence to dislodge.

Wendy M. Grossman

Wendy M. Grossman is an American freelance writer based in London. She is the founder of Britain's The Skeptic magazine, which she edited from 1987 to 1989 and again from 1998 to 2000. For the past thirty years she has covered computers, freedom, and privacy for publications such as the Guardian, Scientific American, and New Scientist. She is a CSI Fellow.