
AI Stinks! Why Science-Based Interviewing Must Come First

Updated: Sep 1

Introduction: The Cart Before the Horse

There’s an old saying: “garbage in, garbage out.” I was reminded of it recently while consulting for a company building interview software and its follow-up documentation. Their engineers had been asked to train an artificial intelligence (AI) system to detect lies using anxiety cues, micro-expressions, neurolinguistic programming, and body-language tricks drawn from pop psychology. I was surprised; that wasn’t the topic of our session, which was investigative integrity and interview documentation. Still, I warned them that academic research has shown these cues don’t indicate lying. Worse, building AI around them simply hardens bad assumptions into expensive tools we think we can rely on. So much for investigative integrity.

Do NOT train AI with garbage techniques. The results will hurt cases and careers of well-intentioned investigators who trusted AI.

Imagine this in practice: an 18-year-old facing his first interrogation, nervous and overwhelmed. What’s his so-called “baseline” for truth or deception? Now contrast him with a hardened Department of Corrections graduate, fluent in the tricks and tactics passed around in prison. Both subjects are being evaluated against the same pseudoscientific measurements. Add in counter-interrogation strategies and suspects deliberately playing to or against these cues, and the picture becomes even murkier. The result isn’t better AI; it’s a machine that multiplies investigator bias and introduces false case information into the system (spoiler alert: documentarians are warming up their cameras for the next miscarriage-of-justice special).


The Myth of Lie Detection by Cues

Humans are not good lie detectors. Humans trained in lie-detection methods are not good lie detectors either. Let’s be honest: we’ve tried to simplify something inherently complex. Lying is rarely a simple yes-or-no proposition. People deceive by leaving out important details, blending false and true information into seemingly verifiable accounts, or using recent memories to lend their stories credibility. Research indicates that the so-called cues (shifting eye gaze, crossed arms, departures from a “baseline,” behavioral-analysis-style interviews, and nervous movements) do not reliably distinguish truth from lies; in other words, they are not diagnostic. In fact, the most comprehensive meta-analysis to date, covering more than 200 studies and 24,000 judgments, found that people detect lies with only about 54% accuracy (Bond & DePaulo, 2006), barely better than a coin toss.
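To see how little daylight there is between 54% and chance, here is a purely illustrative simulation. The hit rate and judgment count come from the Bond & DePaulo (2006) figures cited above; the simulation itself is hypothetical and not part of the meta-analysis.

```python
import random

# Illustrative only: 0.54 and 24,000 come from the Bond & DePaulo (2006)
# meta-analysis cited in the text; the simulation itself is hypothetical.
random.seed(1)
N = 24_000  # roughly the number of judgments in the meta-analysis

def correct_calls(hit_rate: float) -> int:
    """Count correct truth/lie judgments for a judge with a given hit rate."""
    return sum(random.random() < hit_rate for _ in range(N))

human = correct_calls(0.54)  # the average human lie-detection accuracy
coin = correct_calls(0.50)   # blind guessing
print(f"human judge: {human / N:.1%} correct")
print(f"coin toss:   {coin / N:.1%} correct")
```

Over 24,000 judgments the two lines land within a couple of percentage points of each other, which is exactly the point: trained observers are operating barely above a coin flip.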


Yet body-language “gurus” and truth-and-lie “wizards,” along with their pseudoscientific products, teach investigators to focus on precisely these cues. This creates a dangerous feedback loop: when investigators expect a gesture to signal guilt or deception, they interpret it that way, and that interpretation biases their follow-up questions and the information those questions produce. With AI now being trained on the same shaky inputs, the danger grows. Algorithms don’t question their assumptions; they repeat them at scale. Ironically, this anchors investigators even further: confirmation bias takes hold, groupthink reinforces erroneous judgments, and investigators lose adaptability, the number one trait of a great investigator. Instead of following the evidence, they, now with the backing of AI, start forcing the evidence to follow them. These errors will be repeated agency after agency, at unprecedented scale, with the help of a poorly trained AI.


Gasoline and Fire—Legacy & Accusatory Interviewing

One bad pseudoscience decision fuels the next, and the risk isn’t just theoretical. After reinforcing our biases with charlatans spewing junk science, we employ the next technique, conveniently included in curricula across the US: accusatory interviewing (and, trained on these inputs, this is how AI would interrogate you too). Accusatory and confrontational interrogation tactics, common in the U.S., were nearly ubiquitous at one time. These methods are more likely to produce false case information and, worse, false confessions. Techniques like minimization (“it’s not that bad”), false evidence ploys, and maximization (hinting at worse punishments) manipulate perceptions of consequences and reduce the reliability of what suspects say.


In a systematic review, Meissner, Redlich, Bhatt, and Brandon (2012), with a follow-up review by Catlin et al. (2024), compared accusatory methods with information-gathering approaches. They found that information-gathering interviews increase the likelihood of true confessions and reduce false ones, while accusatory methods carry the opposite risk. In short, accusatory tactics not only fail but also create investigative landmines that your “wizard” AI will exacerbate if you walk into the room already believing someone is lying.


Accusatory tactics don’t just reduce the quality of information by producing false case narratives; they also reduce the quantity of information gathered. Once investigators have predetermined guilt, often on the shaky grounds of pseudoscience, the interview becomes about extracting a confession rather than eliciting facts. Training materials still in circulation as of 2023 show just how ingrained this mindset is. Officers were instructed to “move in and put more pressure on,” watch for “signs of weakening,” “block the denial then immediately re-accuse,” and even told flatly that “you can mislead suspects about the facts.” The stated goal of interrogation was not information but a confession, because, as one slide proclaimed, “no one would confess to something they didn’t do.” There was no allowance for juveniles, mentally challenged individuals, or other vulnerable populations (a group we return to in the next paragraph). The process ends with a push for a written confession to enable further “statement analysis.” When success is measured by compliance instead of information gathering, false narratives and miscarriages of justice become inevitable, especially given the reduced amount of investigative information available.


This "terrible-two" combination doesn't just stop with suspects. These one-size-fits-all interview packages conveniently work, according to the purveyors, with victims of crime as well. After all, they have training to sell. As vividly documented in Netflix specials like Victim / Suspect (2023) and American Nightmare (2024), as well as from the mouths of interview and interrogation "experts," we use pseudoscientific lie detection and accusatory methods to extract, or try to extract, "confessions" from victims who report sexual assaults, regardless of the case information. Most notoriously, sexual assault victims come forward and are met not with empathy, trauma-informed, or evidence-based practice, but with pseudoscientific lie detection and accusatory interviewing methods. Uninformed investigators use false evidence ploys, suggest case theories, and pressure individuals who may be traumatized, were intoxicated, or are struggling to recall details. In effect, these approaches can plant false memories or discredit truthful ones, all in the name of “clearing a case.” This is the opposite of the very premise of trauma-informed interviewing. These aforementioned instances often become trauma-creating interviews when the "terrible-two" are employed.


Whose idea was this? The plan is to train AI to walk the same path, a path from which a legitimate investigation, and any well-meaning detective, may never recover. If AI is designed to label interviewees, including victims, as liars based on unreliable science, and is used alongside accusatory interviewing techniques, it will document those false labels with machine precision. The only outcome will be a career-ruining, public trust–dropping, credibility-shattering Netflix special about your agency. Better hope the AI can also block the FOIA requests while posting your resume on job sites.


Chiefs, Sheriffs, and Detectives: The First Step is Quitting

The first step isn’t adopting shiny AI tools; it’s quitting the bad training. If your investigators are still being sent to courses that push pseudoscientific lie detection or confession-driven curricula, you’re setting them, and your agency, up for failure. Know what training your people are attending and demand evidence-based instruction backed by research, not folklore. Until agencies stop feeding investigators junk methods, all the AI in the world will just amplify bad inputs. Quitting the legacy and accusatory training is where professionalism starts.


Now, before you get upset: I come from the 1990s too. I was trained in these legacy and accusatory methods. At one time, they were considered best practice. But we know too much today to cling to them. We’ve upgraded everything else: patrol cars are safer, RMS systems are more robust, computers push information to our cars in seconds, tasers reduce injury to officers, body-worn cameras document in real time, and tech is in every LEO's pocket. Yet somehow, we’re content to keep interviews and interrogations stuck in the past. One chief bluntly told me, "That's how I did it, and that's how my people will do it." If you want to ignore the academic research, fine, but will you also ignore the field-validation studies in which law enforcement used science-based interviewing techniques on real cases and produced more information and, yes, even more confessions (still a terrible metric)? Russano et al. (2024) demonstrated in a practitioner-developed study that when investigators applied rapport-based, evidence-driven methods in the field, they saw measurable gains in case information and disclosure. At some point, refusing to upgrade isn’t tradition; it’s negligence.


Training AI & Interviewers: Science-Based Interviewing

Science-Based Interviewing (SBI) is built on a different information-gathering foundation: rapport, better question architecture, tactical & active listening, and structured planning. These skills consistently generate detail-rich and more reliable information. Research on rapport shows it improves cooperation and disclosure, even with resistant suspects (Abbe & Brandon, 2013; Alison et al., 2013). Field studies using the ORBIT framework found that rapport-based interviewing significantly increased the “yield” of useful intelligence, while even small amounts of maladaptive interrogator behavior cut yield dramatically (Alison et al., 2014). Training studies confirm this. When investigators were trained in evidence-based, rapport-centered methods, they increased their use of effective tactics, suspects perceived more trust and rapport, and information disclosure rose significantly (Brimbal et al., 2021). Open-ended questions and free narratives, in particular, consistently generate the most detailed and accurate accounts (Snook et al., 2012).


These are the very kinds of information-rich, detail-heavy statements that AI can actually use ethically and effectively, because they produce lots of case and crime data that can be corroborated, cross-checked with records management systems, compared with crime trends, and connected to other evidence streams (surveillance cameras, license plate readers, and badge scanners).


For chiefs, sheriffs, and policymakers, step two is clear: invest in evidence-based training that prepares the next generation of investigators for the information age of policing. Programs like ixi's science-based interviewing course or Project Aletheia’s Train-the-Trainer course in Science-Based Interviewing ensure that agencies build lasting internal capacity—where sound practices are taught by both researchers and practitioners and reinforced through research. The future of policing will be measured not by who can shout the loudest or detect more lies in the interview room, but by who can gather the most accurate, corroborated information. If we want AI and advanced systems to strengthen investigations, we must first invest in the people who will give those systems data worth analyzing.


Why SBI Matters for AI

Artificial intelligence is only as ethical and effective as its input. If we train AI systems on deception “cues” and the behavioral techniques of accusatory interviewing, the output will amplify bias and miscarriages of justice, not truth. False case information gets treated like fact, confirmation bias gets built into the algorithms, and the risk of wrongful conclusions grows.


On the other hand, SBI drops the garbage and the information-limiting techniques and aims to create interviews full of narrative and verifiable detail. These uncontaminated statements integrate seamlessly, via AI, with body-worn camera transcripts, RMS platforms, license plate readers, and digital forensics. The more detail in the record, the more valuable it becomes for AI to compare across cases, spot patterns, and highlight contradictions. Simply put: SBI creates data worth analyzing.
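The “data worth analyzing” point can be made concrete with a hypothetical sketch: a detail-rich narrative yields discrete, checkable claims that software can corroborate against record systems, whereas a bare confession yields almost nothing to check. Every claim, source name, and record below is invented for illustration; no real system or API is implied.

```python
from dataclasses import dataclass

# Hypothetical sketch: all claim text, source names, and records below are
# invented for illustration; no real agency system or API is implied.

@dataclass
class Claim:
    detail: str  # a verifiable detail from the interview narrative
    source: str  # the record system that could confirm or refute it

def corroborate(claims, records):
    """Split claims into those supported by records and those still open."""
    supported, unchecked = [], []
    for claim in claims:
        hits = records.get(claim.source, set())
        (supported if claim.detail in hits else unchecked).append(claim)
    return supported, unchecked

claims = [
    Claim("white pickup on Elm St around 21:10", "license_plate_reader"),
    Claim("badge swipe at the loading dock at 20:55", "badge_scanner"),
]
records = {"license_plate_reader": {"white pickup on Elm St around 21:10"}}

supported, unchecked = corroborate(claims, records)
print(f"{len(supported)} corroborated, {len(unchecked)} still open")
```

Notice what drives the value here: the number of entries in `claims`. A pressured, minimal confession produces an empty list and nothing to corroborate; a free narrative produces many entries, each one a hook into an evidence stream.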

Don't believe the wizards!
Get the lie and truth "wizards" out of the interview room and out of interview & interrogation training rooms. The investigators are talking!

Conclusion: Get the Horse in Front of the Cart

When developers and agencies adopt AI tools without first improving the ethics and quality of their interviews (the input), it can lead to unintended consequences. Garbage interviewing creates garbage data. Garbage data fed into AI becomes garbage conclusions, just faster and flashier.


If we want AI to strengthen our agencies, companies, and investigations rather than weaken them, the path is clear: invest in training people first. Build interviewer and employee skills in rapport, active listening, better questioning, and structured preparation. Then, and only then, can AI be a force multiplier instead of a personal and professional risk amplifier.


Artificial intelligence should not be the new lie detection wizard. It should be a tool that extends the value of solid, science-based interviewing. Get the horse in front of the cart. Train investigators in Science-Based Interviewing, and give AI something worth analyzing.



Science-Based Interviewing References

Abbe, A., & Brandon, S. E. (2013). The role of rapport in investigative interviewing: A review. Journal of Investigative Psychology and Offender Profiling, 10(3), 237–249. https://doi.org/10.1002/jip.1386


Alison, L., Alison, E., Noone, G., Elntib, S., & Christiansen, P. (2013). Why tough tactics fail and rapport gets results: Observing rapport-based interpersonal techniques (ORBIT) to generate useful information from terrorists. Psychology, Public Policy, and Law, 19(4), 411–431. https://doi.org/10.1037/a0034564


Alison, L., Alison, E., Noone, G., Elntib, S., Waring, S., & Christiansen, P. (2014). The efficacy of rapport-based techniques for minimizing counter-interrogation tactics among a field sample of terrorists. Psychology, Public Policy, and Law, 20(4), 421–430. https://doi.org/10.1037/law0000021


Bond, C. F., Jr., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214–234. https://doi.org/10.1207/s15327957pspr1003_2


Brimbal, L., Meissner, C. A., Kleinman, S. M., Phillips, E. L., Atkinson, D. J., Dianiska, R. E., Rothweiler, J., Oleszkiewicz, S., & Jones, M. S. (2021). Evaluating the benefits of a rapport-based approach to investigative interviews: A training study with law enforcement investigators. Law and Human Behavior, 45(1), 55–67. https://doi.org/10.1037/lhb0000437


Catlin, M., Wilson, D., Redlich, A. D., Bettens, T., Meissner, C., Bhatt, S., & Brandon, S. (2024). Interview and interrogation methods and their effects on true and false confessions: A systematic review update and extension. Campbell Systematic Reviews, 20(4), e1441. https://doi.org/10.1002/cl2.1441


Meissner, C. A., Redlich, A. D., Bhatt, S., & Brandon, S. (2012). Interview and interrogation methods and their effects on true and false confessions. Campbell Systematic Reviews, 8(1), 1–53. https://doi.org/10.4073/csr.2012.13


Russano, M. B., Meissner, C. A., Atkinson, D. J., Brandon, S. E., Wells, S., Kleinman, S. M., Ray, D. G., & Jones, M. S. (2024). Evaluating the effectiveness of a 5-day training on science-based methods of interrogation with U.S. federal, state, and local law enforcement investigators. Psychology, Public Policy, and Law, 30(2), 105–120. https://doi.org/10.1037/law0000422


Snook, B., Luther, K., Quinlan, H., & Milne, R. (2012). Let ’em talk! A field study of police questioning practices of suspects and accused persons. Criminal Justice and Behavior, 39(10), 1328–1339. https://doi.org/10.1177/0093854812449216


