A remarkable deception detection study was just published, and given this moment in human history, when Artificial Intelligence (AI) and Machine Learning (ML) algorithms are being implemented within organizations, its findings deserve careful study by due diligence professionals and their supervisors. The research, entitled “Lie detection algorithms disrupt the social dynamics of accusation behavior,”[i] provides a road map for organizations looking to leverage insights from deception detection algorithms.
Study Overview
Creation of a Deception Detection Algo
Researchers first created a deception detection algorithm using machine learning. They elicited 1,536 true and false statements from 786 people and trained the algorithm using a standard 80:20 train/test split. Statement writers who later fooled a deception detection judge were awarded £2.00.
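For readers unfamiliar with the 80:20 convention, the sketch below illustrates what such a train/test split can look like in practice. This is a minimal, hypothetical pipeline, assuming scikit-learn with a TF-IDF/logistic-regression classifier; the file name, column names, and model choice are illustrative assumptions, not the study’s actual method.

```python
# Minimal sketch of an 80:20 train/test split for a text-based
# deception classifier. Hypothetical pipeline, not the study's own.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical file holding the 1,536 labeled statements.
df = pd.read_csv("statements.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["statement"], df["is_lie"],
    test_size=0.20,          # the standard 80:20 split
    stratify=df["is_lie"],   # preserve the 50/50 lie/truth balance
    random_state=42,
)

vectorizer = TfidfVectorizer()                      # text -> numeric features
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))   # judge the held-out 20%
print(f"Held-out accuracy: {accuracy_score(y_test, preds):.2%}")
```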
Overall, the accuracy of the AI-developed algorithm was comparable to other, similar efforts described in the deception science literature: 66.86% in detecting deception. Also similar to other algorithms of this type, its accuracy at detecting truthful statements was just 52.94%. In other words, its Type I and Type II error rates are very different.
By contrast, the Deception And Truth Analysis algorithm has been double-blind, scientifically tested with an 88.4% accuracy, a Type I error rate of 11.3%, and a Type II error rate of 14.3%.
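To make the Type I/Type II distinction concrete, here is a small worked example. Treating “deceptive” as the positive class, a Type I error is a false accusation (a truthful statement flagged as a lie) and a Type II error is a missed lie. The counts below are hypothetical, chosen over a balanced 1,000-statement set so that the rates approximate the study algorithm’s reported figures.

```python
# Worked example: accuracy vs. Type I / Type II error rates.
# "Deceptive" is the positive class; the counts are hypothetical.
tp = 334  # lies correctly flagged
fn = 166  # lies missed            -> Type II errors
tn = 265  # truths correctly passed
fp = 235  # truths falsely flagged -> Type I errors

accuracy     = (tp + tn) / (tp + tn + fp + fn)
type_i_rate  = fp / (fp + tn)  # error rate on truthful statements
type_ii_rate = fn / (fn + tp)  # error rate on deceptive statements

print(f"Accuracy:      {accuracy:.2%}")      # 59.90%
print(f"Type I error:  {type_i_rate:.2%}")   # 47.00% (truth accuracy ~53%)
print(f"Type II error: {type_ii_rate:.2%}")  # 33.20% (lie accuracy ~66.8%)
```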
Measuring People’s Accuracy, Payoffs, and Accusation Rates
After the deception detection algorithm was created, a separate group of 2,040 individuals was recruited to judge the statements created by the first group. Deception detection judges were told that 50% of the statements they assessed were deceptive and 50% were truthful. The 2,040 people were divided into four different test groups:
No Choice
- 1. Baseline – These people had no access to the deception detection algorithm’s assessments and did not know of the algorithm’s existence until after they had completed their own assessments.
- 2. Forced – These people had no choice and were automatically given the deception detection algorithm’s assessments.
The next two groups were informed about the existence of the deception detection algorithm and given the option to request its assessment of a statement for a cost of £0.05. However, some of the people making the request were randomly blocked from seeing the result and were told that the assessment was unavailable; these people were refunded their £0.05.
Choice
- 3. Blocked – These people had the choice to access the algorithm but were randomly told that they could not see its result. This was done so that the researchers could statistically determine whether people who had faith in the AI algorithm (as represented by their requesting its insights) were more accurate in their personal deception assessments.
- 4. Choice – These people chose to access the algorithm and were shown its result.
Participants in all four groups were incentivized to make accurate assessments, receiving £0.50 for each correct judgment.
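Because every correct judgment pays £0.50, expected earnings scale linearly with accuracy. The quick sketch below works through that arithmetic, assuming a hypothetical 20 statements per judge (the per-judge count is our assumption, not a figure from the paper).

```python
# Expected earnings at £0.50 per correct judgment.
# The number of statements per judge is a hypothetical assumption.
REWARD = 0.50  # pounds per correct judgment

def expected_payoff(accuracy: float, n_statements: int = 20) -> float:
    """Expected earnings for a judge with the given accuracy."""
    return accuracy * n_statements * REWARD

for label, acc in [("Forced access", 0.5647), ("Choice access", 0.5078)]:
    print(f"{label}: £{expected_payoff(acc):.2f} expected over 20 statements")
# Forced access: £5.65;  Choice access: £5.08
```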
Key Findings
Among the paper’s key findings are:
- People are poor at deception detection generally (54.45%[ii]) and even worse when trying to detect it in the written word (46.47%). This finding jibes with similar research conducted several years ago, which found people are just 50.0% accurate at detecting deception in the written word or in transcripts of the spoken word.[iii] It also jibes with the results of hundreds of scientific studies that have found people to be roughly 54% accurate in deception detection judgments.
- People who are given the results of a deception detection algorithm, rather than choosing whether to see them, show much higher and statistically significant outperformance: 56.47% (p = 0.003) vs. 50.78% (not significantly different from chance guessing).
- The payoff from being forced to receive the results of the algorithm is 21.52% higher than from not having access to the algorithm at all. When people merely have the option to see the algorithm’s assessment, their payoff is just 2.53% higher than not having it.
- Furthermore, the people who made the most money were those who had the highest trust in the algorithm’s results, making 36.09% more money than those who were neutral about whether to trust the algorithm.
- People do not like to accuse others of deceptive behavior. Without access to a deception detection algorithm, the rate of accusation is just 19.22%. This result is shocking given that participants in the study were told that 50% of the statements they would be assessing were deceptive. Access to the algorithm’s results (whether by choice or not) increases the accusation rate to 31.08%, and those with the highest belief in the power of the algorithm made the most accusations, at 40.54%. Note: this is still well below the 50% of statements that participants should have expected to be deceptive (see the arithmetic sketch after this list, which shows how under-accusation alone caps attainable accuracy).
- When people experience high levels of guilt about accusing someone else of being deceptive, they are much less likely to make an accusation.
- Women are generally better at deception detection in the written word than men, 52.49% vs. 49.32%, though this result is not statistically significant.
- By education, women with psychology degrees are the best at deception detection, including those who accepted the results of the algorithm, at 60.0%, while women with engineering degrees are the worst at just 39.29%. Among men, those with a social science degree other than psychology score best at 55.91%, while those with a political science degree score worst at just 41.18%.
- People who are least familiar with AI and ML are actually better at deception detection than those with greater familiarity, 61.04% vs. 49.76%. This speaks to people’s general skepticism of AI and the harm of rejecting its insights.
- People are more willing to request algorithmic predictions when they believe (1) it outperforms an average human (+28.03%), (2) it outperforms themselves (+19.45%), and (3) the probability of false accusations is low (-15.69%).
- Last, people are more willing to purchase algorithmic predictions when they believe (1) it outperforms an average human (+£0.0645 or +12.90%), (2) it outperforms themselves (+£0.0471 or +9.42%), and (3) the probability of false accusations is low (+£0.0325 or +6.50%).
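One arithmetic consequence of the accusation-rate finding above is worth spelling out: when a judge accuses less often than the 50% base rate of lies, accuracy is capped even if every accusation happens to be correct. The sketch below works through that ceiling; it is our own illustration, not a calculation from the paper.

```python
# Upper bound on accuracy implied by an accusation rate below the lie rate.
# Our own arithmetic, illustrating the paper's accusation-rate finding.
def accuracy_ceiling(accusation_rate: float, lie_rate: float = 0.50) -> float:
    """Best attainable accuracy when accusations are placed as well as possible."""
    caught_lies = min(accusation_rate, lie_rate)            # lies correctly accused
    false_accusations = max(accusation_rate - lie_rate, 0.0)
    correct_truths = (1.0 - lie_rate) - false_accusations   # truths correctly passed
    return caught_lies + correct_truths

for rate in (0.1922, 0.3108, 0.4054):  # baseline, with-algorithm, high-trust
    print(f"Accusation rate {rate:.2%} -> ceiling {accuracy_ceiling(rate):.2%}")
# 19.22% -> 69.22%;  31.08% -> 81.08%;  40.54% -> 90.54%
```

At the baseline accusation rate of 19.22%, even a perfect accuser tops out at 69.22% accuracy, which suggests that raising the accusation rate matters as much as raising per-judgment accuracy.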
Conclusion
If you are a due diligence professional charged with assessing the trustworthiness of statements made by people, then you should trust the assistance that AI/ML algorithms can provide. Not only does your accuracy improve, but so does the amount of money you can make. Additionally, people in general are both skeptical of AI/ML solutions and reluctant to accuse others of deception. If you are a supervisor of due diligence professionals (in investment management, insurance underwriting, the law, human resources, private investigations, and the like) you should mandate that the results of deception detection algorithms be provided to your staff to improve their capabilities and to save you money.
[i] Von Schenk, Alicia; Victor Klockmann; Jean-François Bonnefon; Iyad Rahwan; & Nils Köbis. “Lie detection algorithms disrupt the social dynamics of accusation behavior.” iScience (2024)
[ii] Bond, Charles F., Jr. & Bella DePaulo. “Accuracy of Deception Judgments.” Personality and Social Psychology Review, Volume 10, Issue 3 (2006): pp. 214-234
[iii] Kleinberg, Bennett & Bruno Verschuere. “How humans impair automated deception detection performance.” Acta Psychologica. 12 January 2021