Key Scientific Paper Redux – Accuracy of Deception Judgments

Authored by Jason Apollo Voss

Jason Apollo Voss is a: conscious capitalist, believer in human potential, pursuer of wisdom & knowledge, and your advocate. He shares his wisdom, intelligence, knowledge, and humility through books, whitepapers, scientific research, articles, workshops, and executive coaching.

30/08/2022

At Deception And Truth Analysis (D.A.T.A.) we are, first and foremost, grounded in the findings of deception science. Globally, there are just a handful of researchers working in this space. Here we are talking about endowed professorships, or researchers whose sole emphasis is on these subjects. Deception science touches on multiple fields, including social psychology, criminal justice, natural language processing, and, more generally, computer science.

At D.A.T.A. we also realize that few professionals truly enjoy reading scientific papers. Thus, our commitment is to provide summaries of key research in deception science to better educate our Clients, enthusiasts, and those interested in lying behavior. We call this series: Key Scientific Paper Redux.

First up in this series is the preeminent classic work of deception science, “Accuracy of Deception Judgments” by Charles F. Bond, Jr. and Bella M. DePaulo (2006). This paper is a meta-analysis that measures people’s accuracy in detecting deception, and it is among the most referenced papers in all of deception science.

What is a Meta-Analysis?

A meta-analysis is essentially a study of studies: it uses statistical techniques to combine the results of many separate studies into one overarching analysis. Why is this important? Individual studies can be dismissed as anomalies, or because the effect sizes they report are small. Combining many studies into one larger analysis supports broader, more robust conclusions. At D.A.T.A., meta-analyses allow us to confidently assert things such as: using body language cues to uncover deception is unreliable.
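The combining step can be sketched in code. Below is a minimal illustration of fixed-effect, inverse-variance weighting, one standard way a meta-analysis pools study results; the effect sizes and variances are invented for the example and are not from Bond and DePaulo.

```python
# Minimal sketch of a fixed-effect meta-analysis using inverse-variance
# weighting. All study values below are hypothetical, for illustration only.

def combine_studies(effects, variances):
    """Pool per-study effect sizes into one weighted estimate."""
    weights = [1.0 / v for v in variances]  # more precise studies count more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)    # variance of the pooled estimate
    return pooled, pooled_variance

# Three hypothetical studies, each reporting accuracy above 50% chance
# (in percentage points), with differing precision:
effects = [4.5, 3.0, 5.5]
variances = [1.0, 2.0, 0.5]

pooled, pooled_var = combine_studies(effects, variances)
```

The intuition: a study with a small variance (a large, well-run study) gets a large weight, so it pulls the pooled estimate toward its result more strongly than a noisy study does.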

Accuracy of Deception Judgments: Study Details

Bond and DePaulo synthesized research results from 206 deception judgment studies, covering the deception detection abilities of 24,483 people. The first of these studies was published in 1941 and the latest was published in 2005, the year of their meta-analysis. In the studies people attempted to discriminate between deception and truth with no special aids or training.

Accuracy of Deception Judgments: Major Findings

  1. People’s overall accuracy in deception-truth judgments = 54.45%.
  2. The highest reported accuracy is 73% and the lowest is 31%.
  3. Using statistical techniques, the true standard deviation across studies = 4.52%.
  4. Proper classification of lies as deceptive = 47%.
  5. Proper classification of truths as non-deceptive = 61%.
  6. Cohen’s d (a measure of the size of the effect, i.e. people’s deception detection abilities) = 0.40, a small-to-medium effect by the conventional interpretation of this measure.
  7. Rates of deception detection vary little from study to study.
  8. People judge other people’s deceptions more harshly than their own.
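For readers unfamiliar with Cohen’s d: it is the difference between two group means divided by their pooled standard deviation. A minimal sketch follows; the means, standard deviations, and sample sizes are all invented for illustration and are not figures from the paper.

```python
# Illustrative sketch of Cohen's d: a standardized difference between two
# group means. All inputs below are hypothetical.
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Two hypothetical groups whose means differ by 14 points, with SD = 35:
d = cohens_d(61.0, 47.0, 35.0, 35.0, 100, 100)
```

With these made-up inputs the standardized difference works out to 0.40, which shows why a gap that looks modest in raw percentage points can still register as a meaningful effect size.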

Accuracy of Deception Judgments: Sub-Findings

Deception Medium: In the real world, detecting deception takes place through different media. For three different contexts, here is the “correct lie-truth classification” accuracy:

  • Video = 50.35%
  • Audio = 53.75%
  • Audiovisual = 53.98%

These results show that people relying solely on visual cues are scarcely better than chance at detecting deception. Discriminating truth from deception is slightly easier when relying only on the words people express. Accuracy is highest with a combination of audio and visual cues, yet even then overall accuracy remains barely better than chance.

Motivation: Deception studies have been criticized frequently because those taking on the role of deceiver often lack motivation or stakes in convincing the receiver of their stories. Here are the results by stake, according to Bond and DePaulo:

  • No motivation = 53.43% 
  • Motivation = 53.27%

As you can see there is almost no difference between un-motivated and motivated deceivers. It is true that motivated deceivers are able to reduce the discriminatory power of receivers, but just barely.

Preparation: Perhaps it is easier to deceive if you have time to prepare your “story.” Here are the results:

  • No preparation = 53.13%
  • Preparation = 53.75%

Assessing the above results shows, again, that there is very little difference in correct truth-deception accuracy even when accounting for a deceiver’s level of preparedness. If anything, deception detection is slightly easier when the deceiver has prepared.

Baseline Exposure: Does it make a difference to deception detection accuracy if the receiver knows the deceiver? According to the meta-analysis:

  • No exposure (i.e. deceiver and receiver are strangers) = 53.06%
  • Exposure (i.e. deceiver and receiver know one another) = 54.55%

If you have familiarity with another person it is slightly easier to detect their deceptiveness. But again, the improvement in the grand scheme of things is negligible.

Interaction: What about situations in which the receiver is actually interacting with the deceiver rather than simply observing the behavior of the deceiver? How does this affect accuracy?

  • No interaction = 52.60%
  • Interaction with receiver = 52.75% 
  • Interaction with a third party = 53.97%

Receiver Expertise: Those who hold the mistaken belief that audiovisual cues (e.g. body language) are a valid way of judging deception have responded to the many studies disproving this belief by retorting that trained experts do better. In other words, the studies are flawed because they measure amateurs’ abilities at deception detection, not experts’. Here are the results when deception detection expertise is taken into account:

  • Non-expert = 53.29%
  • Expert = 53.91%

There is not a statistically significant difference between these results. If we treated these sample means as population means, they would imply that experts relying on audiovisual cues make, on average, less than one additional correct assessment per 100 compared with non-experts. [Note: another article in this series drills down into experts’ abilities at a more refined level.]

Publication Status: If you follow the state of science you may know of the “file drawer” problem: results that run contrary to the popular understanding of a phenomenon are sometimes stuffed away in a file drawer rather than submitted for publication. There are statistical techniques that allow for a more objective assessment of this type of problem. Additionally, Bond and DePaulo went out of their way to include unpublished studies of which they were aware. What they found is that there was no statistically significant difference between the reported accuracies of published and unpublished studies.

  • Published studies = 53.19%
  • Unpublished studies = 53.75%
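One classic statistical check on the file-drawer problem is Rosenthal’s fail-safe N, which estimates how many unseen null-result studies would have to be sitting in file drawers to drag a combined finding down to non-significance. A minimal sketch, with hypothetical z-scores that are not values from the paper:

```python
# Sketch of Rosenthal's fail-safe N for the "file drawer" problem.
# It answers: how many hidden null studies (z = 0) would be needed to make
# the Stouffer-combined z-score drop below the significance threshold?
# The z-scores below are hypothetical.

def fail_safe_n(z_scores, alpha_z=1.645):
    """Number of unseen null studies needed to nullify the combined result."""
    z_sum = sum(z_scores)
    k = len(z_scores)
    # Solve z_sum / sqrt(k + N) = alpha_z for N:
    return max(0.0, (z_sum / alpha_z) ** 2 - k)

n = fail_safe_n([2.1, 1.8, 2.5, 1.9])
```

A large fail-safe N relative to the number of observed studies suggests the file-drawer problem is unlikely to overturn the finding.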

Quotes of Note:

  1. “People have a prescriptive stereotype of the liar – stricken with shame, wracked by the threat of exposure, liars leak signs of their inner torment. They fidget, avoid eye contact, and can scarcely bring themselves to speak – a worldwide stereotype holds…People hold a stereotype of the liar – as tormented, anxious, and conscience stricken. Perceivers draw on this stereotype when considering a target’s veracity. Targets who most resemble the stereotype are most likely to be regarded as liars; those who least resemble it are most likely to be believed.”
  2. “From our double-standard framework, we interpret these results as follows: that the usual stereotype of a liar is largely visual, hence is most strongly evoked by visual images of people speaking.”
