Virtual Case Notes: AI ‘Polygraph’ Knows When You Lie Online

FSU researcher Shuyuan Ho is working on an AI system that can use language cues associated with deception to predict whether someone is lying online. (Photo: Courtesy of Florida State University)

Editor’s Note: Welcome to my weekly column, Virtual Case Notes, in which I interview experts on the latest developments in digital forensics and cybersecurity. Each week I take a look at new cases and research from the evolving world of cybercrime. For previous editions, please type “Virtual Case Notes” into the search bar at the top of the site.

If you’re trying to figure out whether someone is lying, you might focus on their body language, eye movement, facial expressions or tone of voice. If you’re a polygraph examiner, you’ll monitor their heart rate, blood pressure, breathing and other physical signs that can indicate a person knows they’re not telling the truth.

But when face-to-face interaction is stripped away, and all you have to go on is words on a screen, determining the truthfulness and intentions of someone just from what they type is a much more daunting task. The use of social media and instant messaging has made lying and scamming easier than ever, as plenty of phishers, fraudsters and “catfishers” have already figured out.

Now, a Florida State University researcher is figuring out how to use online liars’ words against them, with an artificial intelligence approach that learns the subtle differences in language between honest and deceptive chatters.

“Do liars speak more words of self-reference, or do the truth-tellers speak more self-reference? Or, what kind of words do they use?” said Shuyuan Ho, an assistant professor at FSU’s School of Information, in an interview with Forensic Magazine. “We used a machine learning approach to analyze (online chat messages) … The machine is able to parse out and correlate the cues.”

Ho and her colleagues, including fellow FSU professor Xiuwen Liu and Stanford University communications professor Jeffrey T. Hancock, set up an experiment where 40 participants played an online game called “Real or Spiel,” which involved players asking and answering questions in pairs over an instant chat messenger.

Participants were assigned either a “speaker” or “detector” role, and each speaker was assigned either a “saint” (truth-teller) or “sinner” (liar) role. The detectors would ask questions about a pre-assigned topic, the speaker would answer either truthfully or untruthfully depending on their role, and the detector would then guess whether their opponent was a saint or a sinner. A total of 80 game sessions were conducted during the study.

The experiment revealed that human participants correctly guessed their opponent’s role only 52.4 percent of the time. In other words, the average person has only slightly better than coin-toss odds of telling whether they’re being lied to while chatting online.

“We tend to think that we can do very well (detecting lies) online, but the result is not so based on all these experiments,” explained Ho.

This experiment was conducted in 2014 and 2015, and the results on participants’ deception strategies and ability to detect deception were first reported in the Journal of Management Information Systems in 2016. But this past February, Ho and Hancock published new research in the journal Computers in Human Behavior, showing the results of testing their “online polygraph” on the dataset from the original experiment.

Ho said the results were “amazing.” After first using linear regression analysis to determine which language cues were associated with deception, the researchers used logistic regression analysis to predict whether each speaker from the game was a saint or a sinner. The model identified liars 82.5 percent of the time and classified speakers’ roles with 74 percent accuracy overall, far outperforming the human participants.
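
The article doesn’t include the researchers’ code, but the two-stage approach described above can be sketched with off-the-shelf tools. Below is a minimal illustration in Python using scikit-learn; the feature columns, the synthetic data and the effect sizes are invented stand-ins for the study’s real chat transcripts, not the authors’ actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-speaker features, based on the cues named in the article:
# [insight, certainty, tentative, causation, self_reference, response_time]
n_speakers = 80
labels = rng.integers(0, 2, n_speakers)   # 1 = "sinner" (liar), 0 = "saint"
base = rng.normal(size=(n_speakers, 6))
# Bake the reported pattern into the synthetic data: liars use more
# insight/certainty words, fewer tentative/causation/self-reference words,
# and respond faster (lower response_time).
shift = np.array([0.8, 0.8, -0.8, -0.8, -0.8, -0.8])
X = base + labels[:, None] * shift

# Logistic regression, as in the study's second stage; cross-validated
# accuracy here is purely illustrative, not the paper's 74 percent figure.
clf = LogisticRegression()
print("mean CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

In the actual study, the first stage (linear regression over candidate cues) would determine which columns enter the classifier; here the columns are fixed for brevity.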

Among the cues associated with deception was more frequent use of insight words such as “think” and “know,” as well as words of certainty like “always” and “never.” Truth-tellers were more likely to use words expressing less certainty, like “perhaps” and “guess,” and expressed causation more often, with words like “because” and “since.”

Self-reference words like “I” and “me” were also factored in; the earlier paper in JMIS showed that liars tended to make fewer references to themselves than truth-tellers. Response time was one of the few non-word-related cues considered: deceptive participants were found to respond more quickly than truth-tellers.
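
To make the cue counting concrete, here is a toy extractor for the word categories the article names. The word lists are abbreviated stand-ins, not the validated dictionaries that deception researchers typically rely on.

```python
# Toy cue counter for the word categories named in the article. The word
# lists are illustrative stand-ins, not the researchers' actual dictionaries.
CUE_WORDS = {
    "insight": {"think", "know", "believe"},
    "certainty": {"always", "never", "definitely"},
    "tentative": {"perhaps", "guess", "maybe"},
    "causation": {"because", "since", "therefore"},
    "self_reference": {"i", "me", "my", "mine"},
}

def cue_rates(message):
    """Return each cue category's rate per 100 words of the message."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    total = max(len(words), 1)
    return {cat: 100.0 * sum(w in vocab for w in words) / total
            for cat, vocab in CUE_WORDS.items()}

print(cue_rates("I always know the answer, because I never guess."))
```

Rates like these, plus a timing feature, would form one row per speaker in the classifier’s input.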

Ho noted that the preliminary results of this experiment confirm the ability of statistical models and AI to pick up the subtle signs of deception better than their human counterparts, but added that the research still has a long way to go. She also acknowledged the debate around the accuracy of traditional polygraphs, and stressed that even if the online polygraph became more accurate, it would still be used more as a “reference point” than a quick solve.

“Any of the prediction technology systems, they always have false alarms (false positives/false negatives). It’s always going to be (that way)—that’s just the nature of prediction itself,” Ho said. “If it’s adapted by (an) organization, then it can help them make some decisions. It will be viewed as a recommendation system.”

Although traditional polygraph tests are typically not admissible in court, they are still used in investigations by law enforcement and intelligence agencies, and Ho sees potential applications in these areas once the technology is further developed. She also mentioned that another form of artificial-intelligence-aided lie detection has been tested for use by border security, using facial expression and body language recognition to predict deception or truthfulness. According to CNBC, these systems have been shown to be up to 80 percent accurate.

A more general potential application of the technology would be personal use by private web users—perhaps those who are tired of falling for online scams, or who are wary of being deceived in the online dating scene.

To bring the technology’s promise to fruition, Ho seeks to expand her research to include a much greater volume of data, which could help build more accurate models and reveal more clues about the nature of online deception.

“My vision is to have this (game) completely available online, so then people can play with that system,” Ho said. “We can do a big data collection … If we make it a fun experience for people, I think it has a lot of potential.”

https://www.forensicmag.com/news/2019/03/virtual-case-notes-ai-polygraph-knows-when-you-lie-online
