Detecting lies in the blink of an eye – the concept and problems of artificially intelligent border guards

 

January 25th, 2021
Written by Julia Teufel, second year Psychology undergraduate and research placement student with Professor Memon

 

AI border guards are no longer a futuristic idea: currently being piloted at select EU and US borders, they are here to stay. These “machines” measure physical cues, such as facial expressions and heart rate, allegedly to check the validity of individuals’ accounts. AI is increasingly used to control migration without human judgement (Korkmaz, 2020), but scientific evidence that AI programs can reliably make such life-changing decisions is lacking.

Take the evidence on microexpressions: fleeting facial expressions that occur involuntarily and are treated as a measure of deception, drawing on Paul Ekman’s (2003) work for their theoretical basis. The assumption is that they correspond directly to emotions and that ‘blink of an eye’ changes in expression can signal emotions fluctuating from, say, happiness to sadness. From this follows the misconception that an individual’s true emotional state can be detected even when it is concealed.

The microexpressions theory rests on several problematic assumptions. Firstly, that deception produces negative emotions like shame or guilt, and that these emotions correspond with particular microexpressions. Secondly, that microexpressions are uncontrollable and occur frequently enough to be detectable by humans or AI. Thirdly, that deception can be detected simply by interpreting an individual’s expressions, without the need for an interview or court case. Most concerning is the assumption that everyone feels the same emotions when lying, resulting in the same universal facial expressions (Burgoon, 2018). In reality, some people may feel guilt, while others experience pride, glee or even fear about successful deception.

But how frequently do these microexpressions occur? Porter and ten Brinke (2008) analysed over 600 expressions produced in a laboratory study in which participants were shown emotional pictures. Among these, they detected only four partial microexpressions that corresponded with Ekman’s theory. This does not amount to a universal tool for detecting deception.

Despite the lack of evidence for the validity of microexpression theory, several AI programs are being developed with the intention of deploying them as border guards to identify potentially suspicious individuals. Examples of such programs are AVATAR and iBorderCtrl.

AVATAR stands for “Automated Virtual Agent for Truth Assessments in Real-time”. This technology, developed by the US Department of Homeland Security, has been tested on volunteers crossing the US-Mexico border. It supposedly detects deception better than human border guards and provides more “customer convenience” (Elkins et al., 2014).

AVATAR is a booth-like room with a screen showing a male virtual agent. This “border officer” conducts an interview, enquiring about a person’s identity and reasons for travel. Simultaneously, several cues including microexpressions, heart rate and blood pressure are measured to “check for truthfulness”. If an individual seems overly aroused, for instance having an accelerated heart rate, they are classed as suspicious and the AI alerts a human officer for further assessment.
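AVATAR’s actual scoring model has not been published, so the snippet below is only a hypothetical sketch (in Python, with invented field names and thresholds) of the kind of threshold-based flagging described above. It also shows why a nervous but truthful traveller could easily trip such a rule.

```python
# Hypothetical sketch only: AVATAR's real decision logic is not public.
# All field names and thresholds below are invented for illustration.
from dataclasses import dataclass

@dataclass
class TravellerReading:
    heart_rate_bpm: float           # measured during the virtual interview
    blood_pressure_systolic: float
    microexpression_score: float    # assumed 0-1 "deception likelihood" from facial analysis

def flag_for_human_review(reading: TravellerReading,
                          resting_heart_rate: float = 80.0,
                          hr_margin: float = 20.0,
                          bp_threshold: float = 140.0,
                          expression_threshold: float = 0.7) -> bool:
    """Class a traveller as 'suspicious' when arousal or expression cues exceed assumed thresholds."""
    elevated_arousal = (reading.heart_rate_bpm > resting_heart_rate + hr_margin
                        or reading.blood_pressure_systolic > bp_threshold)
    suspicious_expression = reading.microexpression_score > expression_threshold
    return elevated_arousal or suspicious_expression

# A traveller who is simply anxious about the interview can cross these thresholds:
nervous_but_truthful = TravellerReading(heart_rate_bpm=105,
                                        blood_pressure_systolic=150,
                                        microexpression_score=0.2)
print(flag_for_human_review(nervous_but_truthful))  # True
```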

By measuring several physical cues, AVATAR acts like an advanced polygraph or “lie detector”. Although polygraphs are considered unreliable, the developers of AVATAR judged that the AI is effective because it combines polygraph measures with more recently developed assessments, such as the analysis of microexpressions (Elkins et al., 2014). A major problem with polygraphs is the risk of false positives: individuals stressed by the questioning may exhibit behaviour that is incorrectly interpreted as lying (Patrick & Iacono, 1991). Being questioned by a machine is highly unlikely to be more relaxing than being questioned by a human!
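A rough back-of-the-envelope illustration helps show why false positives matter at scale. The numbers below are assumptions chosen purely for the example (they are not figures reported for AVATAR): when genuine deception is rare, even a modest false positive rate means the overwhelming majority of flagged travellers are truthful.

```python
# Back-of-the-envelope illustration of the false-positive problem.
# All numbers are assumptions for the sake of the example, not AVATAR figures.
travellers_per_day = 10_000
deception_prevalence = 0.01    # assume 1% of travellers are actually deceptive
true_positive_rate = 0.80      # assumed chance a deceptive traveller is flagged
false_positive_rate = 0.20     # assumed chance a truthful traveller is flagged

deceptive = travellers_per_day * deception_prevalence       # 100 travellers
truthful = travellers_per_day - deceptive                    # 9,900 travellers

flagged_deceptive = deceptive * true_positive_rate           # 80 correct flags
flagged_truthful = truthful * false_positive_rate            # 1,980 false alarms

share_wrong = flagged_truthful / (flagged_truthful + flagged_deceptive)
print(f"Flagged travellers who are actually truthful: {share_wrong:.0%}")  # ~96%
```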

A similar AI called iBorderCtrl, developed under the EU’s Horizon 2020 programme, was tested in Latvia, Hungary and Greece between 2016 and 2019 (European Commission, 2020). The results of these tests are yet to be released (Berti, 2020). iBorderCtrl works differently to AVATAR in that the assessment by the virtual agent happens at home, prior to the journey. Here, the AI analyses a person’s “biomarkers of deceit” during a video interview. These are facial cues, such as microexpressions, that supposedly inform about deception. Based on these biomarkers, iBorderCtrl decides whether the user appears truthful.

The AI behind iBorderCtrl is built on an experiment conducted at Manchester Metropolitan University (O’Shea et al., 2018). Here, 30 participants simulated a border crossing with the AI border guard and acted either truthfully or deceptively. iBorderCtrl correctly classified 75% of the honest and deceptive participants. These results were used to justify further testing of the AI at EU borders, yet for such grave implications the study relied on a very small and non-diverse sample.
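As a purely statistical illustration of how little 30 participants can establish, the sketch below computes a Wilson score confidence interval around a 75% accuracy estimate from a sample of that size (this is a generic calculation, not a reanalysis of the O’Shea et al. data): the plausible range stretches from below 60% to nearly 90%.

```python
import math

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for an observed proportion p_hat out of n trials."""
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# 75% reported accuracy from a sample of roughly 30 participants
low, high = wilson_ci(p_hat=0.75, n=30)
print(f"95% CI for accuracy: {low:.0%} to {high:.0%}")  # roughly 57% to 87%
```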

In addition to the limited evidence for their efficacy, AI algorithms are susceptible to bias against individuals from ethnic minority backgrounds. Other AI programs, such as software for voice or face recognition, also show this problem. The former has been reported to frequently misunderstand black speakers (Lopes-Lloreda, 2020). The latter is used by police to locate criminals and has led to the arrest of an innocent black man (Hill, 2020). Although the developers of AI border guards claim that their products are free from bias, similar problems may arise with the increased use of AI border guards to assess the veracity of migrants. The initial study sample used to test the AI border system consisted mainly of white males, and more research is needed with non-white and female samples.

To summarise, AI border guards are being used to detect suspicious individuals at international borders without a scientific evidence base. They rely on analysing facial expression changes that occur in the blink of an eye, cues which are not reliable indicators of deception and can result in erroneous judgements with life-changing consequences for vulnerable individuals.

 

References

Berti, A. (2020, April 5). Finding the truth: will airports ever use lie detectors at security? Airport Industry Review. https://airport.nridigital.com/air_may20/airport_security_lie_detectors

Burgoon, J. (2018). Microexpressions are not the best way to catch a liar. Frontiers in Psychology, 9.

Ekman, P. (2003). Darwin, deception and facial expressions. Annals of the New York Academy of Sciences, 1000, 205-221.

Elkins, A., Golob, E., Nunamaker, J., Burgoon, J. & Derrick, D. (2014). Appraising the AVATAR for automated border control.

European Commission (2020, October 22). Intelligent portable border control system. https://cordis.europa.eu/project/id/700626/de

Hill, K. (2020, December 29). Another arrest, and jail time, due to a bad facial recognition match. The New York Times.

Kendrick, M. (2019, April 17). The border guards you can’t win over with a smile. BBC Machine Minds

Korkmaz, E. E. (2020, December 8). Refugees are at risk from dystopian “smart border” technology. The Conversation

Lopes-Lloreda, C. (2020, July 5). Speech Recognition Tech is yet another Example of Bias. Scientific American

O’Shea, J., Crockett, K., Khan, W., Kindynis, P., Antoniades, A. & Boultadakis, G. (2018). Intelligent deception detection through machine based interviewing. In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE. https://doi.org/10.1109/IJCNN.2018.8489392

Patrick, C. J. & Iacono, W. G. (1991). A comparison of field and laboratory polygraphs in the detection of deception. Psychophysiology, 28, 632-638.

Porter, S. & ten Brinke, L. (2008). Reading between the lies: Identifying concealed and falsified emotions in universal facial expressions. Psychological Science, 19, 508-514.

 

Emotions are everywhere but no one talks about them

November 26th 2020
By Louise O’Connor

 

Everyone experiences emotions. We might use different words but we all know what it feels like to be sad or joyful, angry, anxious or empathic. Social work practice is full of emotions but there’s been surprisingly limited research in this area. Where do emotions fit in the social work profession? How do practitioners think about their emotions? And do emotions have a value or function in everyday professional practice? These are some of the questions I explored in my PhD research.

One of the interesting findings was that the ways in which emotions are understood and thought about are both problematic and complicated.  As one participant said – “Emotions are everywhere but no one talks about them.”

Some social workers find they are caught in a double bind where emotions are a really important part of how they work, but at the same time they fear being judged negatively for even having or acknowledging emotions. And yet, my study showed that ‘emotion practices’ were crucially important, not just in developing working relationships or responding to trauma, but also in how social workers made sense of complex and difficult situations in their assessments and supervision.

If you want to know more about my research, which used observations, interviews and diaries, get in touch at louise.oconnor@rhul.ac.uk