Passengers at some European airports will soon be questioned by artificial intelligence-powered lie detectors at border checkpoints, as a European Union trial of the technology is set to begin.
Fliers will be asked a series of travel-related questions by a virtual border guard avatar, and artificial intelligence will monitor their faces to assess whether they are lying.
The avatar will become “more skeptical” and change its tone of voice if it believes a person has lied, said Keeley Crockett of Manchester Metropolitan University in England, who was involved in the project. Passengers flagged as suspect will be referred to a human guard, while those judged to be honest will be allowed to pass through.
The €4.5 million ($5.1 million) project, called iBorderCtrl, will be tested this month at airports in Hungary, Latvia and Greece on passengers traveling from outside the EU, with the aim of reducing congestion.
“It will ask the person to confirm their name, age and date of birth, (and) it will ask them things like what the purpose of their trip is and who is funding the trip,” said Crockett.
But privacy groups have raised concerns about the trial.
“This is part of a broader trend towards using opaque, and often deficient, automated systems to judge, assess and classify people,” said Frederike Kaltheuner, data program lead at Privacy International, who called the test “a terrible idea.”
The technology has been tested in its current form on only 32 people, and scientists behind the project are hoping to achieve an 85% success rate.
Previous facial recognition algorithms have been found to have higher error rates when analyzing women and darker-skinned people, with an MIT study earlier this year finding that technology developed by companies including IBM and Microsoft contained biases.
“Traditional lie detectors have a troubling history of incriminating innocent people. There is no evidence that AI is going to fix that – especially a tool that has been tested in 32 people,” Kaltheuner added.
“Even seemingly small error rates mean that thousands of people now have to prove that they are honest people, just because some software said they are liars,” she said.
“I don’t believe that you can have a 100% accurate system,” Crockett said in response to criticism, adding that the technology will become more accurate as it is tested on more passengers.
The system will be overseen by human guards, who can see the results of the AI tests on each passenger.
Only passengers who give their consent will come face-to-face with the technology in its initial trial; consent forms will be available at the airports on arrival.
The system “will collect data that will move beyond biometrics and on to biomarkers of deceit,” said project coordinator George Boultadakis, of information technology service company European Dynamics in Luxembourg.