AI Case Study
The US Department of Homeland Security has tested AI-equipped, lie-detecting computer kiosks as a way to help identify illegal immigrants
A kiosk equipped with sensors and biometric readers was used to interview travelers at airports and border crossings, running lie-detection tests and flagging concerns to human security agents. The lie-detection capabilities were based on eye movements and on changes in voice, posture and facial gestures. One researcher claimed a deception-detection success rate of up to 80 percent, better than that of human agents. However, one key issue was speed: the kiosk could not process people quickly enough for high-volume situations.
Public And Social Sector
"The U.S. Department of Homeland Security funded research of the virtual border agent technology known as the Automated Virtual Agent for Truth Assessments in Real-Time, or AVATAR, in 2011-12 and allowed it to be tested at the U.S.-Mexico border on travelers who volunteered to participate. The U.S.-Mexico border trials with the advanced kiosk took place in Nogales, Arizona, and focused on low-risk travelers. The research team behind the system issued a report after the 2011-12 trials that stated the AVATAR technology had potential uses for processing applications for citizenship, asylum and refugee status and for reducing backlogs. Since then, Canada and the European Union have tested the robot-like kiosk, which uses a virtual agent to ask travelers a series of questions. The AVATAR combines artificial intelligence with various sensors and biometrics that seek to flag individuals who are untruthful or a potential risk based on eye movements or changes in voice, posture and facial gestures."
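The flagging approach described above, in which several biometric channels (eyes, voice, posture, face) are combined to mark a traveler for human follow-up, could be sketched roughly as follows. This is a minimal illustrative fusion only: AVATAR's actual models, features, weights and thresholds are not public, so every name and number here is an assumption.

```python
# Hypothetical sketch of multi-channel risk flagging, loosely modeled on the
# AVATAR description above. Channel names, weights and the threshold are
# illustrative assumptions, not the real system's parameters.

def risk_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-channel anomaly scores, each in [0, 1]."""
    total = sum(weights.values())
    return sum(signals[ch] * w for ch, w in weights.items()) / total

def flag_for_review(signals: dict[str, float], threshold: float = 0.6) -> bool:
    """Flag a traveler for a human agent when the fused score is high."""
    weights = {"eye_movement": 0.3, "voice": 0.3, "posture": 0.2, "face": 0.2}
    return risk_score(signals, weights) >= threshold

# Example: elevated eye-movement and voice anomalies trigger a flag.
signals = {"eye_movement": 0.8, "voice": 0.7, "posture": 0.4, "face": 0.5}
print(flag_for_review(signals))  # fused score 0.63 >= 0.6, so True
```

The key design point the article implies is that the kiosk does not make the final call: a high fused score only routes the traveler to a human security agent.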
'We're always consistently above human accuracy,' claimed a researcher who worked on the technology with a team that included staff from the University of Arizona.
Dealing with rising numbers of international travelers (and rising political pressure over illegal immigration) has stretched border-control resources, leading to growing queues and concerns.
According to a researcher, the AVATAR as a deception-detection judge has a success rate of 60 to 75 percent and sometimes up to 80 percent. He said: 'Generally, the accuracy of humans as judges is about 54 to 60 percent at the most. And that's at our best days. We're not consistent.'
However, another DHS official familiar with the technology reported that it didn't work fast enough to be practical. 'We have to screen people within seconds, and we can't take minutes to do it.'