DHS generative AI pilot embraces hiccups of emerging tech
Michael Boyce, director of the AI Corps at the Department of Homeland Security, said a pilot program using generative artificial intelligence to train new asylum and refugee officers is leaning into hallucinations to better mirror real interviews.
A Department of Homeland Security official said an artificial intelligence pilot launched earlier this year to train officers in conducting interviews with individuals seeking refugee status is embracing inconsistencies in the technology’s output to better simulate real-world situations.
Speaking at the ATO and Cloud Security Summit on Thursday, Michael Boyce — director of the DHS AI Corps — said the department’s United States Citizenship and Immigration Services is leveraging generative AI tools to train new asylum and refugee officers with simulated interviews mirroring the actual conversations they are likely to have with asylum seekers.
Boyce said officers typically conduct interviews with applicants seeking refugee status that last roughly three hours. For new officer training sessions, experienced personnel would often be pulled from their normal schedules to play the part of refugees in mock interviews.
Under the new system, Boyce said officers-in-training will type out an interview question and “generative AI will pretend to be a refugee applicant and give them answers, new answers, to practice the three-hour-long interview with an automated system.”
Rather than placing strict limits on the output of these systems, Boyce said USCIS is leaning into generative AI’s less predictable responses.
“When we think about bias, well, in this particular case, I actually want this generative AI system to pretend to talk about very toxic potential events — you know, murderers, persecution — to replicate,” Boyce said.
This includes creating scenarios where potential refugee applicants have been persecuted in their home countries for their religion or personal beliefs, as well as embracing instances where generative AI systems produce incorrect or misleading information.
“I also want them to hallucinate and I want them to be a little inaccurate because you're often, in real life, working with an interpreter and there's a lot of confusion and a lot of sort of dropped things, or things that don’t quite line up or make perfect sense.”
DHS first announced the launch of the pilot in March. It's one of three new use cases the department rolled out to test the potential benefits of AI.
In announcing USCIS’s use of generative AI to help train officers, DHS said the agency “will generate dynamic, personalized training materials that adapt to officers’ specific needs and ensure the best possible knowledge and training on a wide range of current policies and laws relevant to their jobs.”
DHS Secretary Alejandro Mayorkas told reporters at the RSA Conference in May that USCIS’s AI pilot includes “teaching the machine to be reticent,” since refugees “are reticent to be forthcoming in describing that trauma” they have experienced.