Agencies report over 1,700 AI use cases


While federal agencies have finalized their artificial intelligence inventories, discrepancies persist in how they report risk mitigation.

Federal agencies have submitted their complete inventories of artificial intelligence use cases to the White House, part of an effort to ensure trustworthy and transparent deployment.

Following Monday's deadline to report such programs, 37 agencies submitted over 1,700 use cases to the White House Office of Management and Budget. The submissions fulfill a provision in President Joe Biden's October 2023 executive order on AI that asks agencies to detail how AI is affecting their business operations and what risk mitigation efforts are in place to ensure those systems are deployed safely.

Overall, the top three categories for AI use cases in federal operations were mission-enabling and operations support, health and medical system support, and government service operationalization.

Of the 1,757 total use cases reported, 227 were identified as civil rights- or safety-impacting. The Department of Health and Human Services leads agencies in total use cases deployed, with 271 active AI use cases, four of them rights-impacting.

Following HHS, the Department of Veterans Affairs reported 229 active AI use cases, 145 of them classified as rights-impacting at various levels. The Department of Homeland Security and the Department of the Interior came next, reporting 183 and 180 active AI use cases, respectively.

Though the added transparency is seen as a positive development in ongoing AI policy, analysts noted inconsistencies in how agencies document their AI usage.

“Agencies’ updated AI use case inventories are a reflection of both increased AI usage across the federal government as well as improved documentation of agencies’ existing use cases. Some updated inventories contain significantly more information than years passed [sic], marking important progress,” Quinn Anex-Ries, a senior policy analyst at the Center for Democracy & Technology, told Nextgov/FCW in an email.

“However, there are still significant inconsistencies in documentation practices between agencies and limited information about how agencies are evaluating the risk level of AI systems," she added. "Agencies now have the chance to learn from one another and the public about how these inventories can be improved upon in the future.”