New White House policy mandates safeguards for federal AI use

Kamala Harris, shown here at a recent White House event, discussed new federal AI policy on a call with reporters. Anna Moneymaker/Getty Images

The White House also announced new hiring goals for artificial intelligence talent, a request for information on the procurement of AI and more.

The Office of Management and Budget on Thursday issued new guidance on the government’s use of artificial intelligence, which it says is meant to establish safeguards that protect the public when the government uses AI and to push agencies to use the technology to their benefit.

The new policy is one of the deliverables mandated by President Joe Biden’s executive order on AI, signed in October 2023.

“This policy is a major milestone for President Biden's landmark AI executive order,” OMB director Shalanda Young told reporters in a Wednesday press call. “The public deserves confidence that the federal government will use the technology responsibly. The policy released today is the first requiring federal agencies to adopt concrete safeguards when using AI in ways that could impact the rights or safety of the public.”

The OMB memorandum — which also fulfills requirements Congress put into law in 2020 for OMB to issue guidance on AI — follows a draft version OMB released late last year and opened for public comment.

The policy includes a raft of requirements for systems that impact people’s safety or rights, such as AI systems that affect the functioning of electrical grids or inform decisions about asylum status or government benefits.

Government agencies using those systems — other than intelligence agencies and AI in national security systems — will have to test the AI’s performance in the real-world context where it’s fielded, assemble an AI impact assessment for each system and meet other requirements by Dec. 1, or stop using the systems in question.

Agencies are also required to test systems affecting people’s rights for disparate impacts across demographic groups and notify individuals when AI meaningfully influences the outcome of negative decisions about them, like the denial of government benefits. 

Federal agencies are also instructed to maintain a system for individuals to appeal or contest an AI decision “where practicable” and to offer a way for people to opt out of AI in favor of a human alternative.

“If the [Department of Veterans Affairs] wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses,” Vice President Kamala Harris told reporters as an example of the practices the policy puts in place. 

Harris explained that OMB’s efforts build upon her earlier work to consolidate international support for a consensus on AI regulation and standards while in the United Kingdom last November. She said that the Biden administration intends to use OMB’s domestic policies as examples for other countries.

“To that end, we will continue to call on all nations to follow our lead and put the public interest first when it comes to government use of AI,” Harris said.

The guidance also includes a range of governance requirements for many federal agencies, including the creation of a compliance plan for the new requirements, enterprise AI strategies and public use case inventories.

Agencies are already required to assemble public inventories of how they use AI, but OMB says it’s adding requirements for more information on risk management and on any waivers agencies receive from the new risk management practices, which agency chief AI officers have the power to grant.

OMB’s new AI policies follow the Biden administration’s Blueprint for an AI Bill of Rights, released in October 2022, and the National Institute of Standards and Technology’s AI Risk Management Framework, unveiled in January 2023. Both highlight the need to develop trustworthy AI systems and to take a rights-preserving approach to the use of AI and machine learning technology.

Asked about what resources agencies have to implement the new guidance, a senior administration official pointed reporters to ongoing efforts to convene agency chief AI officers on a new council and to coordinate across teams within agencies.

The administration included at least $3 billion in asks for AI in government in its budget request for fiscal 2025, which starts in October.

The guidance itself also notes that OMB will be coordinating implementation “through an interagency council” to promote consistency across agencies.

OMB also released a new request for information on Thursday asking for input on how to ensure that contractors supplying AI technology to the government also follow requirements and best practices. OMB says that it will “take action” later this year “to ensure that agencies’ AI contracts align with OMB policy.”

The administration also announced Thursday that it plans to hire 100 AI workers into the government by this summer as part of the “talent surge” outlined in the October executive order. Administration officials have previously called a lack of AI talent in the government workforce “one of the biggest barriers” to using the technology.

“AI presents not only risks, but also a tremendous opportunity to improve public services and make progress on societal challenges, like addressing climate change, improving public health and advancing equitable economic opportunity,” said Young. “When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services, improve accuracy and expand access to essential public services.”