NIST to Release New Playbook for AI Best Practices


Researchers will stress a socio-technical approach—which examines the human impact on technology—to mitigate biases in artificial intelligence systems.

Experts at the National Institute of Standards and Technology want public and private entities to take a socio-technical approach to implementing artificial intelligence technologies to help mitigate algorithmic bias and other risks posed by AI systems, as detailed in a new playbook. 

These recommendations to help organizations navigate the pervasive biases that often accompany AI technologies are slated to come out by the end of the week, Nextgov has learned. The playbook is meant to act as a companion guide to NIST’s Risk Management Framework, the final version of which will be submitted to Congress in early 2023.  

Reva Schwartz, a research scientist and principal AI investigator at NIST, said that the guidelines act as a comprehensive guide that public and private organizations can tailor to their internal structures, rather than function as a rigid checklist. 

“It's meant to help people navigate the framework, and implement practices internally that could be used,” Schwartz told Nextgov. “The purpose of both the framework and the playbook is to get better at approaching the problem and transforming what you do.”

She said that the playbook was created to underscore specific ways to prevent bias in AI technology, along with proactively identifying other risks, but veers away from a rigid format so as to work for a diverse range of organizations.

“We won't ever tell anybody, ‘this is absolutely how it should be done.’ We're gonna say, ‘here's the laundry list of things…here's some best practices,’” Schwartz added. 

A key point the playbook looks to impart is the need for a strong element of human oversight behind AI systems. This is the fundamental principle of the socio-technical approach to managing technology: staying aware of the human impact on technology to prevent it from being used in ways its designers did not initially intend. 

Schwartz noted that NIST has been working on controlling for three types of biases that emerge with AI systems: statistical, systemic and human. 

While plenty of bias creeps into AI systems via the mathematical foundations of their algorithms, in the form of statistical biases, Schwartz said that the playbook’s approach looks to prevent other, more human-centric biases from distorting AI technologies. 

“If we solved all the statistical bias out there, we'd still have negative impacts,” she said. “So we wanted to say, ‘hey, there's all these other sources of bias and they interact with each other.’ And so the way you're thinking about it right now, you have got to back up and see the big picture.”
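
The playbook itself is guidance, not code, but the distinction Schwartz draws can be made concrete with a toy sketch. The snippet below computes a demographic parity gap, one narrow statistical measure of skew in a model’s decisions; the data, the group labels and the demographic_parity_gap helper are hypothetical illustrations, not anything NIST prescribes.

```python
# Hypothetical illustration (not from NIST's playbook): measuring one
# narrow, statistical form of bias in a model's decisions. A model can
# pass this check and still cause harm if systemic or human biases
# shaped the data and the process around it.

def demographic_parity_gap(decisions, groups):
    """Difference in positive-decision rates between two groups.

    decisions: list of 0/1 model outputs
    groups: list of group labels ("a" or "b"), parallel to decisions
    """
    rate = {}
    for g in ("a", "b"):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return rate["a"] - rate["b"]

# Toy data: approval decisions for applicants from two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

# Prints +0.20: group "a" is approved 20 points more often than "b".
print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):+.2f}")
```

A gap of zero here says nothing about where the decisions came from, which is Schwartz’s point: the other sources of bias interact with each other and sit outside what this kind of arithmetic can see.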

Some of the recommendations intended to help private and public groups include establishing a robust governance structure with clear individual roles and responsibilities, as well as fostering a professional culture that supports critical thinking and transparent feedback on AI technologies and products.

“We're trying to say…make all these implicit things about your organization explicit,” she said.

One implicit problem the playbook looks to prevent is systemic bias, in which a business or operating process consistently skews a system’s decisions. Schwartz noted that this type of bias tends to be insidious; many programmers watch out for cognitive and statistical biases but overlook the processes that determine how a system generates biased results from the data it uses. 

“If your organization has systemic bias, it can override those human decisions,” Schwartz said. 

She clarified that all harmful, discriminatory biases need to be prevented, but that systemic risks tend to sneak into systems because they don't receive the attention given to statistical or cognitive biases. 
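
To see how a skewed business process can bake bias into a system before any algorithm runs, consider a minimal simulation. It is again purely illustrative and not drawn from the playbook: the group names, rates and historical_label helper are assumptions chosen for the sketch.

```python
import random

random.seed(0)  # make the toy numbers reproducible

# Hypothetical sketch (not NIST code): how a systemically skewed
# business process can poison training labels. Applicants from both
# groups are qualified at the same underlying rate, but the historical
# review process approved group "b" less often, so the recorded labels
# are distorted before any model is trained.

def historical_label(qualified, group):
    """Simulate a past review process that under-approves group 'b'."""
    if not qualified:
        return 0
    approval_rate = 0.9 if group == "a" else 0.5  # systemic skew
    return 1 if random.random() < approval_rate else 0

records = []
for _ in range(10_000):
    group = random.choice(["a", "b"])
    qualified = random.random() < 0.6  # same true rate for both groups
    records.append((group, historical_label(qualified, group)))

for g in ("a", "b"):
    labels = [label for grp, label in records if grp == g]
    print(f"group {g}: recorded approval rate = {sum(labels)/len(labels):.2f}")

# A model trained on these labels would "correctly" learn to approve
# group "b" less often, reproducing the systemic bias as ground truth.
```

An unbiased learning algorithm applied to these records would faithfully reproduce the skew, which is why checking the code alone, without examining the surrounding process, misses this category of risk.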

The focus of the playbook won’t be exclusively negative; while it does address AI bias, it also works to highlight how AI systems can create a positive impact.

“It's just kind of designing with impact in mind, instead of getting something out there and then deploying it and then finding out that it has all these negative impacts,” Schwartz said. “But we also want to help companies get better at being able to anticipate the positive stuff too, like [how] can your tools be used to help people.”

As a non-regulatory agency, NIST has previously advocated a more human-centric focus for AI technology regulation, working with stakeholders to build public confidence in AI technologies and to develop ethical guidance for their design.

Editor’s note: This article was updated to clarify details about the playbook’s release.