AI can make better decisions, but governance is the key

The rapid development of AI has the potential to remake how decisions are made, but as we move forward, AI decisions need to be transparent and explainable. That is not always the case today.

The rapid development of artificial intelligence has the potential to profoundly remake how the federal government delivers a broad range of core services to citizens.

When implemented correctly, these technologies can help the government render decisions faster, using better data, at a far lower cost in vital areas ranging from awarding disability benefits to granting patents to adjudicating immigration applications and health insurance benefits.

Pursuing AI transformation will also position the government to better respond to sudden surges in demand for relief caused by pandemics or other unforeseen emergencies.

Right now, AI strategies are at an early adoption stage in most administrative agencies, with just 45 percent of surveyed agencies reporting an AI use case, according to a recent report, Government by Algorithm, issued by Stanford University and NYU to the Administrative Conference of the United States. Among the agencies that have implemented AI, only 12 percent of the applications were considered highly sophisticated by Stanford’s computer scientists.

Already some fear that employing AI-based systems to determine who qualifies for government benefits could lead to dystopian outcomes, in which human judgment and rules-based adjudication are displaced by "black box" algorithms accountable to no one.

The report frames the challenge in the following terms: “When public officials deny benefits or make decisions affecting the public’s rights, the law generally requires them to explain why. Yet many of the more advanced AI tools are not, by their structure, fully explainable. A crucial question will be how to subject such tools to meaningful accountability.”

Our experience shows the opposite: the government can deploy AI to make adjudication more transparent and accountable to citizens while rendering millions of decisions faster and more accurately. To realize these benefits, each agency must design an intelligent automation path informed by a detailed understanding of its mission and regulatory environment. And system architects must set clear information governance parameters around what AI algorithms are expected and permitted to do.

Empowering Knowledge Workers

Intelligent automation begins by evaluating where human beings add the most significant value in any given process and which elements are most repetitive, tedious, and prone to human error. Our design goal is to use AI to elevate former document processors into knowledge workers while eliminating lower-value work to the greatest extent possible. Let humans do what we do best, and let intelligent machines do the rest.

Intelligent data capture uses machine learning to extract and verify data from the non-standard documents and forms often submitted in government programs, eliminating scanning and rekeying at the front end of data assembly. Natural language processing, optical character recognition, and machine-learning-based document classification continuously improve in accuracy as they process higher volumes and adjust to user input. These technologies never suffer from eye strain, repetitive motion injuries, or distraction while doing tasks that would be mind-numbing to many humans.
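To make this concrete, the sketch below shows one way machine-learning document classification might work once text has been extracted from scanned forms. The form categories, training snippets, and choice of scikit-learn are illustrative assumptions, not a description of any particular agency system.

```python
# Minimal sketch: ML-based document classification on already-OCR'd text.
# Assumes upstream text extraction (OCR) has produced plain-text snippets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled snippets from previously processed submissions.
train_texts = [
    "claimant reports inability to work due to chronic back injury",
    "request for continuation of monthly disability payments",
    "specification and claims for a novel battery electrode design",
    "prior art references cited against pending patent application",
]
train_labels = ["disability", "disability", "patent", "patent"]

# TF-IDF features feeding a linear classifier; accuracy improves as more
# human-verified documents are added to the training set over time.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_texts, train_labels)

new_doc = "appeal of denied disability benefit citing updated medical evidence"
print(classifier.predict([new_doc])[0])           # predicted document type
print(classifier.predict_proba([new_doc]).max())  # confidence, usable for routing or review
```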

Robotic process automation (RPA) has traditionally been used to execute repetitive tasks that require adherence to exacting policies but little human oversight. We have advanced this focus by leveraging RPA to develop a digital workforce. Working side by side with knowledge workers, AI-powered robots can compare and resolve discrepancies across a vast number of data sources in seconds, identifying missing documents and pinpointing areas that require further investigation.
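The following simplified sketch illustrates the kind of reconciliation a digital worker performs: comparing one case across two data sources and flagging gaps for human review. The record fields and source names are hypothetical.

```python
# Minimal sketch: a "digital worker" reconciling the same case across two
# hypothetical data sources and flagging issues for a knowledge worker.
REQUIRED_DOCS = {"application_form", "identity_proof", "medical_report"}

case_system = {"case_id": "A-1001", "dob": "1980-05-02",
               "docs": {"application_form", "identity_proof"}}
external_registry = {"case_id": "A-1001", "dob": "1980-05-20"}

def reconcile(case, registry):
    findings = []
    if case["dob"] != registry["dob"]:
        findings.append(f"DOB mismatch: {case['dob']} vs {registry['dob']}")
    missing = REQUIRED_DOCS - case["docs"]
    if missing:
        findings.append(f"Missing documents: {', '.join(sorted(missing))}")
    return findings

# Each finding becomes a work item for the caseworker instead of requiring
# them to hunt through source systems manually.
for issue in reconcile(case_system, external_registry):
    print(issue)
```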

Automated decision support uses AI to apply the hundreds or even thousands of discrete regulatory, technical, and business rules relevant to a precise situation and place them at the fingertips of the knowledge worker. Complex problems that might otherwise have called for the intervention of a CPA, engineer, or other expert can be anticipated and resolved in advance. The agency can update business logic parameters to reflect new legislation or regulatory interpretations without extensive retraining.
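One common way to realize this kind of decision support is a declarative rule set evaluated against each case, as in the sketch below. The rules, citations, and thresholds are invented for illustration; updating them amounts to editing the rule list rather than retraining staff or rewriting applications.

```python
# Minimal sketch: business rules evaluated against a case and surfaced to the
# knowledge worker, rather than hard-coded deep in application logic.
RULES = [
    ("Reg 4.2(a): income must not exceed program threshold",
     lambda c: c["annual_income"] <= 25_000),
    ("Reg 4.3(b): medical evidence must be less than 12 months old",
     lambda c: c["evidence_age_months"] < 12),
    ("Policy 7.1: prior denial requires supervisor review",
     lambda c: not c["prior_denial"]),
]

def evaluate(case):
    # Return every rule outcome so the reviewer sees the full picture.
    return [(description, check(case)) for description, check in RULES]

case = {"annual_income": 22_400, "evidence_age_months": 14, "prior_denial": False}
for description, passed in evaluate(case):
    print(("PASS" if passed else "REVIEW"), "-", description)
```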

Dynamic work assignment delivers each case to the worker whose experience, skills, and availability are best suited to resolve it quickly and accurately. Automated routing reduces the need for multiple handoffs and escalations of complex cases by assigning each case to the right person the first time, leading to swift, high-quality decisions.
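A toy version of skills- and availability-based routing might look like the following; the scoring formula and worker profiles are assumptions made purely for illustration.

```python
# Minimal sketch: route each incoming case to the available worker whose
# skills best match the case requirements (hypothetical scoring).
workers = [
    {"name": "Alice", "skills": {"disability", "appeals"}, "open_cases": 4},
    {"name": "Bram",  "skills": {"patents"},               "open_cases": 2},
    {"name": "Chen",  "skills": {"disability"},            "open_cases": 1},
]

def assign(case):
    def score(worker):
        skill_match = len(case["needs"] & worker["skills"])
        return (skill_match, -worker["open_cases"])  # prefer skills, then lighter load
    best = max(workers, key=score)
    best["open_cases"] += 1
    return best["name"]

print(assign({"id": "A-1001", "needs": {"disability", "appeals"}}))  # -> Alice
print(assign({"id": "A-1002", "needs": {"disability"}}))             # -> Chen
```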

A digital twin of the entire workflow provides a real-time simulation, fed by operational data and sensors, that offers a transparent view of system performance and identifies areas for further improvement and capacity planning. Agencies can use this digital twin for scenario planning to test how changes in policies or work processes would affect critical performance metrics.
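In spirit, scenario planning against a workflow digital twin can be as simple as replaying projected demand against a model of processing capacity. The crude day-by-day simulation below compares backlog under assumed baseline, surge, and surge-plus-automation scenarios; every rate and figure is an illustrative assumption.

```python
# Minimal sketch: compare remaining backlog under different staffing and
# automation scenarios (all rates below are illustrative assumptions).
import random

def simulate(days, arrivals_per_day, staff, cases_per_worker_per_day):
    random.seed(0)  # deterministic runs so scenarios are comparable
    backlog = 0
    for _ in range(days):
        backlog += random.randint(arrivals_per_day - 20, arrivals_per_day + 20)
        backlog = max(0, backlog - staff * cases_per_worker_per_day)
    return backlog

baseline = simulate(days=90, arrivals_per_day=400, staff=35, cases_per_worker_per_day=11)
surge = simulate(days=90, arrivals_per_day=600, staff=35, cases_per_worker_per_day=11)
surge_with_automation = simulate(days=90, arrivals_per_day=600, staff=35, cases_per_worker_per_day=16)

print(baseline, surge, surge_with_automation)  # backlog remaining in each scenario
```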

Taken together, these elements of intelligent automation can enable the federal government to achieve efficiency gains of 50 percent or more, even on highly complex benefits and adjudication programs. Just as importantly, AI can help reduce pending caseload backlogs, respond to demand surges, and increase the accuracy of decision-making. In our experience, these technologies also create a more engaged workplace, since caseworkers have the tools to access the relevant data and policies that inform decisions and can focus their energies on improving interactions with citizens.

Ensuring Robust Information Governance

For AI to achieve broad acceptance in high-stakes adjudication, the public needs to be confident that the algorithms powering machine learning are fair and verifiable. For this reason, robust information governance is a prerequisite for deploying AI in the government sector.

Proper design of an intelligent automation workflow creates a fully auditable decision trail that shows what evidence or documentation the user relied on, how these factors were weighted, and what legal, regulatory, or technical criteria were applied. This level of detail is invaluable for verifying the accuracy of outcomes; it also provides an objective record in the event of any escalation or appeal. In most cases, a human knowledge worker still makes the ultimate determination, but a small army of AI assistants operating in the background gives that worker the tools to be vastly more efficient.
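In code, an auditable decision trail can be as straightforward as a structured record attached to every adjudication, as sketched below. The field names and values are hypothetical.

```python
# Minimal sketch: an audit record attached to every adjudication, capturing
# the evidence consulted, the rules applied, and the final human decision.
import json
from datetime import datetime, timezone

decision_record = {
    "case_id": "A-1001",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "evidence": ["medical_report_2024.pdf", "income_statement_q1.pdf"],
    "rules_applied": [
        {"rule": "Reg 4.2(a) income threshold", "result": "pass", "weight": 0.6},
        {"rule": "Reg 4.3(b) evidence recency", "result": "review", "weight": 0.4},
    ],
    "system_recommendation": "approve_pending_review",
    "final_decision": "approved",
    "decided_by": "caseworker_417",  # the human remains the decision-maker
}

# Persisted as an append-only log entry for any escalation or appeal.
print(json.dumps(decision_record, indent=2))
```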

Where AI makes critical-path decisions without human intervention, agencies must employ explainable AI technology. Explainable AI is subject to oversight by human experts and avoids implicit bias that could run contrary to legislative intent or social values. Such "glass-box" algorithms contrast with "black-box" AI that reaches conclusions in a manner opaque even to its designers.
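One lightweight form of glass-box modeling is a linear classifier whose per-feature contributions can be read out directly for every decision, as in the sketch below. The features and training data are invented for illustration only.

```python
# Minimal sketch: a "glass-box" linear model whose reasoning can be inspected
# feature by feature for each applicant (data and features are illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_ratio", "evidence_age_months", "prior_claims"]
X = np.array([[0.4, 2, 0], [0.9, 18, 3], [0.5, 6, 1], [1.2, 24, 4]])
y = np.array([1, 0, 1, 0])  # 1 = eligible in this toy dataset

model = LogisticRegression().fit(X, y)

applicant = np.array([0.6, 10, 1])
contributions = model.coef_[0] * applicant  # per-feature pull toward the outcome
for name, value in zip(features, contributions):
    print(f"{name:22s} contribution: {value:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```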

Strict business rules govern which data sets can be accessed, how the privacy of sensitive information is protected, and how outcomes are shared. AI can rapidly flag policy deviations so the agency can intervene early and provide remediation or retraining before a human error becomes a systemic problem.

AI also provides a powerful tool to combat commercial actors and fraudsters who seek to "game the system" of government benefits and regulatory enforcement. Neural networks can continuously seek out emerging patterns that indicate a new scheme of abuse or signal that a private actor may have "reverse-engineered" government processes to obtain more favorable outcomes. Automated risk scoring using machine learning helps focus enforcement resources and ensure program integrity.
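As one hedged example of machine-learning risk scoring, an unsupervised anomaly detector can surface submissions whose filing pattern deviates from the norm so investigators look there first. The features and data below are hypothetical and are not a production fraud model.

```python
# Minimal sketch: anomaly-based risk scoring to prioritize reviews
# (features and data are hypothetical assumptions).
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: claims filed per month, average claim amount, distinct bank accounts used.
history = np.array([
    [1, 1200, 1], [2, 900, 1], [1, 1500, 1], [1, 1100, 1],
    [2, 1300, 1], [1, 1000, 1], [9, 4800, 4],  # last row: unusual filing pattern
])

detector = IsolationForest(random_state=0).fit(history)
risk_scores = -detector.score_samples(history)  # higher = more anomalous

# Highest-risk submissions go to investigators first.
for row, score in sorted(zip(history.tolist(), risk_scores), key=lambda p: -p[1])[:3]:
    print(row, round(float(score), 3))
```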