Forget the movies. AI's potential lies in more mundane but lucrative applications
But contractors face culture and technical obstacles
- By Derek B. Johnson
- Mar 09, 2017
More than 30 years after “The Terminator” hit theaters, companies that work with artificial intelligence are still battling the impressions created by it and movies like it. The common pop culture perception of AI, a cold, calculating machine consciousness that is omnipresent and all-knowing, often bears little resemblance to the more mundane applications AI has for most governments and businesses today.
Technology contractor InferLink works on artificial intelligence research and development under a series of federal small business grants. Matthew Michelson, chief scientist, said that the current role of artificial intelligence in government is more about fulfilling low-level data entry and processing tasks that free human employees to take on more high-level analysis.
“One unfortunate byproduct of AI hype is that people now think it can do everything. You have to be particularly careful when dealing with agencies that don’t know a lot about AI, you have to set expectations,” Michelson said. “[Government agencies] are very good at collecting data. It’s doing something with that data that is the challenge.”
As a result, even as the AI market stands at the dawn of a potential golden age of expansion, with most industry research firms predicting exponential growth over the next five to 10 years, contractors still face technical and cultural obstacles to breaking into the federal market.
“AI” can mean many different things
The term “artificial intelligence” can be a vague and ambiguous phrase, one that often acts as an umbrella for a range of different technologies: automation and machine learning, high-level text and voice recognition and processing, deep cognitive learning and other tools. At its core, most AI can be defined as software capable of ingesting, processing and analyzing large amounts of disparate data, then applying any lessons learned to improve future iterations.
How well the software learns depends largely on two things: an ample, ready supply of quality data and the interaction and feedback it gets from its human supervisors.
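That feedback loop can be sketched in miniature. The toy classifier below is an illustration only, not any vendor’s product: a simple perceptron that flags short text records as spending-related, updating its weights only when a human-supplied label corrects a wrong prediction. The keyword list and sample records are invented for the example.

```python
# Minimal sketch of learning from labeled human feedback.
# Toy task: flag records that mention spending-related keywords.

KEYWORDS = ["invoice", "payment", "contract", "travel", "memo", "agenda"]

def features(text):
    """Bag-of-keywords feature vector for a short text record."""
    words = text.lower().split()
    return [1 if k in words else 0 for k in KEYWORDS]

class Perceptron:
    def __init__(self, n_features):
        self.w = [0.0] * n_features
        self.b = 0.0

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score > 0 else 0

    def feedback(self, x, label):
        """A human supervisor confirms or corrects a prediction;
        the model only updates when its prediction was wrong."""
        error = label - self.predict(x)  # -1, 0, or +1
        if error:
            self.w = [wi + error * xi for wi, xi in zip(self.w, x)]
            self.b += error

# Each (record, label) pair stands in for a human judgment.
training = [
    ("invoice for travel payment", 1),
    ("meeting agenda memo", 0),
    ("contract payment due", 1),
    ("weekly agenda", 0),
]

model = Perceptron(len(KEYWORDS))
for _ in range(5):                     # a few passes over the feedback
    for text, label in training:
        model.feedback(features(text), label)

print(model.predict(features("payment invoice attached")))  # → 1
print(model.predict(features("agenda memo")))               # → 0
```

The point of the sketch is the dependency the paragraph describes: with no labeled data the model predicts nothing useful, and its accuracy on new records is bounded by the quality and coverage of the feedback it receives.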
Major technology companies like IBM provided cognitive learning tools to the federal government as early as 2010, according to Sam Gordy, IBM’s federal general manager. In addition to providing AI tools to intelligence agencies, IBM has also started to use Watson, one of the most prominent mainstream examples of modern artificial intelligence, on the defense and civilian sides of government. That work includes a pilot with the Army to optimize vehicle repair cycles and a virtual call center assistant for the 2020 Census. Gordy likened the early development of Watson to bringing on a new hire.
“On the first day when you take Watson out of the box and start working with it, it’s an intern. You can ask it intern questions and you’ll get back intern answers,” Gordy said. “It takes time to develop it into an expert through engagement of your [own] subject matter experts that you bring in to help train it.”
A promising but unclear market
For a variety of reasons, it can be difficult to pin down exactly how large the current market for artificial intelligence is. Most market forecasters peg it at approximately $500 million to $600 million, but it is expected to grow into a multibillion-dollar sector as soon as 2020 to 2025. In the government space, much of the current work is in the research and development or pilot phase. Additionally, some of the most promising examples in the field are shrouded behind the national security veil, as the intelligence and military communities have been among the most prominent early adopters.
Though unable to discuss contract specifics, companies like Immuta have been providing the intelligence community with AI and machine learning programs for years, according to CEO Matthew Carroll. Even as governments are increasingly open to using AI and machine learning, Carroll said the market is unlikely to see a huge wave of procurements around the technology in the near future. Instead, he believes growth will come in the form of contractors offering AI as a component of their more traditional solutions during the bidding process, similar to how cloud computing or big data analytics became a value-add over the past decade.
“You’re starting to see CEOs ask, ‘How do we get a competitive advantage in selling to the government? What’s going to make us competitive?’” Carroll said. “I don’t think you will see AI-based contracts, but [rather contractors] utilizing machine learning as an additive when they go into procurement as a differentiator.”
Technical and cultural roadblocks remain
Obstacles to greater AI adoption in the public sector remain, both for technical reasons and because of cultural and regulatory restrictions unique to the federal government. One roadblock consistently cited by major contractors is getting the agency or office to buy in to the technology and the disruption it may cause to the status quo.
“The whole organization as a culture has to adopt it, because typically you’re going from a qualitative organization to a quantitative organization, and that requires a huge culture shift,” said Carroll.
Another potential concern is what AI providers refer to as “the black box problem.” Most AI tools can analyze vast amounts of data and, over time, provide humans with actionable intelligence. What they are generally not good at is explaining how those conclusions were reached or which patterns were used to reach them.
That can understandably make government decision makers nervous if they are on the hook for the consequences, and there are signs the market is reacting to these concerns. Last year DARPA put out a solicitation for “explainable AI,” a machine learning system that can reverse engineer, summarize and explain the process behind the decisions made by other AI systems.
Privacy and regulatory barriers also make for a tricky environment for AI tools, which often rely on access to massive amounts of data that carry legal restrictions and are dispersed among many different stakeholders. A system that has access to many different forms of government data can present unique privacy and security risks, said Michael Daly, chief technology officer at Raytheon.
“On the policy side, we’re going to have to think, as a society and institutions, how much information do we want to give these computers and how long should they keep it?” said Daly. “There are lots of privacy issues around allowing machines to have access to very large volumes of data.”
Derek B. Johnson is a former senior staff writer at FCW.