Google announces AI offering for classified environments
A “large percentage” of U.S. military and intelligence agencies briefed on the upcoming Gemini version expressed interest in the tool, a Google executive said.
Google will be offering a version of its Gemini AI model capable of working within classified environments early next year, the tech giant announced Wednesday.
A “large percentage” of the United States government’s military and intelligence enterprise has expressed interest in a specialized, air-gapped version of Google’s Gemini AI model, signaling strong demand from analysts who want to support their day-to-day workloads with the kind of AI tools that have stormed the consumer tech market over the past two years, according to Ron Bushar, who heads public sector solutions for Google-owned cyber intelligence vendor Mandiant.
Bushar said that in private meetings with the government’s military and intelligence agencies, demos of the specialized version of Gemini have piqued the interest of their workforces. He declined to name specific entities involved in the discussions.
“It’s rapidly becoming a much more interesting proposition for a lot of agencies,” he told Nextgov/FCW in an interview at Google’s Public Sector Summit in Washington, D.C. “I’ve talked to a lot of analysts who are playing around with these tools on the internet [at home] … and they can see the power of it.”
The new Gemini version comes as other tech competitors work to bring AI tools into sensitive government networks. AI systems can assist analysts with routine tasks but have typically been restricted from classified use because they are tethered to the internet or trained using parts of consumers’ conversations.
Google also unveiled new Gemini use cases across the federal civilian government on Wednesday. MK Palmore, the director of the Office of the CISO at Google Cloud, said that some immediate use cases will include chatbots and translation services. He told Nextgov/FCW that a primary goal of introducing Gemini into Google’s government cloud offerings is to reduce the administrative tasks that burden federal employees.
“When you think about individual work productivity and the responsibilities that each individual has within the timeframe that they spend nine to five, Gemini is meant to increase productivity, or help organizations and people increase productivity, so it’s present in our tooling for that purpose,” he said.
Palmore noted that Gemini can prove useful in more sensitive environments and that Google is preparing to cater to customers in those environments.
“We are building, and will continue to build, solutions that meet federal requirements,” Palmore said. “We have a number of solutions that are available at both the [impact level] 4 and [impact level] 5 for defense.”
Data security will be key to safe deployment when handling sensitive information in classified environments.
“Only that organization has access to the information associated with what they’ve put into it,” Palmore said. “So the information is not available for public use, not utilized by Google and protected and utilized only for the enterprise that has chosen that model and chosen to tweak it.”
Data scientists and advanced programmers at agencies will be able to work with Vertex AI, a complementary AI development suite for Gemini that can be internally tailored to meet staff needs, Bushar said. That includes tuning parameters that can change how the model behaves when handling certain forms of information fed to it in conversations with workers.
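For illustration, the sketch below shows roughly how such settings might be adjusted through the Vertex AI Python SDK; the project ID, model name, instructions and parameter values are placeholder assumptions rather than details from Google’s announcement.

import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig

# Point the SDK at an agency-controlled project and region (placeholder values).
vertexai.init(project="example-agency-project", location="us-central1")

# System instructions are one of the knobs teams can adjust to shape behavior.
model = GenerativeModel(
    "gemini-1.5-pro",
    system_instruction="Summarize documents conservatively and flag anything you cannot verify.",
)

# Generation settings such as temperature control how deterministic responses are.
response = model.generate_content(
    "Summarize the attached meeting notes in three bullet points.",
    generation_config=GenerationConfig(
        temperature=0.2,
        top_p=0.8,
        max_output_tokens=512,
    ),
)
print(response.text)

In an air-gapped deployment, the same kinds of settings would presumably be exposed against internal endpoints rather than the public cloud.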
The federal ecosystem has naturally been slower to adopt AI and other evolving technologies because of its incentive structures and the time agency leaders need to understand how the tools work in practice. An October 2023 executive order rolled out a sweeping AI strategy that, in part, directed agencies to name chief AI officers to encourage adoption of the technology.
An August survey from accounting firm Ernst & Young found that about half of the public sector, including local governments, has been engaging with AI tools on a daily basis, though a third of participants reported a lack of AI training initiatives in their respective agencies.