2023 was just the start of generative AI’s rise, government and industry leaders say



Generative artificial intelligence is not just a buzzword. The number of use cases is growing and looks poised to keep doing so.

In August, Connecticut-based IT research and consulting firm Gartner – famous for its Magic Quadrant market research reports the tech industry intently monitors – weighed in on the 2023 hype surrounding generative artificial intelligence, projecting the technology would reach “transformational benefit within two to five years.”

“The popularity of many new AI techniques will have a profound impact on business and society,” said Arun Chandrasekaran, distinguished VP analyst at Gartner, in a press release. “The massive pretraining and scale of AI foundation models, viral adoption of conversational agents and the proliferation of generative AI applications are heralding a new wave of workforce productivity and machine creativity.”

Two months later, amid much greater buzz for the technology, Gartner updated its forecast for generative AI as Congress questioned leaders on the technology and President Biden issued a 111-page executive order on artificial intelligence.

“Generative AI has become a top priority for the C-suite and has sparked tremendous innovation in new tools beyond foundational models,” Chandrasekaran said in October. Gartner predicted that by 2026, more than 80% of enterprises will have used generative AI, up from less than 5% in 2023.

Gartner defines generative AI as “AI techniques that learn a representation of artifacts from data, and use it to generate brand-new, unique artifacts that resemble but don’t repeat the original data.”

OpenAI’s ChatGPT and Google’s Bard are just two examples of generative AI chatbots based on large language models that can produce novel content, like text, images, video, audio, code and more. Gartner notes generative AI-produced content can serve “benign or nefarious purposes,” and while that duality still fuels debate among experts — is generative AI good or bad? — most tech leaders in government and industry no longer debate the technology’s potential.

In short, 2023 was the year of generative AI, and government and tech industry leaders have a lot to say about its promise.

Generative AI could lead to a “fundamental rewiring of the tech landscape and be an incredible accelerant to human ingenuity,” Karen Dahut, chief executive officer of Google Public Sector, told Nextgov/FCW at the company’s summit on Oct. 17. “AI is already having a positive impact on the way we live, work and learn, and in how the government delivers services. We approach it boldly and responsibly.”

Dahut, who has more than two decades of experience serving government customers, adroitly pointed out that generative AI is not just an emerging technology, but rather a technology that has emerged. It’s here.

On Dec. 12, the Government Accountability Office released a landmark report reviewing federal agencies’ artificial intelligence inventories, documenting some 1,241 “current and planned AI use cases” across 20 non-defense agencies.

Conducted over a year under the comptroller general’s authority, the report indicates NASA and the Commerce Department were the two most aggressive AI adopters — with 390 and 285 AI use cases, respectively — and highlights how varied the technology’s uses have become.

For example, NASA uses AI to intelligently target specimens collected by planetary rovers, while Commerce employs the technology to count seabirds from drone photos. The Department of Health and Human Services uses AI chatbots to automate email responses for its help desk, and the Office of Personnel Management uses AI to provide additional job recommendations to users of its USAJobs website, matching them with skills and opportunity descriptions.

GAO’s report does not specifically detail generative AI in action, though it does acknowledge the growing popularity of “second-wave AI” systems like ChatGPT, even among a necessarily more cautious government tech user base.

“AI is driving the car whether we want it to or not,” Kevin Walsh, director of IT and cybersecurity at GAO, told Nextgov/FCW in December. “Is somebody going to take the wheel and put some guardrails around this thing, or is it going to keep doing what it wants?”

In a conversation with Nextgov/FCW earlier this year, Gary Washington, the Agriculture Department’s chief information officer, outlined why federal agencies — especially those with troves of data that fuel the kind of large language models these systems run on — need to be cautious.

“AI is sexy to people now, and everybody wants to dip or get their foot into the area, if you will. But because we deliver so many services to the American public, we don't want to put ourselves in a situation where we don't provide those services in an equitable manner,” Washington said. “So I think we have to be very cautious about how we implement AI and what those use cases are.”

Without guardrails, Washington said, AI “could potentially be dangerous.”

A November white paper released by the Advanced Technology Academic Research Center, in partnership with Google, adds cybersecurity threats and misinformation to the list of potential risks, alongside the bias concerns Washington raised.

Titled “Tackling a Foundational Challenge: Generative AI in Federal Spaces,” the white paper is an unattributed summary of roundtable discussions among experts from government, industry and academia. Its government participants said they are looking first to generative AI tools “to help the workforce with research, queries, contracting, text-generation and coding.”

“These small adjustments can add up to improve the overall operational efficiency of the organization,” the paper states. “There’s so much transformation potential with generative AI, but also the potential risk if not implemented properly. As we harness AI’s potential, we must ensure responsible use. This is crucial for maintaining public trust and ethical governance.”

The paper outlines the importance that data — and specifically the quality of an organization’s data — has when fused with generative AI tools. In short, garbage in equals garbage out, or in some cases, garbage in equals dangerously bad, hallucinated garbage out.

“The importance of quality data is underscored by the fact that certain AI models simply will not work if there are holes in datasets, and other models will invent, or hallucinate, information if faulty data is introduced,” the white paper states. “For some agencies, the inadvertent introduction of incorrect or protected data into an AI model can pollute an entire dataset, making AI outputs unreliable and potentially harmful.”
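The white paper doesn’t prescribe tooling, but the idea translates directly into practice: screen data for holes and duplicates before it ever reaches a model. Below is a minimal sketch in Python of such a pre-ingestion quality gate; the function name, threshold and sample records are illustrative assumptions, not drawn from the paper.

```python
# Minimal sketch of a pre-ingestion data-quality gate for an AI pipeline.
# All names, thresholds and records here are illustrative, not from the ATARC paper.
import pandas as pd

def audit_dataset(df: pd.DataFrame, max_missing_ratio: float = 0.05) -> list[str]:
    """Return a list of problems that should block ingestion into a model."""
    problems = []
    for column in df.columns:
        # Flag "holes in datasets": columns with too many missing values.
        missing_ratio = df[column].isna().mean()
        if missing_ratio > max_missing_ratio:
            problems.append(
                f"column '{column}' is {missing_ratio:.0%} empty "
                f"(limit {max_missing_ratio:.0%})"
            )
    # Flag verbatim duplicate rows, which can silently skew a model.
    duplicates = df.duplicated().sum()
    if duplicates:
        problems.append(f"{duplicates} duplicate rows")
    return problems

# Example: a toy dataset with a hole-riddled column and a duplicate row.
records = pd.DataFrame({
    "agency": ["NASA", "USDA", "USDA", "HHS"],
    "use_case": ["rover targeting", "chatbot", "chatbot", None],
})
issues = audit_dataset(records)
if issues:
    print("Blocked from ingestion:")
    for issue in issues:
        print(" -", issue)
else:
    print("Dataset passed basic quality checks.")
```

In a real pipeline, a gate like this would sit in front of fine-tuning or retrieval ingestion, so incomplete or duplicated records are caught before they can pollute a dataset and make outputs unreliable.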

Jeff Frazier, head of global public sector at Snowflake, told Nextgov/FCW in a December interview that organizations “first need to get their data strategy right” if they’re looking for long-term success in AI endeavors.

“I always say, be boring and valuable,” Frazier said. “Get that data strategy right. Then experiment where practical, legal, ethical and moral. Find out what you have, start with AI on something already acceptable and then experiment to optimize.”

Dave Levy, who recently stepped into the role of vice president of worldwide public sector at Amazon Web Services, told Nextgov/FCW at the company’s re:Invent conference in Las Vegas that we’re “three steps into a marathon” with generative AI. But already, Levy said, two commonalities among AWS customers and public sector organizations are clear.

“Every organization wants to know if they have put their data into some system whether that data is going to be secure, and that they’re going to have provenance over that data and that it doesn’t just disappear somewhere to where they won’t have control over it,” Levy said. “And the second one is responsible AI. And so everything we do is built around responsible AI and building in things like explainability.”

Levy added that generative AI’s potential in any organization is likely to correlate with the effectiveness of its cloud computing strategy, given the compute power and storage that generative AI tools demand.

“The advice, step one, is if you really want to take advantage of generative AI, and the tools that are out there, you’re going to get to the cloud,” Levy said.

The ATARC white paper also addresses the human variable in generative AI’s potential. The existing workforces in industry and government will both require training and upskilling to keep pace with generative AI-based tools. Another fundamental challenge is recruiting AI practitioners, data scientists and others with new tech skill sets.

The white paper cites Stanford University’s AI Index 2023 study, which indicates that about 60% of students graduating with doctoral degrees in AI head to work in the private sector, while just under 40% continue in academia. Fewer than 1% of those graduates go to work for the government.

The federal government could certainly use a youth infusion, given the growing age gap of its workforce. In early December, OPM CIO Guy Cavallo told Nextgov/FCW that last year only 7% of federal employees were under the age of 30, while about 1 in 3 feds were retirement-eligible employees 55 or older.

Twenty-five years ago, Cavallo said, the numbers of feds under 30 and over 55 were about equal. Cavallo, however, said he was able to hire six recent college graduates through the Pathways internship program, in addition to offering AI classes for existing staff.

“The government must take a different approach to recruit and retain talent skilled in AI technology,” the white paper states, suggesting labs, incubators, or centers of excellence to help attract talent interested in driving meaningful change. Yet the white paper argues that because college textbooks on generative AI are in their nascency, the best bet might simply be to reskill the talent you have.

“With such a huge demand for AI talent from an industry in its infancy, there are simply not enough experts available. As such, the government must train and grow experts from their existing talent,” the white paper said.

There may be uncertainty regarding how industry and government can responsibly use generative AI, but there seems little doubt this technology has a big role to play for government — and, by proxy, the American public — in the years ahead.

“We are really talking about something special,” Ian Buck, vice president and general manager of NVIDIA's Hyperscale and HPC Computing Business, told Nextgov/FCW in late November. “It’s making [people] co-pilots.”

Editor’s note: ATARC is owned by GovExec, Nextgov/FCW’s parent company.
