AI and CMMC: A double-edged sword for defense contractors

Here’s how to get ahead of the problem and turn AI into a compliance asset, writes AJ Yawn, a governance, risk and compliance expert.
The Pentagon’s Cybersecurity Maturity Model Certification initiative, a program for verifying that defense contractors have implemented the cybersecurity controls required to protect sensitive government information, requires those contractors to take concrete steps to protect controlled unclassified information (CUI). These requirements are substantial: companies that fail to comply risk losing their contracts.
Recently, the rise of artificial intelligence has added even more complexity to contractors’ CMMC compliance efforts. This has created a real and immediate problem: AI tools are inadvertently expanding organizations’ CMMC assessment boundaries, introducing new attack vectors into CUI environments and complicating assessments.
For example, an employee may paste a CUI document excerpt into a commercial large language model such as ChatGPT, inadvertently transmitting CUI to a cloud environment not authorized or assessed under the company’s CMMC boundary. Doing so is a potential breach of CMMC requirements and a scope violation.
AI tools also introduce risk when used to draft policies, procedures and system security plan content. AI-generated output looks authoritative but may be inaccurate, generic or describe controls that do not match the actual technical environment. When using AI for these purposes, every implementation description still needs to be verified against the actual environment before it goes into a compliance review.
The good news is that contractors can also deploy AI tools to enhance CMMC compliance. Specifically, AI can help by automating evidence collection, system security plan generation and continuous monitoring.
In the area of evidence collection automation, AI-powered tools can reduce the cost of compliance by assisting with queries of the environment’s identity platforms, configuration management systems and security tools. AI can also help process raw output into consistently formatted artifacts and flag anomalies such as accounts with unexpected permissions, systems not enrolled in endpoint protection and patches that exceed remediation timelines.
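To make that concrete, here is a minimal sketch of the anomaly-flagging pass such a tool might perform. The account fields, the approved-admin list and the 30-day patch window are illustrative assumptions rather than prescribed values:

```python
from datetime import date, timedelta

# Hypothetical export from an identity platform; field names are illustrative.
accounts = [
    {"user": "jsmith", "roles": ["user"], "endpoint_protected": True,
     "last_patch": date(2025, 5, 1)},
    {"user": "svc-backup", "roles": ["domain-admin"], "endpoint_protected": False,
     "last_patch": date(2025, 1, 15)},
]

EXPECTED_ADMINS = {"itadmin"}   # assumption: the organization's approved admin list
PATCH_SLA = timedelta(days=30)  # assumption: the organization's remediation timeline

def flag_anomalies(records, today):
    """Turn raw account records into consistently formatted findings."""
    findings = []
    for r in records:
        if "domain-admin" in r["roles"] and r["user"] not in EXPECTED_ADMINS:
            findings.append((r["user"], "unexpected admin permissions"))
        if not r["endpoint_protected"]:
            findings.append((r["user"], "not enrolled in endpoint protection"))
        if today - r["last_patch"] > PATCH_SLA:
            findings.append((r["user"], "patch age exceeds remediation timeline"))
    return findings

for user, issue in flag_anomalies(accounts, date(2025, 6, 1)):
    print(f"{user}: {issue}")
```

The specific checks matter less than the pattern: raw platform exports become consistent, assessor-ready findings.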
When used correctly, AI tools are also effective for supporting the drafting of system security plans. An AI assistant can review a draft plan and identify missing implementation descriptions, inconsistencies between sections or controls that are documented without the appropriate references. In addition, AI can map existing policy documents to the applicable CMMC requirements they satisfy, identifying policy gaps and redundancies.
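As a simplified illustration of the mapping idea, the sketch below matches policy text against control-family keywords and reports families with no covering policy. The family names follow NIST SP 800-171, which underpins CMMC, but the keyword lists and policy titles are invented for the example, and a real AI assistant would use far richer analysis than keyword matching:

```python
# Illustrative control families (named after NIST SP 800-171 families) with
# invented keyword lists; a real assistant would analyze text far more deeply.
CONTROL_FAMILIES = {
    "Access Control": ["access", "least privilege", "account"],
    "Audit and Accountability": ["audit", "log", "review"],
    "Configuration Management": ["baseline", "configuration", "change control"],
    "Identification and Authentication": ["password", "mfa", "authentication"],
}

policies = {  # invented policy documents for the example
    "Account Management Policy": "Accounts follow least privilege and separation of duties.",
    "Logging Standard": "Audit logs are collected centrally and reviewed weekly.",
}

def map_policies(policies, families):
    """Report which families each policy covers and which remain gaps."""
    coverage = {family: [] for family in families}
    for title, text in policies.items():
        lowered = text.lower()
        for family, keywords in families.items():
            if any(k in lowered for k in keywords):
                coverage[family].append(title)
    return coverage

for family, covering in map_policies(policies, CONTROL_FAMILIES).items():
    print(f"{family}: {', '.join(covering) if covering else 'GAP - no covering policy'}")
```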
Also, when applied to continuous monitoring and anomaly detection, AI-based tools can help detect anomalous network behavior that may indicate malicious activity and monitor compliance to ensure that the controls assessed at certification remain in place. And when applied to risk assessment, these tools can process vulnerability scan data, threat intelligence feeds and configuration data to generate risk-prioritized remediation recommendations. This prioritization directly addresses one of the most common challenges in CMMC programs: knowing which of many gaps to fix first given limited resources.
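Here is a sketch of what risk-prioritized output could look like, assuming a simple scoring formula that weights CVSS severity by asset criticality and active exploitation. The weights and findings are illustrative, not a prescribed methodology:

```python
# Invented findings combining vulnerability scan data, asset criticality and
# threat intelligence; the scoring formula is an assumption for illustration.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "asset_criticality": 3, "exploited": True},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "asset_criticality": 1, "exploited": False},
    {"cve": "CVE-2024-0003", "cvss": 5.3, "asset_criticality": 3, "exploited": False},
]

def priority(finding):
    # Weight CVSS severity by asset criticality (1-3), then double the score
    # when threat intelligence reports active exploitation.
    score = finding["cvss"] * finding["asset_criticality"]
    return score * 2 if finding["exploited"] else score

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f['cve']}: remediation priority {priority(f):.1f}")
```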
Contractors can employ a five-step process for leveraging AI without creating more compliance risk. First, they need to identify every AI tool in their environment, including commercial AI assistants used by employees on work devices, and categorize them by whether they are deployed on-premises, in a private cloud or in a commercial cloud. They must also determine whether the tools can access, process or store CUI.
Second, they must assess whether users can input CUI into each of the tools identified in the environment. If the answer is yes, they must determine whether the tool’s backend is authorized under the government’s FedRAMP program to process CUI.
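A minimal sketch of how the outputs of these first two steps might be recorded and triaged, with tool names and fields invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str            # invented tool names for illustration
    deployment: str      # "on-premises", "private-cloud" or "commercial-cloud"
    can_touch_cui: bool  # can the tool access, process or store CUI?

inventory = [
    AITool("internal-code-assistant", "on-premises", False),
    AITool("commercial-chat-assistant", "commercial-cloud", True),
]

# Step two: commercial-cloud tools that can touch CUI need a FedRAMP check.
for tool in inventory:
    if tool.can_touch_cui and tool.deployment == "commercial-cloud":
        print(f"{tool.name}: verify FedRAMP authorization before any CUI use")
```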
Third, organizations should update their system security plans to document every AI tool identified as an in-scope asset, the security function it performs and how it is managed and controlled. For AI tools determined not to process CUI, they should document the justification and the controls that prevent CUI from entering the tool.
Next, they should establish an acceptable use policy for AI that defines which tools are authorized for use on work systems; which categories of information cannot be entered into any AI tool; the approval process for adding new AI tools to the authorized list; and how violations are reported and addressed. Finally, they should train employees on which AI tools they can use, which information categories cannot be processed by AI tools and why. Abstract policy without context does not change behavior.
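Parts of such a policy can also be enforced technically. Below is a deliberately simple sketch of a pre-submission guard reflecting two of its elements: an allowlist of authorized tools and a check for controlled-information markings. The tool names and the marking pattern are assumptions for illustration; real enforcement would rely on proper data loss prevention tooling rather than a regular expression:

```python
import re

AUTHORIZED_TOOLS = {"approved-internal-assistant"}  # assumption: the approved list

# Deliberately simplistic pattern for controlled-information markings; real
# enforcement belongs in data loss prevention tooling, not a regex.
MARKING_PATTERN = re.compile(r"\b(CUI|FOUO)\b")

def check_submission(tool: str, text: str) -> str:
    if tool not in AUTHORIZED_TOOLS:
        return f"blocked: {tool} is not on the authorized AI tool list"
    if MARKING_PATTERN.search(text):
        return "blocked: text carries controlled-information markings"
    return "allowed"

print(check_submission("public-chatbot", "Draft a meeting agenda."))
print(check_submission("approved-internal-assistant", "Summarize this CUI-marked report."))
print(check_submission("approved-internal-assistant", "Draft a meeting agenda."))
```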
One caveat: despite its usefulness in assisting with CMMC compliance, AI output requires human verification. An AI-produced completeness check is useful, but a human with actual knowledge of the environment must confirm that implementation descriptions accurately reflect technical reality.
AJ Yawn is the governance, risk and compliance advisor at NR Labs.