Five IT security priorities shaping federal procurement in 2026

From AI threats to quantum migration, agencies are moving from frameworks to deadlines — and vendors need to keep up, writes Gina Scinta, deputy chief technology officer of Thales Trusted Cyber Technologies.
The federal government does not issue security guidance quietly. In recent months, NIST, CISA, NSA, GSA, and the Department of Defense have each published new frameworks, primers, and memoranda covering fast-moving IT security trends:
- Artificial intelligence security,
- Post-quantum cryptography migration,
- Zero trust architecture,
- Edge security, and
- Data security posture management.
For vendors, all this guidance is clearly a huge opportunity: Agencies need technology partners that understand the regulatory environment and can map their capabilities to specific requirements. That opportunity, however, demands that companies tailor their product pitches to those specific requirements.
Let’s dive into the top five IT security priorities shaping federal procurement conversations in 2026.
Artificial Intelligence Security
AI security has moved from a buzzword to a budget line item. According to Thales’ Data Threat Report, 70% of IT survey respondents identified the speed of change within AI ecosystems as their most pressing AI security concern — ahead of trustworthiness (58%) and confidentiality concerns (46%).
NIST's preliminary draft Cybersecurity Framework Profile for Artificial Intelligence frames the challenge across three domains: securing AI system components, leveraging AI to enhance cyber defense, and building resilience against AI-enabled attacks. It deliberately extends the existing CSF and Risk Management Framework rather than creating a new compliance structure — a design choice that makes mapping to existing agency documentation more tractable.
The Department of Defense's January 2026 AI memo reinforces an operational tempo that departs from the way government traditionally looks at technology adoption. This new way of thinking — try things, fail fast, learn, iterate — means that agencies must be willing to pilot unproven capabilities, as long as the right security guardrails are in place.
For vendors, this means the conversation is no longer about whether AI will be used in federal environments, but what happens to sensitive data when it enters an AI model, who controls access to that model's outputs, and how the model is protected from adversarial manipulation. The threat from AI is not primarily data theft — it is data manipulation, to corrupt decision-making at scale. To that end, agencies are increasingly interested in solutions that address prompt injection, sensitive data leakage, supply chain risk, and model poisoning.
Post-Quantum Cryptography Migration
Post-quantum cryptography didn’t go away when generative AI took over the headlines. It has always been a longer-cycle problem, but that cycle is compressing. NIST's release of the first three standardized PQC algorithms, followed by GSA's Post-Quantum Cryptography Buyer's Guide and a DoD memo mandating crypto inventory across all department systems, has moved the migration from a theoretical discussion to operational reality.
The threat model here is distinct from AI. Adversaries are harvesting encrypted data today, intending to decrypt it once a cryptographically relevant quantum computer (CRQC) becomes available. Any data protected by asymmetric encryption with long-term sensitivity is already at risk. Agencies that understand this are not asking whether to migrate — they are asking how fast they can realistically move. The answer is: Not very fast. But the length of the road ahead doesn’t diminish the urgency of taking the right first steps.
The migration sequence is well-documented: Risk assessment, crypto discovery and inventory, algorithm evaluation, key management hygiene, and transition of high-risk systems. Another important aspect of migration to post-quantum is to implement solutions that are crypto agile. As far back as May 2024, CISA required civilian agencies to submit a manual crypto inventory. Now agencies are focused on the next phase: automated, continuous discovery. Because cryptographic assets are constantly generated, and deprecated algorithms linger in production long after they should have been retired, a one-time inventory is not enough.
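Crypto agility, in practice, means isolating algorithm choices behind a single interface so a deprecated algorithm can be retired without touching every call site. A minimal sketch of that pattern in Python, using classical hash algorithms as stand-ins for the vetted PQC implementations an agency would actually register (the registry design and algorithm names here are illustrative, not any standard's API):

```python
import hashlib

# Illustrative registry: algorithm name -> digest function.
# In a real deployment, entries would wrap approved PQC libraries,
# not bare hashes.
_ALGORITHMS = {
    "sha256": lambda data: hashlib.sha256(data).hexdigest(),
    "sha3_256": lambda data: hashlib.sha3_256(data).hexdigest(),
}

# Deprecated algorithms stay listed so existing artifacts can still
# be verified, but new protection operations are refused.
_DEPRECATED = {"sha256"}

DEFAULT_ALGORITHM = "sha3_256"

def protect(data: bytes, algorithm: str = DEFAULT_ALGORITHM) -> dict:
    """Produce a digest record tagged with the algorithm that made it."""
    if algorithm in _DEPRECATED:
        raise ValueError(f"{algorithm} is deprecated for new data")
    return {"alg": algorithm, "digest": _ALGORITHMS[algorithm](data)}

def verify(data: bytes, record: dict) -> bool:
    """Verify using whatever algorithm the record was created with."""
    return _ALGORITHMS[record["alg"]](data) == record["digest"]
```

Because call sites only ever name the default, swapping in a new algorithm — or deprecating a broken one — is a one-line change to the registry rather than a fleet-wide code migration. That is the property continuous crypto discovery is meant to verify.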
One requirement vendors should note carefully: The Department of Defense’s November 2025 memo specifies that PQC engagements with department systems now require submission of relevant artifacts to the DoD acquisition office for risk review before proceeding. Vendors who cannot document their PQC implementation in detail will not be in the conversation.
Zero Trust Architecture
Zero trust is practically old news these days. What is new, however, is the shift from architectural aspiration to certification requirement. The Department of Defense has set a hard deadline: all components, defense agencies, and the Defense Industrial Base must achieve target-level zero trust by the end of fiscal year 2027.
The NSA's January 2026 Zero Trust Implementation Guideline Primer formalizes a five-phase framework for reaching target and, eventually, advanced zero trust capability. Documentation for the advanced phases does not yet exist.
Vendors operating across both civilian and defense markets should understand the difference between CISA’s zero trust maturity model and the Department of Defense’s zero trust reference architecture. CISA's model treats visibility, analytics, automation, and orchestration as supporting capabilities for the five core pillars of identity, devices, networks, applications, and data. The Department of Defense, meanwhile, elevates those cross-cutting capabilities to pillars in their own right and emphasizes continuous monitoring and automated response.
Three DoD systems have achieved zero trust certification to date: the Navy's Flank Speed Microsoft 365 environment (target level, October 2024), DISA's Thunderdome (advanced level, all 152 capabilities, April 2025), and Dell's Project Zero (target level). It’s important to note that these should not be treated as templates for success. No single vendor can deliver everything an agency needs to achieve zero trust. The vendors who succeed will have a thorough understanding of where their capabilities fit in the stack, and how to partner with other contractors or with the agencies themselves.
Edge Security
Edge computing in the federal context means something different than it does in the private sector. Forward-deployed military operations, field units in disconnected environments, and mobile command centers without real-time enterprise connections are the public sector's edge — and they pose problems well beyond typical enterprise IT.
Size, weight, and power (SWaP) constraints govern what can be deployed at the edge in federal IT. Bandwidth limitations determine what data can move and when. Solutions must be (and are) operated by people who are not technical specialists, so they need to be easy to support and maintain. And the requirement to function fully offline means that encryption keys, policies, and access controls must travel with the mission, rather than residing in a central data center that may be unreachable from the field.
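The "travel with the mission" requirement can be sketched as a connected/disconnected fallback: refresh the policy-and-key bundle from the enterprise when a link exists, and fall back to the mission-local copy when it does not. A minimal sketch, assuming a hypothetical policy endpoint (`policy.example.mil` and the bundle format are invented for illustration):

```python
import json
import urllib.error
import urllib.request

# Hypothetical central policy service; unreachable from a disconnected edge.
POLICY_SERVICE = "https://policy.example.mil/v1/bundle"

def load_policy_bundle(cache_path: str) -> dict:
    """Fetch policy and key material when connected; use the local copy offline."""
    try:
        with urllib.request.urlopen(POLICY_SERVICE, timeout=2) as resp:
            bundle = json.load(resp)
        with open(cache_path, "w") as f:
            json.dump(bundle, f)        # refresh the mission-local bundle
        return bundle
    except (urllib.error.URLError, OSError):
        with open(cache_path) as f:     # disconnected: trust the cached bundle
            return json.load(f)
```

The key design point is that the offline path is not an error path — it is the normal operating mode, which is why the bundle must be provisioned before the unit leaves connectivity.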
Vendors that want to participate in edge security programs should expect extended evaluation cycles. Their solutions must comply with ruggedized form factor requirements and must withstand close scrutiny of how they handle the transition between connected and disconnected states. Demonstrated interoperability from the edge to the cloud is a requirement, not a differentiator.
Data Security Posture Management
Data Security Posture Management (DSPM) is a trend gaining traction in federal procurement conversations. Rather than protecting data reactively, DSPM provides continuous visibility into where sensitive data lives, who has access to that data, how it is being used, and the security posture of the systems holding it.
A 2025 Cloud Security Alliance survey found that roughly one-third of respondents lacked adequate tooling for data visibility, creating blind spots that undermine proactive risk management. Furthermore, roughly 80% reported low confidence in their ability to identify high-risk data sources.
The distinction between mature DSPM and basic data discovery is the integration of protection and risk intelligence. Discovery tells you where sensitive data is. DSPM tells you what the risk exposure is, enables you to act on that assessment, and continuously monitors for changes. The result is an audit trail that demonstrates ongoing compliance, rather than certification of compliance at a particular point in time.
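The discovery-versus-DSPM distinction above can be illustrated with a toy model: discovery yields locations, while a DSPM layer adds a risk score per asset and flags posture drift between scans. The asset fields and scoring rules below are invented for illustration, not any product's schema:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    location: str
    sensitivity: str        # "low" or "high"
    encrypted: bool
    public_access: bool

def discover() -> list[DataAsset]:
    # Discovery step: a real scanner would crawl data stores; stubbed here.
    return [
        DataAsset("s3://mission-bucket/pii.csv", "high", False, True),
        DataAsset("db://hr/payroll", "high", True, False),
    ]

def risk_score(asset: DataAsset) -> int:
    """Illustrative scoring: sensitive, unencrypted, and exposed is worst."""
    score = 0
    if asset.sensitivity == "high":
        score += 2
    if not asset.encrypted:
        score += 2
    if asset.public_access:
        score += 3
    return score

def posture_drift(previous: dict, current: list[DataAsset]) -> list[str]:
    """Continuous monitoring: report assets whose risk changed since last scan."""
    return [a.location for a in current
            if risk_score(a) != previous.get(a.location, risk_score(a))]
```

Discovery alone stops at the `discover()` output; the scoring and drift functions are what turn a point-in-time inventory into the ongoing audit trail described above.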
For vendors, DSPM is both a product opportunity and a positioning challenge. Those who are able to connect their capabilities to specific compliance requirements will be well-positioned for success as the market matures.
Looking Ahead
What connects these five priorities is a common underlying challenge: The federal government is managing an explosion of sensitive data across an increasingly distributed environment, under mounting regulatory pressure, with adversaries who are more sophisticated and more patient than ever.
The vendors best positioned to serve this market will be those that engage at the level of specific federal guidance, not generic security principles. They need to show that their own product roadmaps are responsive to the evolving threat landscape. The guidance is clear. The timelines are set. The question now is whether the vendor community is truly ready to be part of the conversation.
Gina Scinta is deputy chief technology officer of Thales Trusted Cyber Technologies.