
Last week, the Trump Administration laid out its AI Action Plan, comparing the international competition for AI dominance to the 20th century space race: “Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.” The plan, accompanied by three Executive Orders, lays out a wide range of government-led efforts to “build and maintain vast AI infrastructure and the energy to power it.”

Over the next few years, a multitude of agencies will be responsible for executing on this plan, working with both private sector partners and the academic community. 

The Administration’s AI Action Plan outlines a broad vision for AI innovation and infrastructure. For security leaders, two key themes stand out:

The first theme is speed. The plan's primary goal, and its north star, is to expedite and lead the research, development, and deployment of AI. To get there, it lays out a broad, government-wide effort, including:

  • Removing regulatory barriers that slow down AI adoption;

  • Incentivizing the buildout of data centers and energy infrastructure;

  • Driving the creation of sector-specific AI standards (e.g., healthcare, energy, agriculture);

  • Expanding AI use across the federal government;

And much more.

Like any new and rapidly evolving technology, AI requires additional focus from technologists and security professionals as it is integrated into our digital infrastructure. The combination of rapid adoption, novel AI code, and the growing complexity of our systems poses very real risks to cybersecurity systems and processes.

Wiz estimates that 85 percent of cloud environments already leverage AI technologies. According to our research, organizations are scrambling to manage the security risks that come along with this rapid change. Gartner puts a finer point on mounting AI security threats, asserting that “AI technology usage is increasing risk, and without effective governance and security controls they will have damaging unforeseen impacts on organizations.”  

That brings us to the second theme: security.

The plan goes beyond just promoting AI innovation. It also outlines plans to strengthen the security of AI systems from development through deployment. Notable initiatives include:

Securing Critical AI Systems: 

While the plan recognizes the roles AI can play in cyber defense, it also acknowledges “AI in cyber and critical infrastructure exposes those AI systems to adversarial threats.” For critical infrastructure and safety-critical applications, there is a call for implementing “secure-by-design, robust, and resilient AI systems.” 

This includes the Department of Homeland Security (DHS) establishing an AI Information Sharing and Analysis Center (AI-ISAC) “to promote the sharing of AI-security threat information and intelligence across U.S. critical infrastructure sectors.” The agency is also tasked with issuing guidance on remediating and responding to AI-specific vulnerabilities and threats, as well as sharing data on known AI vulnerabilities.

Protecting AI Innovation:

Key government agencies, from defense and intelligence to civilian, are directed to collaborate with AI developers and the private sector to protect AI innovations from security risks. Additionally, the plan seeks to leverage academia, including an AI hackathon initiative to “test AI systems for transparency, effectiveness, use control, and security vulnerabilities.”

Secure by Design:

The Action Plan asserts that “promoting resilient and secure AI development and deployment should be a core activity of the U.S. government.” This includes building the right development practices for AI technologies and applications, with the Department of Defense and NIST tasked with continued development of AI frameworks, roadmaps, and toolkits.

AI Incident Response: 

Initiatives include establishing frameworks and best practices for private sector AI incident response, updating federal response playbooks, and sharing AI vulnerability information.

Assessments of Risk from Frontier AI Systems:

The plan calls for deeper research into the risks posed by AI systems. That includes potential security vulnerabilities and adversarial use of foreign AI in U.S. infrastructure, the economy, and national security.

Deepfake Protections:

It also acknowledges the growing legal and societal threats posed by deepfakes. The plan aims to combat such materials, which “present novel challenges to the legal system.”

The AI Action Plan and Executive Orders show a clear recognition that rapid innovation must be matched by strong efforts to secure AI systems and protect privacy. The federal government now has an opportunity to lead on defending AI intellectual property, infrastructure, and the broader ecosystems these technologies power. To achieve the vision of AI that “promote[s] human flourishing, economic competitiveness, and national security,” security must be a core part of how AI is developed and deployed.

In March, Wiz responded to the Administration’s request for information as it developed the plan. The Wiz team provided remarks emphasizing AI’s ubiquity, the risks it poses, and the need for robust security practices to mitigate those risks.

We’re encouraged to see secure development and deployment so prominently featured in the plan. As federal agencies release guidance and take action to defend these models, we should all keep a few key foundations in mind:

  • As we have noted in our research, AI security measures should focus on AI in the context of the systems in which it exists, rather than on individual elements in isolation. There has been a historic focus on the models themselves and on model weights, but as AI is infused across complex environments, the attack surface surges. Much like our own health, we need to ensure that the systems that contain AI are healthy and resilient.

  • AI presents novel security challenges, and consensus standards for defending AI systems do not yet exist. In their absence, the federal government should move quickly to adopt AI security posture management and proven best practices. Staying ahead will require continuous visibility into these dynamic environments and the ability to respond as new risks emerge.

  • AI-focused companies, federal contractors, and critical infrastructure providers should be expected to deploy foundational AI cybersecurity measures within both their development and production environments to reduce the attack surface. The Action Plan emphasizes this need, and it’s critical we don’t lose sight of it as innovation accelerates.

As the AI Action Plan progresses, we urge all stakeholders to prioritize AI security across development and deployment. Secure-by-design practices and continuous risk management will help ensure AI’s resilience, enabling safe innovation.