It also builds on previous efforts to promote “secure by design” principles in AI systems and tools.
Listen to this article
0:00
Learn more.
This feature uses an automated voice, which may result in occasional errors in pronunciation, tone, or sentiment.

The Trump administration’s new AI Action Plan calls for companies and governments to lean into the technology when protecting critical infrastructure from cyberattacks.
But it also recognizes that these systems are themselves vulnerable to hacking and manipulation, and calls for industry adoption of “secure by design” technology design standards to limit their attack surfaces.
The White House plan, released Wednesday, calls for critical infrastructure owners — particularly those with “limited financial resources” — to deploy AI tools to protect their information and operational technologies.
“Fortunately, AI systems themselves can be excellent defensive tools,” the plan said. “With continued adoption of AI-enabled cyberdefensive tools, providers of critical infrastructure can stay ahead of emerging threats.”
Over the past year, large language models have shown increasing capacity to write code and conduct certain cybersecurity functions at a much faster rate than humans. But they also leave massive security holes in their code architectures and can be jailbroken or overtaken by other parties through prompt injection and data poisoning attacks, or leak sensitive data by accident.
As such, the administration’s plan builds on a previous initiative by the Cybersecurity and Infrastructure Security Agency under the Biden administration to promote “secure by design” principles for technology and AI vendors. That approach was praised in some quarters for bringing industry together to agree to a set of shared security principles. Others rolled their eyes at the entirely voluntary nature of the commitments, arguing that the approach amounted to a pinky promise from tech companies in lieu of regulation.
The Trump plan states that “all use of AI in safety-critical or homeland security applications should entail the use of secure-by-design, robust, and resilient AI systems that are instrumented to detect performance shifts, and alert to potential malicious activities like data poisoning or adversarial example attacks.”
The plan also recommends the creation of a new AI-Information Sharing and Analysis Center (AI-ISAC) led by the Department of Homeland Security to share threat intelligence on AI-related threats.
“The U.S. government has a responsibility to ensure the AI systems it relies on — particularly for national security applications — are protected against spurious or malicious inputs,” the plan continues. “While much work has been done to advance the field of AI Assurance, promoting resilient and secure AI development and deployment should be a core activity of the U.S. government.”
The plan does not detail how the administration intends to define which entities or systems are “safety-critical” or constitute “homeland security applications.” Nor does it outline how companies or utilities of limited financial means would pay for and maintain AI defensive systems, which are not currently capable of autonomous cybersecurity work without significant human expertise and direction.
The plan proposes no new spending for the endeavor, and other sections are replete with mentions of the administration’s intentions to review and limit or reduce federal AI funding streams to states that don’t share the White House’s broader deregulatory approach.
Grace Gedye, an AI policy analyst for Consumer Reports, said “it’s unclear which state laws will be considered ‘burdensome’ and which federal funds are on the line.”
The plan also calls for the promotion and maturation of the federal government’s ability to respond to active cyber incidents involving AI systems. The National Institute of Standards and Technology will lead an effort to partner with industry and AI companies to build AI-specific guidance into incident response plans, and CISA will modify existing industry guidance to loop agency chief AI officers into discussions on active incidents.
Initial reactions to the plan included business-friendly groups cheering the administration’s deregulatory approach to AI and negative reactions from privacy and digital rights groups, who say the White House’s overall approach will push the AI industry further toward less-constrained, more dangerous and more exploitative models and applications.
Patrick Hedger, director of policy for NetChoice, a trade association for tech companies and online businesses, praised the plan, calling the difference between the Trump and Biden approaches to AI regulation “night and day.”
“The Biden administration did everything it could to command and control the fledgling but critical sector,” Hedger said. “That is a failed model, evident in the lack of a serious tech sector of any kind in the European Union and its tendency to rush to regulate anything that moves. The Trump AI Action Plan, by contrast, is focused on asking where the government can help the private sector, but otherwise, get out of the way.”
Samir Jain, vice president of policy at the Center for Democracy and Technology, said the plan had “some positive elements,” including “an increased focus on the security of AI systems.”
But ultimately, he called the plan “highly unbalanced, focusing too much on promoting the technology while largely failing to address the ways in which it could potentially harm people.”
Daniel Bardenstein, a former CISA official and cyber strategist who led the agency’s AI Bill of Materials initiative, questioned the lack of a larger framework in the action plan for how mass AI adoption will impact security, privacy and misuse by industry.
“The Action Plan talks about innovation, infrastructure, and diplomacy — but where’s the dedicated pillar for security and trust?” Bardenstein said. “That’s a fundamental blind spot.”
The White House plan broadly mirrors a set of principles laid out by Vice President JD Vance in a February speech, when he started off saying he was “not here to talk about AI safety” and likened it to a discipline dedicated to preventing “a grown man or woman from accessing an opinion that the government thinks is misinformation.”
In that speech, Vance made it clear the administration viewed unconstrained support for U.S.-based industry as a key bulwark against the threat of Chinese AI domination. Apart from some issues like ideological bias — where the White House plan takes steps to prevent “Woke AI” — the administration was not interested in tying the hands of industry with AI safety mandates.
That deregulatory posture could undermine any corresponding approach to encourage industry to make AI systems more secure.
“It’s important to remember that AI and privacy is more than one concern,” said Kris Bondi, CEO and co-founder of Mimoto, a startup providing AI-powered identity verification services. “AI has the ability to discover and utilize personal information without regard to impact on privacy or personal rights. Similarly, AI used in advanced cybersecurity technologies may be exploited.”
She noted that “security efforts that rely on surveillance are creating their own version of organizational risks,” and that many organizations will need to hire privacy and security professionals with a background in AI systems.
A separate section on the Federal Trade Commission, meanwhile, calls for a review of all agency investigations, orders, consent decrees and injunctions to ensure they don’t “burden AI innovation.”
That language, Gedye said, could be “interpreted to give free rein to AI developers to create harmful products without any regard for the consequences.”