Promote secure-by-design AI technologies and applications: The plan says the US government “has a responsibility to ensure the AI systems it relies on — particularly for national security applications — are protected against spurious or malicious inputs” and that “promoting resilient and secure AI development and deployment should be a core activity of the US government.” It recommends that DoD, in collaboration with NIST and ODNI, continue refining its responsible AI and generative AI frameworks, roadmaps, and toolkits. It also asks ODNI, in consultation with DoD and CAISI, to publish a standard on AI assurance.

Promote mature federal capacity for AI incident response: The plan asks NIST, including CAISI, to partner with the AI and cybersecurity industries to ensure AI is incorporated into the standards, response frameworks, best practices, and technical capabilities of incident response teams. It further asks CISA to modify its cybersecurity incident and vulnerability response playbooks to incorporate considerations for AI systems, including requirements for chief information security officers (CISOs) to consult with chief AI officers, senior agency officials for privacy, CAISI, and other officials as appropriate.

Assess national security risks: Another key provision calls for working with “American AI developers to enable the private sector to actively protect AI innovations from security risks, including malicious cyber actors, insider threats, and others.” It further asks CAISI, in collaboration with national security agencies, to “evaluate and assess potential security vulnerabilities and malign foreign influence arising from the use of adversaries’ AI systems in critical infrastructure and elsewhere in the American economy, including the possibility of backdoors and other malicious behavior.”