

Artificial intelligence (AI) has rapidly emerged as the double-edged sword of the cyber threat environment. Sophisticated AI models now serve both as potent tools for attackers and as vulnerable points of failure for organizations girding against intrusions. On the offensive side, AI algorithms enable hackers to automate hyper-personalized campaigns and quickly create adaptive malware that evades detection. On the defensive side, organizations increasingly deploy AI for threat detection and response, inadvertently creating new attack surfaces as adversaries find ways to exploit the AI systems themselves. The result: an AI arms race that amplifies cyber risks faster than many legal and security frameworks can adapt.
This article explores the fast-evolving landscape of AI-driven cyber threats, highlighting the distinct legal challenges these advances present. We begin by reviewing the ways AI has accelerated the pace and variety of attacks, even as companies increasingly turn to AI as a shield. We then provide an analysis of Executive Order 14144—a recent U.S. government directive that sets forth new priorities for the development of secure AI—before examining how the order shapes risk management strategies, incident response protocols, and compliance requirements. We conclude by offering practical guidance to business leaders and legal teams for navigating the emerging legal risks of AI-driven cyber threats.
Offensive Asset and Defensive Liability
Cyber attackers are unleashing AI to supercharge their campaigns. AI allows threat actors to automate and scale their operations, accelerating attack timetables far beyond what manually controlled campaigns could achieve.
AI optimizes reconnaissance and targeting, giving hackers the tools to scour public sources, leaked breach data, and social media to build detailed profiles of potential targets in minutes. This enhanced data gathering lets attackers identify high-value victims and network vulnerabilities with unprecedented speed and accuracy. AI has also transformed phishing campaigns by automatically crafting emails and messages that mimic an organization’s formatting and reference real projects or colleagues, making them nearly indistinguishable from genuine human-originated communications. Security researchers have observed a 1,265% increase in phishing emails and a 967% rise in credential-stealing attempts since late 2022, owing to AI’s ability to produce convincing, personalized, and inexpensive lures.
AI has also jumpstarted the creation of deepfakes and synthetic media. Attackers use AI-generated audio and video to impersonate trusted individuals and manipulate victims. A common technique: clone a CEO’s voice or image to instruct an employee to urgently transfer funds or divulge passwords.
AI is also being weaponized to write and adapt malicious code. AI-powered malware can autonomously modify itself to slip past signature-based antivirus defenses, probe for weaknesses, select optimal exploits, and manage its own command-and-control decisions. Security experts note that AI accelerates the malware development cycle, reducing the time from concept to deployment. And AI-enabled ransomware can increase its impact by selecting high-value data to encrypt and can adapt its encryption methods on the fly to evade defenses.
AI presents more than external threats. It has exposed a new category of targets and vulnerabilities, as many organizations now rely on AI models for critical functions, such as authentication systems and network monitoring. These AI systems themselves can be manipulated or sabotaged by adversaries if proper safeguards have not been implemented.
Another risk is adversarial attacks on AI/ML models, where attackers feed malicious inputs or training data to an AI system to corrupt its behavior. This could render a cybersecurity AI tool oblivious to certain malware or cause it to trigger false alarms. Attackers have also experimented with model tampering, directly altering the code or parameters of AI models to degrade their accuracy or plant undetectable back doors. Because many AI algorithms operate as black boxes, it may be difficult to quickly ascertain whether an AI’s outputs are being influenced by an attack or by an internal error.
Companies must treat AI as not only an asset but also a potential liability. AI models need protection through measures like robust validation, adversarial testing, access controls, and monitoring of AI decision outputs for anomalies. AI’s dual role as malicious tool and attractive target makes it a focal point for evolving cyber risk management.
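To make the last of those measures concrete, the sketch below shows one simple way to monitor an AI model’s output scores for anomalous shifts that could signal poisoned data, adversarial inputs, or a tampered model. It is a minimal illustration only, built on the Python standard library with hypothetical thresholds and window sizes, not a substitute for a full model-security program.

```python
"""Minimal sketch: flag anomalous shifts in an AI model's output scores.

Hypothetical thresholds and window sizes; a real deployment would tune these
to the model in question and route alerts into existing security tooling.
"""
from collections import deque
from statistics import mean, pstdev


class OutputMonitor:
    def __init__(self, baseline_size: int = 500, z_threshold: float = 4.0):
        self.baseline = deque(maxlen=baseline_size)  # recent "known-good" scores
        self.z_threshold = z_threshold               # how far from normal counts as suspicious

    def observe(self, score: float) -> bool:
        """Record one model output score; return True if it looks anomalous."""
        if len(self.baseline) < self.baseline.maxlen:
            self.baseline.append(score)              # still building the baseline
            return False
        mu, sigma = mean(self.baseline), pstdev(self.baseline)
        if sigma == 0:
            sigma = 1e-9                             # avoid division by zero on a flat baseline
        if abs(score - mu) / sigma > self.z_threshold:
            return True                              # escalate for review; do not fold into baseline
        self.baseline.append(score)
        return False


if __name__ == "__main__":
    monitor = OutputMonitor(baseline_size=200, z_threshold=4.0)
    # Simulated stream: stable scores, then a sudden shift that could indicate
    # poisoned inputs or a tampered model.
    stream = [0.10 + 0.01 * (i % 5) for i in range(300)] + [0.95] * 5
    alerts = [i for i, s in enumerate(stream) if monitor.observe(s)]
    print(f"Anomalous outputs at positions: {alerts}")
```

In practice, alerts from a monitor like this would feed into existing security operations and incident response workflows for human review rather than trigger automated action on their own.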
The Law Plays Catch-Up
The rise of AI-driven cyber threats raises difficult legal questions that traditional frameworks struggle to answer. The legal tools for determining liability, attribution, and accountability continue to lag behind.
When AI is involved in a cyber incident, liability becomes thorny. Legal experts warn that company boards and officers may face allegations of breaching their fiduciary duties if they fail to implement prudent AI policies. On the flip side, if a business uses an AI system that inadvertently causes harm, the company could face lawsuits for the AI’s actions. AI vendors might also be targets of litigation — copyright holders have already sued AI developers over the training of their models. Current U.S. legislation on AI liability is scant and untested, leaving courts to stretch existing laws to cover AI-related harms. This legal limbo creates uncertainty for businesses deploying AI and those victimized by AI-driven attacks.
Identifying the party responsible for an AI-enhanced cyberattack can also be difficult. Attackers can leverage AI to cover their tracks, leaving minimal forensic evidence. Deepfakes amplify the problem, as pinning down the creator of a fake video or audio is technically and legally complex. The law traditionally requires a “sufficient degree of certainty” to attribute attacks and hold actors accountable, but AI muddies the waters by enabling plausible deniability and identity spoofing at scale. This complicates everything from criminal prosecutions to insurance claims.
Today’s cyber and privacy laws did not envision AI-generated threats. Some safeguards are already in place, but newer phenomena like deepfake impersonation have prompted calls for updated laws. Regulatory frameworks are racing to catch up: guidance like the U.S. National Cybersecurity Strategy urges shifting more responsibility for security failures onto software and AI providers. Until lawmakers implement modernized legal tools that clearly define duties and accountability for AI-related incidents, companies must navigate a patchwork of laws and a high likelihood of fact-specific litigation.
Risk Management, Incident Response, and Compliance Under Executive Order 14144
In recognition of these emerging threats, the U.S. government issued Executive Order 14144 in January 2025, setting federal priorities and requirements that directly address AI in the cyber domain. The EO directs federal agencies to accelerate the development and deployment of AI for cybersecurity purposes, while also placing heavy emphasis on making AI safe and secure by design. It calls for prioritized research into designing secure AI systems and preventing and responding to attacks on and by AI. Companies developing powerful AI models may be required to report test results and vulnerabilities to the government, increasing accountability.
Transparency is another pillar of the Executive Order’s approach. The EO requires the development of standards for authenticating and labeling AI-generated content, making it easier to identify synthetic media and counter deepfakes. It also recognizes the need to adapt legal and regulatory frameworks to AI-enabled threats by instructing federal agencies to assess whether existing laws are sufficient to address AI risks and, if not, to recommend new actions. These efforts suggest that regulators will expect companies to proactively account for AI risks in their security programs or face regulatory scrutiny.
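The mechanics of authenticating and labeling AI-generated content are still being standardized, but the underlying pattern can be illustrated with a deliberately simplified sketch. The example below is hypothetical: it attaches a signed provenance record to generated content using an HMAC shared secret and placeholder names (label_content, SECRET_KEY), whereas emerging industry standards rely on certificate-based signatures and embedded manifests.

```python
"""Hypothetical, simplified sketch of labeling AI-generated content with a
verifiable provenance record. Real labeling standards use certificate-based
signatures and embedded metadata; HMAC with a shared key is used here purely
for illustration."""
import hashlib
import hmac
import json
import time

SECRET_KEY = b"demo-signing-key"  # placeholder; a real system would use managed keys or certificates


def label_content(content: bytes, generator: str) -> dict:
    """Attach a signed provenance record identifying content as AI-generated."""
    record = {
        "generator": generator,
        "created_at": int(time.time()),
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_label(content: bytes, record: dict) -> bool:
    """Check that the label matches the content and has not been altered."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    media = b"synthetic audio bytes ..."
    label = label_content(media, generator="example-voice-model")
    print("label valid:", verify_label(media, label))         # True
    print("tampered:", verify_label(b"edited bytes", label))   # False
```

The point is the pattern rather than the implementation: labeled content carries a verifiable claim about its origin, and downstream recipients can check whether that claim still matches the bytes they received.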
Executive Order 14144 and related federal initiatives have significant implications for how organizations manage cyber risk and meet their legal obligations in the age of AI.
Organizations need to update their risk assessment processes to explicitly consider AI-related threats and vulnerabilities. This means evaluating how AI could be abused against the company and how internal AI systems could fail or be attacked. Incorporating AI into enterprise risk management demonstrates due diligence, which is important if regulators or courts later question whether a company took reasonable precautions.
Traditional incident response plans may not account for scenarios like a deepfake-triggered fraud or an AI model malfunction. EO 14144’s focus on improving “resilience and incident response for AI” signals that companies should prepare for these AI-related incidents. The speed and scale of AI-driven attacks also demand faster detection and containment. Response teams must be trained and equipped to handle the novel attack vectors and failure modes that AI introduces.
While EO 14144 itself applies to federal agencies, its ripple effects will influence industry standards and oversight. Companies that provide software or cloud services to the government may soon be required to attest to secure AI development practices. Critical infrastructure operators might see updated regulations or guidance incorporating the NIST AI Risk Management Framework as a condition of maintaining certain licenses or certifications. Even absent new laws, a lack of transparency or safeguards around AI use could invite regulatory investigations. Emerging standards also stress AI system documentation, bias mitigation, security testing, and governance. Organizations should strive to comply not only with current law but also with these evolving best practices to mitigate legal exposure. In the event of an incident, showing that your company followed recognized AI risk management guidelines can be a strong defense against negligence claims or enforcement actions.
The EO’s emphasis on transparency may translate into expectations that businesses be forthright about their use of AI, especially in sensitive contexts. Companies developing or deploying AI that materially affects customers or critical services should consider providing disclosures about how the AI is used, its limitations, and steps taken to ensure its outputs are accurate and secure. In some sectors, failing to disclose AI-related risks could attract liability. Moreover, if an AI model generates content, organizations should be aware of emerging laws around labeling. Being proactive in identifying AI-generated material and preventing the spread of false information not only aligns with EO 14144’s policy goals but also reduces reputational and legal risks.
Practical Guidance
Facing this complex threat and legal landscape, what practical steps can organizations take?
Revise company policies to address AI explicitly by covering both the use of AI and defense against AI threats. Incorporate AI considerations into vendor contracts and customer terms, clarifying issues of liability and indemnification if AI services fail. Clear internal rules set expectations and demonstrate to regulators that your organization exercises AI governance.
Update cybersecurity awareness training to teach employees how to recognize AI-generated phishing and deepfakes. Technical staff should also be trained on adversarial AI scenarios. An informed workforce can provide an early warning system for AI-driven intrusions that technology alone might miss.
Form a cross-disciplinary AI risk committee or working group—spanning legal, compliance, IT, security, and HR functions—to help ensure comprehensive oversight. Include C-suite leadership so that AI risk receives appropriate priority and resources. By bringing multiple perspectives, the organization can craft holistic strategies that cover prevention, detection, response, and recovery for AI-related risks.
Closely follow developments such as agency guidance, new laws, updates to standards and frameworks, and enforcement trends. Designate someone (or engage outside counsel) to track and analyze these changes. Being proactive is key, as keeping pace with policy changes will allow your organization to anticipate compliance obligations and avoid the scramble of last-minute adjustments. It will also position you to engage constructively with regulators and to offer recommendations that help shape reasonable, effective AI security requirements before they become mandatory.
By updating policies, educating employees, fostering cross-functional governance, and staying current with policy, businesses can reduce their exposure to AI-related legal risks. Organizations that treat AI risks with the gravity they deserve will better protect themselves and contribute to a broader culture of transparency, accountability, and resilience in the AI-driven world.