

The National Institute of Standards and Technology is addressing the security risks of artificial intelligence by building on its well-trodden cybersecurity guidance, as lawmakers also push for NIST to boost AI standards and red teaming guidelines.
NIST announced the development of a “cyber AI profile” earlier this year. The guidance would be based on NIST’s Cybersecurity Framework, which guides many organizations in both the public and private sectors.
Katerina Megas, lead of the “cybersecurity for Internet of Things” program at NIST, said the effort has already drawn critical feedback from chief information security officers and other cyber practitioners about the impact of AI on their work. This month, NIST is hosting workshops on the cyber AI profile.
“What I heard back from the CISO community was, ‘Absolutely, it is something that I’m being asked about. It is something that I’m concerned about. But I really am very busy just managing my day-to-day operations, and I don’t necessarily have time to kind of stop and think about, what am I going to do about this new kind of thing called artificial intelligence, and how it’s like impacting my life?’” Megas explained at the 930gov conference hosted by the Digital Government Institute in Washington last week.
CISOs are particularly interested in how NIST’s various guidelines overlap with the emerging world of AI. But Megas said there has been another clear message from CISOs: “please do not reinvent the wheel.”
“Cybersecurity professionals are already crushed under so much guidance,” she said. “We don’t need yet another thing that we need to bring in and start looking at brand new and training people up on.”
The NIST project has carved out three distinct ways that AI implicates cybersecurity. The first is securing AI systems and components. The second is the adversarial use of AI in the cyber domain. And the third is how CISOs can use AI to advance their cybersecurity measures.
NIST’s cyber AI profile effort is focused on closing the “taxonomy gaps” between AI practitioners and cybersecurity experts, Megas said. The project will help CISOs map different aspects of AI to the Cybersecurity Framework and other NIST guidelines, including the AI Risk Management Framework, which accounts for myriad AI risks beyond just cybersecurity.
The goal is to help CISOs and others understand “the implications of AI on you achieving this cybersecurity outcome,” Megas said.
Following the upcoming workshops, she said NIST will likely publish a preliminary draft of the cyber AI profile for public comment.
AI red teaming
Meanwhile, a bipartisan bill in the Senate would advance NIST’s efforts in the broader arena of evaluating how AI systems are developed and tested. Sens. John Hickenlooper (D-Colo.) and Shelley Moore Capito (R-W.Va.) reintroduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act last week.
The bill would direct NIST to work with the Energy Department and the National Science Foundation on voluntary guidelines for how developers and users of AI systems can conduct internal assurance work, as well as third-party verification and red teaming of AI systems.
“The horse is already out of the barn when it comes to AI. The U.S. should lead in setting sensible guardrails for AI to ensure these innovations are developed responsibly to benefit all Americans as they harness this rapidly growing technology,” Hickenlooper said in a press release.
The bill comes as the Trump administration focuses NIST’s recently rebranded “Center for AI Standards and Innovation” on measuring and evaluating AI models.
“It’s all about understanding the measurement science of models,” Michael Kratsios, director of the Office of Science and Technology Policy, said during an event in Washington last week. “And that is what we’re excited for NIST to be working on, and to be able to share with the world how you actually can measure a model. And once you do that, that’s really valuable to industry. If you’re in financial services and you want to deploy a model and make sure that client or customer data isn’t being siphoned off by the model or whatever, NIST standards around how you do a model eval could be super valuable in you being comfortable in that decision.”
Copyright
© 2025 Federal News Network. All rights reserved.