

The adoption and development of AI is progressing rapidly, but as this technology evolves it also poses significant cybersecurity and data privacy risks to organizations.
One standard has now emerged to support the responsible development and use of AI systems: ISO/IEC 42001, which specifies requirements for an Artificial Intelligence Management System (AIMS).
ISO/IEC 42001 was first published as an international standard at the end of 2023 by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC).
The British Standards Institution (BSI), in its role as the UK’s national standards body, has been offering ISO/IEC 42001 certification since January 2024, and has reported strong interest and rapid uptake.
Meanwhile, global assurance partner LRQA recently launched its own ISO 42001 certification service. Many other firms, including some leading accountancy firms, are also launching AI auditing programs as interest in AI assurance grows.
Shirish Bapat, AI & cybersecurity product leader for LRQA, told Infosecurity, “Interest in ISO 42001 is growing rapidly and is expected to scale significantly over the next 12 months. Over the next two to three years, we anticipate a broad uptake across sectors.”
It is vital that cybersecurity professionals understand the standard and begin work towards achieving certification.
Aims of ISO 42001 and Who It Applies To
The overall aim of ISO 42001 is to guide organizations in responsible development and use of AI.
It outlines requirements and guidelines for establishing, implementing, maintaining and continually improving an AI management system based on the context of an organization.
Therefore, it is applicable both to companies that develop their own AI systems and to those using AI to enhance products, services and internal workflows. It is also industry-agnostic and applies to organizations of any size.
Bapat told Infosecurity, “AI is quickly becoming foundational to how business is done, whether you’re building models or not. Your competitors, customers and partners are adopting AI tools and understanding how these systems work will be critical to staying relevant.”
ISO 42001 provides a clear and structured way to understand and manage the risks, responsibilities and opportunities associated with AI.
The standard focuses on addressing aspects specific to AI such as unwanted bias, fairness, inclusiveness, safety, security, privacy, accountability, explainability and transparency.
Mark Thirlwell, global digital director at BSI, told Infosecurity, “As with all management systems, it takes a risk-based approach and uses a consistent high-level structure with existing management system standards, allowing them to be used together. It enables organizations to apply appropriate controls aligned to their development and/or use of AI, supporting balance between governance and innovation.”
The timeline for achieving the ISO 42001 standard varies between organizations but typically takes between six and 12 months.
Why ISO 42001 is Relevant to Cybersecurity Professionals
AI is another technology that brings cybersecurity and data privacy risks to any organization developing or deploying it.
ISO 42001 focuses on AI lifecycle management, which includes addressing cyber risks.
Thirlwell said, “Cybersecurity practitioners will be called upon to contribute, collaborate and support the implementation and continual improvements that address the associated cyber risks required by ISO/IEC 42001 to help ensure safe and responsible AI deployment and use.”
However, he noted that ISO 42001 is not intended to provide comprehensive cybersecurity and privacy management guidance. Instead, established standards such as the information security management system standard ISO/IEC 27001 should remain the go-to for cyber practitioners and be used alongside ISO/IEC 42001.
Bapat noted, “For cybersecurity professionals, the standard connects familiar principles with the unique risks AI introduces. It’s not just a standard; it’s a strategic lever for trust and resilience in AI adoption.”
Ensure Your Auditor Has the Expertise You Need
The increased interest in ISO 42001 can also be linked to the release of ISO/IEC 42006, which sets out requirements for the bodies that audit and certify organizations against the 42001 standard.
To date, auditors have been working from a draft version of ISO 42006 to certify organizations against the ISO 42001 standard. ISO and IEC are due to officially publish ISO 42006 in July 2025.
Officially titled ‘Information technology — Artificial intelligence — Requirements for bodies providing audit and certification of artificial intelligence management systems’ (ISO/IEC 42006:2025), the new standard sets out additional requirements for bodies that audit and certify AI management systems against ISO/IEC 42001. It is designed to ensure that certification bodies operate with the competence and rigor necessary to assess organizations developing, deploying or offering AI systems.
“The AI audit market has been hotting up. There is a wide range of people wanting to get involved in that market that is being developed. I do think there is a danger it could become the wild west,” Thirlwell said. “Everyone is going to be saying they can accredit against 42001, but it needs to be a stringent process otherwise people are not going to have the comfort they think they have.”
ISO 42006 is the first international standard dedicated to how AI management systems are audited and certified, rather than to the AI systems themselves.
Ultimately, achieving ISO 42001 certification from a certification body accredited against ISO 42006 is beneficial, as it adds a layer of confidence that the auditors have the AI expertise needed to conduct the audit.
How ISO 42001 Can Support Other Regulations and Standards
In the rapidly evolving landscape of AI, staying ahead of regulatory changes is a necessity. Standards like ISO 42001 offer organizations a framework to remain compliant with current regulations and adapt to the demands of future ones.
Thirlwell noted that while the standard is not intended for any specific regulatory or supply chain use case, it does require organizations to consider external factors, including policies, guidelines and decisions from regulators, that can have an impact on AI development and use.
For example, the EU’s Network and Information Systems Directive 2 (NIS2) cybersecurity framework covers critical infrastructure, such as data centers, and recognizes the growing dependence of AI systems on data and the storage required to run models and algorithms.
Other regulations, like the forthcoming UK Cyber Security and Resilience Bill and the EU’s Digital Operational Resilience Act (DORA) for financial services, include requirements around managing growing supply chain risks, Thirlwell said.
How to Get Started with ISO 42001
For organizations considering certification against the ISO 42001 standard, BSI’s Thirlwell outlined the following recommendations on where to begin the process:
- Do some research; for example, listen to on-demand webinars to gain a better understanding of the standard and its potential application
- Determine where your organization is with AI development and use so you know your starting point
- Gain senior leadership commitment – management systems are embedded into the fabric of the organization, so make sure there is buy-in from the top to progress safe and responsible AI adoption
- Get a copy of the standard so you know what you are working toward
- Take a look at a self-assessment checklist; you may already have many of the requirements in place through existing practices (a rough sketch of what such a checklist might look like follows this list)
- Explore training courses to help build in-house knowledge on understanding, implementing and auditing ISO/IEC 42001
- If internal resources are limited, consider external support to review where you are and where you want to go, and to help implement the policies and processes outlined in the standard to get you there
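As a rough illustration of the self-assessment step above, the sketch below shows how a team might track checklist coverage in a simple script. The control areas named in it are hypothetical placeholders for illustration only, not the standard's actual clauses or Annex A controls, which need to be mapped against a licensed copy of ISO/IEC 42001.

```python
# Minimal sketch of an internal ISO/IEC 42001 self-assessment tracker.
# The area names below are illustrative placeholders, not the standard's
# actual clause or control list; map your own items against the published text.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    area: str          # e.g. an AI governance topic the standard addresses
    in_place: bool     # already covered by an existing practice?
    evidence: str = "" # pointer to the policy or process that covers it

def readiness(items: list[ChecklistItem]) -> float:
    """Return the share of checklist areas already covered."""
    return sum(i.in_place for i in items) / len(items) if items else 0.0

checklist = [
    ChecklistItem("AI policy and leadership commitment", True, "Corporate AI policy v1.2"),
    ChecklistItem("AI risk and impact assessment", False),
    ChecklistItem("AI system lifecycle documentation", False),
    ChecklistItem("Data management for AI", True, "Existing ISO 27001 data controls"),
]

print(f"Self-assessment coverage: {readiness(checklist):.0%}")
# Uncovered areas highlight where new policies or processes are needed before an audit.
```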
“Think about whether certification is your end goal – it is a great way to gain external validation that you have an AI management system aligned to this internationally recognized standard,” Thirlwell concluded.