21 July 2025 – New guidance has been published to support the burgeoning AI audit market and ensure businesses can be confident that those assessing AI governance are doing so in a clear, consistent and coherent manner. The publication by BSI, the UK’s national standards body, is designed to protect against a ‘wild west’ of unchecked providers by enabling regulators, customers, and investors to differentiate credible AI governance implementations.
Having previously published the first AI management standard in late 2023 (BS ISO/IEC 42001:2023), and certified businesses to this including KPMG Australia, BSI has now released the world’s first international standard designed specifically for certifying bodies that independently audit AI management systems.
Information technology — Artificial intelligence — Requirements for bodies providing audit and certification of artificial intelligence management systems (BS ISO/IEC 42006:2025) offers a tool to ensure that organizations seeking BS ISO/IEC 42001 certification can rely on accredited auditors with standardized competencies and sufficient rigour.
As the global debate about AI regulation continues, following the introduction of the EU AI Act and governance mandates, and with diverging approaches on the horizon in other countries, AI assurance is increasingly recognized as critical to responsible adoption. Leading accountancy firms, including the Big Four, are understood to be following in BSI’s footsteps by launching AI auditing programmes in response to growing market demand for transparency. Yet to operate effectively, certification bodies require standardized methodologies for assessing AI systems consistently across jurisdictions.
Unlike broader AI governance frameworks, BS ISO/IEC 42006 is the first international standard dedicated to the bodies that certify AI management systems, rather than to the AI systems themselves. The standard establishes robust governance mechanisms for auditors assessing compliance with BS ISO/IEC 42001. This additional layer of verification is designed to strengthen trust in AI governance certifications and ensure consistency among certifying bodies.
The standard responds to urgent industry challenges, including the shortage of qualified auditors in the AI market. Hundreds of firms in the UK offer AI assurance services, many provided by AI developers themselves, raising concerns about independence and rigour. BS ISO/IEC 42006 introduces clear competency frameworks for auditors.
BSI convened UK experts who contributed to the development of BS ISO/IEC 42006, reinforcing the UK’s role as a champion for ethical and principles-based AI certification. The standard has been, and will continue to be, referenced by accreditation bodies and conformity assessment organizations to support trustworthy AI assurance practices, helping ensure compliance with evolving global regulatory frameworks.
Mark Thirlwell, Global Digital Director at BSI, said: “As companies race to provide AI audit services, there is a risk of a ‘wild west’ of unchecked providers and the potential for radically different levels of assessment. Businesses need to be sure that when their AI management system is being assessed, it is being done in a robust, coherent and consistent manner. Only this will build much-needed confidence in a safe, secure AI ecosystem.
“The new guidance, BS ISO/IEC 42006, represents a crucial milestone in global AI accountability by setting clear certification requirements. This standard will enable regulators, customers, and investors to differentiate credible AI governance implementations from unchecked claims, supporting responsible AI innovation and paving the way for AI to be a force for good.”
For further information or to purchase the standard, visit: https://knowledge.bsigroup.com/products/information-technology-artificial-intelligence-requirements-for-bodies-providing-audit-and-certification-of-artificial-intelligence-management-systems