24 September 2025 – A new international technical specification designed to bring greater transparency to artificial intelligence (AI) systems and in doing so protect users from unexpected impacts has been published by BSI, the UK’s national standards body.
The specification, Making AI Systems Decision-Making Transparent (ISO/IEC TS 6254), provides guidance on explainable AI (XAI) — the methods and techniques that help ensure AI outputs and decisions are understandable to humans. It follows research by BSI last year which found that 57% of people felt that vulnerable people need protections to ensure they can benefit from AI, and 62% wanted a standard way of flagging concerns, issues or inaccuracies with AI tools so they can be addressed.
As AI increasingly underpins critical decision-making in sectors such as healthcare, finance and criminal justice, explainability has an essential role to play in building public trust and ensuring ethical outcomes from the use of AI. The new standard offers organizations a practical framework to embed transparency into their AI systems, ensuring decisions can be clearly and meaningfully explained to those impacted — whether a clinician, financial advisor, policymaker, patient or individual user.
Across key sectors, the standard aims to deliver measurable benefits. In healthcare, it supports transparent diagnoses and treatment planning; in finance, it improves the clarity of credit decisions and fraud detection; and in the justice system, it enables auditable and accountable predictive tools. For organizations, it aims to foster trust among stakeholders, enhance compliance with regulations and societal expectations, and reduce the risk of biased, unfair, or harmful outcomes throughout the AI system lifecycle. Embedding the principle of transparency throughout that lifecycle is the best way to build trust among users and developers: AI systems should not be seen as black boxes but as accountable tools.
The specification was developed through global industry collaboration and informed by real-world use cases. The value of explainable AI is already being demonstrated by tools such as Google’s What-If Tool, which allows users to visualize model behaviour, and IBM’s AI Fairness 360 toolkit, which is intended to help improve fairness and transparency.
ISO/IEC TS 6254 aims to build on this momentum by providing a consistent, globally-recognized foundation for designing and evaluating transparent AI systems. It is also intended to complement policy initiatives like the UK Government’s Artificial Intelligence Playbook, which calls for AI to be “as explainable as possible.” The UK market for trustworthy AI is projected to grow six-fold[1] over the next decade, reflecting the urgent need for robust, transparent, and accountable AI systems.
David Cuckow, Director of Digital, BSI, said: “As AI continues to shape high-stakes decisions across our society, from healthcare and financial services to the justice system, transparency is no longer optional—it’s essential, and critical to AI being a force for good in society. This new specification is designed to provide a vital framework to ensure AI decisions can be understood, scrutinized, and trusted. It marks a significant step toward building ethical, responsible AI systems that serve people and society fairly.”
The launch of this specification is the latest stage in BSI’s work to build trust in AI. It provides more detailed guidance to complement BS ISO/IEC 42001 (Information technology. Artificial intelligence. Management system), which was published by BSI in late 2023. That standard assists organizations in using AI responsibly, addressing considerations such as non-transparent automated decision-making, the use of machine learning rather than human-coded logic in system design, and continuous learning. A number of major organizations, including KPMG Australia, have now certified to this standard.
For further information or to purchase the standard, please visit https://knowledge.bsigroup.com/products/information-technology-artificial-intelligence-objectives-and-approaches-for-explainability-and-interpretability-of-machine-learning-ml-models-and-artificial-intelligence-ai-systems