BSI supports responsible AI management with new package of measures

  • BSI unveils programme to empower organizations to safely integrate AI
  • Package of measures takes a step toward closing the AI confidence gap, after 61% (vs 64% in Australia) called for global guidelines

17 January 2024: A new package of measures, including certification to a new management system designed to enable the safe, secure and responsible use of Artificial Intelligence (AI) across society, is being launched by BSI, following research showing 61% (vs 64% in Australia) want global guidelines for the technology.

The scheme, aligned to the recently published international AI management system standard (BS ISO/IEC 42001), is intended to help organizations use AI responsibly, addressing considerations such as non-transparent automated decision-making, the use of machine learning rather than human-coded logic in system design, and continuous learning.

Susan Taylor Martin, CEO, BSI said: “AI is a transformational technology. For it to be a powerful force for good, trust is critical. This is an important step in empowering organizations to responsibly manage the technology, which in turn offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world. BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”

The new package builds on BSI’s portfolio of AI services intended to help shape trust in AI, including AI training courses to equip individuals and organizations with the knowledge and skills necessary to navigate the complex landscape of AI standards and regulations. In this rapidly evolving field, understanding the ethical, legal, and compliance aspects of AI is essential for responsible and sustainable deployment.

Algorithm testing is of paramount importance as it directly impacts the reliability, accuracy, and performance of AI systems. AI algorithms, such as machine learning models, deep neural networks, and natural language processing, underpin the decision-making processes of AI applications. BSI’s rigorous testing is essential to validate these algorithms’ correctness and efficiency, ensuring they produce trustworthy results and perform optimally in real-world scenarios.
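BSI's actual testing methodology is not described in this release; purely as an illustration of the kind of validation the paragraph above refers to, a model can be checked against held-out labelled data and required to meet an accuracy threshold before deployment. The classifier and data below are toy stand-ins, not a real BSI assessment:

```python
# Hypothetical sketch: validating an "algorithm" against held-out data.
# The "model" is a stand-in threshold rule; a real assessment would test
# a trained model against a representative benchmark set.

def predict(x: float) -> int:
    """Toy classifier: label 1 if the input exceeds 0.5, else 0."""
    return 1 if x > 0.5 else 0

def accuracy(samples: list[tuple[float, int]]) -> float:
    """Fraction of held-out samples the model labels correctly."""
    correct = sum(1 for x, label in samples if predict(x) == label)
    return correct / len(samples)

# Held-out test set: (input, expected label) pairs.
holdout = [(0.9, 1), (0.2, 0), (0.7, 1), (0.4, 0), (0.8, 1)]

score = accuracy(holdout)
print(f"holdout accuracy: {score:.2f}")
assert score >= 0.95, "model fails the validation threshold"
```

The final assertion is the essential step: a system that cannot demonstrate agreed accuracy on unseen data should not be certified as performing reliably in real-world scenarios.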

BSI also offers AI Excellence Benchmark Assessments for organizations seeking to ensure their AI technologies are used responsibly and ethically. Assessments are designed to foster responsible AI practices, positioning companies for success in the AI-driven future.

BSI is progressing towards becoming a notified body for AI products that require such oversight, in the wake of progress on the EU AI Act, as well as providing services to manufacturers and software providers proactively seeking AI Excellence Benchmark assessments of their AI-enabled products and AI management systems.

Charlene Loo, Managing Director, BSI Australia, said: “As we seek to expand our AI horizons, whether in medical devices and healthcare, transport, the built environment or any other sector, it’s critical that we complement innovation and progress with safe and ethical deployment. I am delighted that BSI is developing a comprehensive package of training and oversight aligned to the ground-breaking new AI management standard to support organizations to make the most of innovation and ensure it is a force for good for society.”

BSI’s recent Trust in AI Poll of 10,000 adults across nine countries found three fifths globally wanted international guidelines to enable the safe use of AI. Nearly two fifths globally (38% vs 23% in Australia) already use AI every day at work, while more than three fifths (62% vs 56% in Australia) expect their industries to do so by 2030. The research found that closing the ‘AI confidence gap’ and building trust in the technology is key to powering its benefits for society and planet.

Find out more about BSI’s AI services here.