The invisible thread: How global collaboration and innovation can create a framework to build greater trust in AI

In this blog, Sahar Danesh, Senior Government Engagement Manager, sets out how international collaboration can shape trust in AI so everyone can realize the benefits of innovation.

Collective need for trust  

As the uses for AI multiply, they raise as many questions as answers. To address these questions, and to help ensure that AI's application benefits society, governments worldwide are racing to develop and implement robust governance. In this context, international guidance on AI management is being developed, offering an opportunity to draw on consensus-based standards to create a framework designed to build greater trust.

Indeed, according to BSI’s Trust in AI Poll[1]:

  • 61% of people globally say we need international guidelines to enable the safe use of AI and only one in ten think these are not necessary.
  • 57% of people globally felt that vulnerable consumers need protections to ensure they can benefit from AI.

Compliance with international standards has the potential to act as an invisible thread. Sustainable finance offers one example: an international standard, developed from a UK starting point, now embeds global consistency, and similar work is underway on consumer vulnerability. Today, BSI, as the UK's national standards body, is partnering with the UK Government to utilize standards in support of the UK's AI strategy.

The regulatory landscape

Work is underway as governments think about how organizations will use AI and the guardrails that can be put in place. The fact that countries are considering the regulatory approach indicates a desire to respond to AI with speed; a contrast to the slow global policy response to social media[2]. It suggests a recognition of the potential impact AI can have on society and a desire to act in response.

But no country can influence how another country chooses to regulate, and there is certainly no guarantee of consistency across borders. This is why international standards can be so beneficial: they build consensus between countries, creating a shared understanding of what good practice looks like. An internationally agreed view could help ensure that AI is trustworthy, that no individuals, organizations, or countries are left behind in the AI revolution, and that AI systems are indeed a force for good.

The hidden infrastructure

When it comes to rapidly emerging technology, standards and certification can act as the hidden infrastructure, with clear guiding principles designed to ensure innovation is safe and used properly. And because standards are dynamic and built on consensus of what good looks like, they offer the opportunity to influence behaviours across organizations.  

Gathering consensus in this way offers the opportunity for us to tackle global issues such as the race to net zero, where a suite of standards has been created to help organizations navigate the road to decarbonization. Creating the ISO Net Zero Guidelines involved hundreds of people from over 100 countries, including many new or under-represented voices.

In cybersecurity, international standards and certification such as the international standard to manage information security (ISO/IEC 27001) are commonly used to mitigate risk[3]. Such guidance is designed to ensure that what is on the market is safe and to help organizations implement better technological solutions.

In BSI’s poll, nearly three-fifths of people globally said they felt vulnerable consumers need protections to ensure they can benefit from AI. When markets fail or there is a risk to consumers, regulation can be essential. And regulation certainly has a role to play with AI, for example, so it isn’t used to take advantage of human behaviour. But the advantage standards have over regulation is that they are shaped through global consensus and consider the concerns of society with a level of transparency that can accelerate good practice.

Global AI consensus is underway

While building consensus can take time, in the case of AI we are not starting from scratch. The forthcoming AI Management Standard (ISO/IEC 42001) draws on existing guidelines, and there are already many standards around trustworthiness, bias, and consumer inclusion that organizations can use to inform their practice. Given the speed of change in AI, the agility of standards, and organizations’ ability to apply them, is critical.

History has shown that emerging technologies – from cars to computers and smartphones – can bring enormous benefits. Having robust governance frameworks in place can help accelerate adoption and build greater trust. Partnering globally to agree on what good practice looks like when we use AI can help ensure that AI benefits, rather than disrupts, society and becomes a force for good.

 

This content is from BSI’s Shaping Society 5.0 campaign. Download Sahar’s full essay here or access others in the collection here.  


Sahar Danesh, Senior Government Engagement Manager, BSI

Sahar leads on BSI’s engagement with the UK and Devolved Governments on digital and tech policy and helps identify opportunities where standardization can deliver benefits for UK businesses and societal stakeholders. She is part of the delivery team of the UK’s AI Standards Hub, and works alongside government and partners to help bring together UK expertise to identify what good practice should look like in the development of AI technologies.

[1] BSI partnered with Censuswide to survey 10,144 adults across nine markets (Australia, China, France, Germany, India, Japan, Netherlands, UK, and US) between 23rd and 29th August 2023

[2] Congress, Far from 'A Series of Tubes,' Is Still Nowhere Near Reining in Tech, New York Times, December 2021

[3] ISO 27001 certification figures increase by 20%, IT Governance, September 2017