Artificial Intelligence (AI) is being talked about everywhere. Will it help us or hurt us? Will it unlock human potential or replace our jobs with robots? The first wave of AI technology is already established. Machines are already making decisions that affect our lives. How can we be assured that those decisions are being made properly? Even if the software algorithms are processing datasets that are comprehensive, unbiased and accurate, how can we be assured the outcome is ethical or legal?
Countries all over the world are working out what AI might mean for them. Concerns revolve around three primary areas: how to build trust and ensure public acceptance of AI, how to stimulate more research and development, and what regulatory framework is needed to protect people and ensure ethical behaviour.
Standards have a key role to play. The principles that underpin BSI’s work are open public consultation, full stakeholder engagement and consensus. Ultimately, as the UK Standards Organization, BSI is here to identify the areas where government, industry and consumers need to develop consensus standards, and to ensure the mechanisms are in place for that to happen.
The first British Standard on the ethical design of robots, BS 8611, was published in 2016. It gives guidelines for software engineers on how to identify potential ethical harm in the design and application of robots and autonomous systems. In terms of AI more broadly, BSI formed its first committee dedicated to AI two years ago, bringing together a broad range of stakeholders who could agree on best practice and together develop new standards for industry that would respect the public interest. The focus of the committee is moving from broad technical questions to topics such as governance and bias.
Our focus is on international standards developed in ISO and IEC but, to widen the debate, BSI and the US standards organisation IEEE have set up the OCEANIS initiative with other leading standards bodies around the world. We see this as an open community, a high-level global collaborative forum for organisations interested in standards that address ethics in autonomous and intelligent systems.
The UK could and should be a global leader in AI governance and ethics. The work of the Alan Turing Institute and the creation of the Centre for Data Ethics and Innovation last year reflect government commitment, but more needs to be done if the UK is to take a leadership role in the global AI revolution. I believe that the UK can and should shape the international standards that will govern the ethical performance of AI products and services. The UK must be a global standards maker, not a standards taker, in AI.
Standards are the fastest way to shape best practice in the governance and ethics of AI and build trust for people faced with AI wherever they meet it – in the workplace, at home, abroad, in the healthcare system, the justice system, indeed potentially all public services. Through BSI, UK experts already directly influence thousands of international standards used worldwide by business and industry and adopted by governments to support regulation. AI standards championed from the UK are the obvious next step.
You can read more about BSI’s activities on AI in the March edition of Standards Outlook that is dedicated to innovation and the role standards can play in underpinning emerging technologies.