AI: Accrediting Intelligence

Ever since Alan Turing posed the question of whether machines can think back in the 1950s, Artificial Intelligence (AI) has been billed as potentially one of the most significant and pervasive general-purpose technologies, sitting alongside the likes of electrification, the internal combustion engine and computing.  The accelerating pace of development in the sector means that, rather than being the next big thing, AI is in reality the now big thing.

Whilst AI’s potential for societal, technological, health and business enhancement is vast, it usually hits the headlines for the wrong reasons.  Whether it’s interference in elections, unlawful use of facial recognition technology, deaths caused by autonomous vehicles or the threat to human employment, the focus on AI’s potential for harm is largely born of a lack of trust, particularly in the ‘ethics’ of these systems.

Rather than seeing AI as the machines taking over, perhaps the first step is to view it as a tool that delivers valuable insights to support human decision-making.  However, even if we are comfortable defining AI’s role, we still need confidence that any AI system is reliable, impartial, accountable, secure and safe, and that it reflects society’s values, if we are to place any trust in its output.

As highlighted in a recent report by IBM’s AI division, building AI systems for performance is not a sufficient design paradigm.  We must learn how to build, evaluate and monitor for trust.

Governments, industries and regulators across the world have recognised this need and begun taking steps to establish a system of oversight and guidance to help deliver this trust.  As part of its remit to build better policies for better lives, the Organisation for Economic Co-operation and Development (OECD) has developed Principles on Artificial Intelligence that aim to promote AI systems that are innovative, trustworthy and respect both human rights and democratic values.  These became the first such principles to be signed up to by governments when they were adopted by all OECD member countries, along with several non-member countries, in May 2019.  The OECD AI principles also formed the basis of the human-centred AI principles adopted by the G20 the following month.

In addition to laying out its main value-based principles, the OECD recommends that governments ensure a policy environment that opens the way for the deployment of trustworthy AI systems, including through cooperation across international borders and industry sectors.

The UK government has established the Centre for Data Ethics and Innovation (CDEI), which recently published its first three snapshot papers on ethical issues in AI: deepfakes, smart speakers and AI in insurance.  Similarly, the Committee on Standards in Public Life is undertaking a review of AI’s impact on standards across the public sector.

In both academia and industry, AI research groups have been established and reports published.  Notable amongst these is the series of AI and Governance insight papers produced by the United Nations University Centre for Policy Research (UNU CPR).

A common theme to emerge from all these separate sources is the need for standards covering the development, implementation and use of AI systems.  In his UNU CPR article and research report, Peter Cohen, a leading AI academic, argues that international standards-setting bodies can complement emerging AI governance efforts by addressing its three main challenges: achieving buy-in from governments, ensuring non-governmental participation, and remaining flexible enough to adapt to rapidly changing technology.

As illustrated above, the first hurdle, achieving government buy-in, has largely been cleared with the worldwide adoption of the OECD Principles on Artificial Intelligence.  In addition, the US government is developing a strategy around standards in AI, whilst its Australian counterpart is exploring the best way to adopt a standards-based approach to AI.

Standards are usually adopted on a voluntary basis and have a long track record of garnering participation from both consumers and a wide range of industries.  Standards can also be developed more quickly, and with a greater degree of flexibility, than regulation.  Having formed its first committee dedicated to AI in 2017, the British Standards Institution (BSI) is a founder member of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS).  Established as a high-level global discussion forum for organisations interested in the use of standards to further AI development, OCEANIS has created the world’s first centralised and transparent notification system for AI standards development.  The work of developing the AI standards themselves is being conducted by all key stakeholders through internationally recognised standards bodies such as the IEEE and the ISO/IEC Joint Technical Committee on standardisation in AI.

Standards alone, though, are not sufficient to generate trust: it is all very well to say that a product or service complies with a standard, but how do you prove it?  For standards to be most effective, it needs to be easy to recognise whether a product, service or process conforms to them.  Conformity assessment provides an unbiased way to demonstrate whether a product, service or process meets the relevant requirements.  In turn, accreditation is internationally recognised as a robust, independent declaration of an organisation’s competence, the validity and suitability of its methods, the appropriateness of its equipment and facilities, and its ongoing assurance through internal quality control.

Together, standards, certification and accreditation form the pillars of assurance for governments, businesses and end users alike.  The ever-increasing demands on governments to do more for consumer protection, combined with limited resources, have led to a growing reliance on accredited conformity assessment.  This is particularly true for rapidly developing areas such as AI, and one of the key strengths of accreditation is that it can be applied to almost any industry sector.

In addition to standards written specifically for AI, other standards currently under development could be applied to it, with the recently published ISO/IEC 17029 (general principles and requirements for validation and verification bodies) being one example.  UKAS is developing programmes and training packages to provide accreditation for schemes under ISO/IEC 17029 and will be seeking expressions of interest and pilot participants as those schemes become available.

UKAS is currently conducting a survey into the impacts of the Fourth Industrial Revolution and we’d love to hear your views.  To complete this five-minute survey, click here.