Building the future of artificial intelligence assurance: UKAS and the AIQI Consortium

As artificial intelligence (AI) reshapes industries and societies, the need for robust, internationally recognised assurance frameworks has never been more critical. In an era where AI systems influence everything from healthcare diagnostics to financial services and autonomous vehicles, ensuring these technologies are trustworthy, ethical and reliable is not just a technical challenge; it is a fundamental requirement for public confidence and economic prosperity.

This imperative has driven UKAS to exercise its unique convening power, bringing together quality infrastructure organisations from around the world to establish the AI Quality Infrastructure (AIQI) Consortium. Born from the Walbrook AI Accord and developed in partnership with the City of London and the TIC Council, the AIQI Consortium represents a landmark collaborative effort to shape the future of AI assurance.

UKAS’s convening power in action

Throughout its 30-year history, UKAS has consistently demonstrated an exceptional ability to bring diverse stakeholders together to address complex technical challenges. The establishment of the AIQI Consortium exemplifies this convening power at its most sophisticated – uniting accreditation bodies, conformity assessment bodies, standards organisations and research institutes under a shared vision for AI assurance.

The Consortium’s formation reflects UKAS’s commitment to its public interest mission: supporting businesses and consumers through the safe, secure and ethical adoption of AI technologies across the market. By leveraging relationships built over three decades, UKAS has created a platform where different perspectives, regulatory approaches and technical expertise can converge to address one of the most significant technological opportunities of our time.

A collaborative framework for global AI assurance

The Consortium’s collaborative approach recognises that AI assurance cannot be achieved in isolation. The technologies we seek to assure are inherently global: AI systems trained in one country may be deployed worldwide, algorithms developed by multinational teams may affect citizens across continents, and data flows transcend borders. Assurance frameworks must reflect this reality.

The Consortium promotes ISO/IEC 42001, the international standard for AI management systems, as a foundational element of trustworthy AI deployment. Our ambitions extend further: harmonising approaches to AI testing, supporting mutual recognition frameworks for AI certifications, and developing practical guidance for organisations.

This collaborative model enables smaller nations and organisations to benefit from shared expertise, while ensuring that diverse regulatory contexts and cultural values are reflected in our collective approach.

AI assurance that utilises the global quality infrastructure – through international standards and globally recognised means of assuring that they are met – will help avoid the fragmentation that would result from purely national approaches.

Addressing tomorrow’s challenges today

The pace of AI development means that assurance frameworks must be both rigorous and agile. The AIQI Consortium is developing forward-looking approaches that can adapt to emerging technologies while maintaining the technical rigour of accreditation and conformity assessment.

Our work encompasses several critical areas: establishing skills frameworks for AI assessors; helping policymakers understand how standards and accredited conformity assessment can be used in an AI context; and examining how AI itself can be deployed within conformity assessment activity.

The Consortium serves as a living laboratory where different approaches can be tested, refined and shared. This collaborative experimentation is essential as we navigate uncharted territory in AI assurance, learning from both successes and failures to build more robust frameworks.

Building trust through international cooperation

Public trust in AI systems depends on the credibility of the assurance frameworks that validate them. That credibility rests on the consensus-based, multi-stakeholder standards development process underpinning these frameworks, which demonstrates that AI assurance standards emerge from genuine international cooperation. Without common standards and assurance, organisations must navigate a patchwork of incompatible national frameworks. The Consortium instead promotes approaches that facilitate cross-border recognition and reduce unnecessary duplication of assessments.

UKAS at 30: leading into the next decade

As UKAS marks its 30th anniversary, the Consortium represents both a culmination of its convening expertise and a foundation for the next chapter. It embodies a commitment to stay ahead of technological developments, proactively positioning accredited conformity assessment as a foundation of AI assurance.

Through the Consortium, UKAS is not just responding to the challenges of AI; it is actively shaping the future of how the world approaches AI assurance, ensuring that as powerful technologies transform our societies, they do so in ways that are trustworthy, ethical and to the benefit of all.