Assurance as a service originated in the accounting profession, but has since been adapted to cover many other areas, such as cyber security and quality management. In these areas, mature ecosystems of assurance products and services enable people to understand whether systems are trustworthy. These products and services include: process and technical standards; repeatable audits; certification schemes; and advisory and training services. For example, in financial accounting, auditing services provided by independent accountancy firms enable an assurance user to have confidence in the trustworthiness of the financial information presented by a company.

Assurance services can provide the ‘infrastructure’ for checking and verification needed to evaluate and communicate reliable evidence about the trustworthiness of AI systems, against standards, regulations and other principles or guidelines. This enables other actors in the ecosystem to build justified trust in the development and use of these systems.

AI assurance services have the potential to play a distinctive and important role within AI governance. It is not enough to set out standards and rules about how we expect AI systems to be used; we also need trustworthy information about whether those standards and rules are actually being followed.

It is also important that compliance with these rules and regulations can be checked, both by organisations using these systems and by the broader stakeholders affected by their use. Assurance matters both for demonstrating compliance with rules and regulations and for assessing more open-ended risks, where rules and regulations alone do not provide sufficient guidance to ensure that a system is trustworthy: for example, assessing whether an individual decision made by an AI system is ‘fair’ in a specific context. Consensus-based technical standards will play an important role here, filling gaps from a non-regulatory perspective and providing guidance on mitigating risks from a technical standpoint.

A mature AI assurance ecosystem is needed to coordinate appropriate responsibilities, assurance services, standards and regulations, so that those who need to trust AI have the evidence required to justify that trust.

By ensuring both trust in and the trustworthiness of AI systems, AI assurance will play an important enabling role in the development and deployment of AI, unlocking both the economic and social benefits of AI systems. Consumer trust in AI systems is crucial to widespread adoption, and trustworthiness is essential if systems are to perform as expected and therefore deliver the benefits we want without causing unexpected harm. Just as professional services have emerged to support other areas, from traditional accounting to cyber security, we can expect a significant market to develop around AI assurance.

For example, the UK’s cyber security industry employed 43,000 full-time workers and contributed nearly £4bn to the UK economy in 2019, according to DCMS. More recently, research commissioned by the Open Data Institute (ODI) on the nascent but buoyant data assurance market found that 890 data assurance firms are now working in the UK, employing 30,000 staff. The research, carried out by Frontier Economics and glass.ai, noted that 58% of these firms were incorporated in the last 10 years. AI assurance is likely to become a significant economic activity in its own right, and it is an area in which the UK, with particular strengths in legal and professional services, has the potential to excel.

An assurance ecosystem is beginning to emerge for AI, but it is currently immature. AI assurance offers a number of approaches for actors across the AI supply chain to reliably assess, verify and communicate the trustworthiness of AI systems, allowing others in the ecosystem to build justified trust.

A challenge for developing effective AI assurance is that AI covers a broad range of complex, emerging technologies deployed in widely varying contexts. Across these contexts, a number of aspects of an AI system need to be assured for the whole system to be trustworthy. These aspects include the system’s robustness, accuracy, bias and fairness, societal and human rights impacts, data quality, intended use, and management processes and controls.

To assure AI systems effectively, we will need a toolbox of different products, services and standards suited to assessing these different aspects. For example, we might expect to assure the robustness of AI systems in a more objective and standardisable way than we assure inherently subjective aspects of automated decision-making, such as fairness.
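To make this contrast concrete, the sketch below is one illustrative way the two kinds of assessment might look in practice. It is a minimal sketch only: the function names are our own, a scikit-learn-style ‘predict’ interface is assumed, and the Gaussian perturbation model is a simplifying assumption rather than anything prescribed by an existing standard.

```python
import numpy as np

def robustness_score(model, X, noise_std=0.1, n_trials=10):
    """Fraction of predictions that stay unchanged under small random
    input perturbations. Once the perturbation model is agreed (here,
    a hypothetical choice of Gaussian noise with a fixed standard
    deviation), the metric is objective and repeatable across auditors."""
    base = model.predict(X)
    stable = np.zeros(len(X))
    for _ in range(n_trials):
        perturbed = X + np.random.normal(0.0, noise_std, X.shape)
        stable += (model.predict(perturbed) == base)
    return float((stable / n_trials).mean())

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-outcome rates between groups.
    The arithmetic is simple, but whether a given gap is 'unfair' is a
    normative judgement: which groups matter, which outcomes count as
    favourable and what gap is tolerable all depend on context."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))
```

The point is not these particular metrics. A standards body could plausibly fix a perturbation model and an acceptance threshold for the first measure, whereas the second can be computed uniformly yet still has to be interpreted case by case, which is why fairness assurance will continue to require contextual judgement alongside technical tools.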