CDEI publishes its Barometer of the opportunities & risks of AI

The UK’s Centre for Data Ethics and Innovation (CDEI) has published its AI Barometer, a major analysis of the most pressing opportunities, risks and governance challenges associated with AI and data use in the UK, initially across five sectors. The AI Barometer draws on the expertise of more than 120 panellists from industry, academia, civil society and government.

This is a system-wide view of how AI and data are being used in five key sectors: Criminal Justice, Financial Services, Health & Social Care, Digital & Social Media, and Energy & Utilities. The AI Barometer also identifies a number of wider areas where AI has huge potential to address key challenges facing society, including operating an efficient green energy grid, understanding the impact of automated services on vulnerable people, and tackling misinformation. Its aim is to identify “pressing opportunities” for data and AI use, as well as challenges on which the CDEI can develop further work and advise the UK government on “policy priorities”.

At the launch of the AI Barometer, the CDEI stated that a strong message of the report is the real opportunity for data and AI to tackle issues facing society today. However, the Barometer notes that not all these opportunities are equal: some will be easier to achieve than others, and the hardest to achieve may be those with the highest potential benefits to society. The CDEI said the Barometer is intended to convey a sense of “urgency” about identifying the most pressing, and most difficult, barriers and issues that must be addressed to achieve the full potential of data and AI use.

What are the key findings?

The AI Barometer highlights the potential for AI and data-driven technology to address society’s greatest challenges. However, the analysis suggests we have only begun to tap into the potential of this technology, for example in improving content moderation on social media, supporting clinical diagnosis in healthcare, and detecting fraud in financial services. Even those sectors that are mature in their adoption of digital technology (e.g. the finance and insurance industry) have yet to maximise the benefits of AI and data use.

Some opportunities are easier to realise than others. ‘Easier to achieve’ innovations tend to involve the use of AI and data to free up time for professional judgement, improve back-office efficiency and enhance customer service. ‘Harder to achieve’ innovations, in contrast, involve the use of AI and data in high stakes domains that often require difficult trade-offs (e.g. police forces seeking to use facial recognition must carefully balance the public’s desire for greater security with the need to protect people’s privacy).

Alongside looking at opportunities, panellists were asked to rank a series of risk statements according to their impact and likelihood. Some of their judgements were to be expected, for example, with technologically-driven misinformation scoring highly in healthcare. Yet the scoring exercise also brought to the surface risks that are less prominent in media and policy discussions, for instance the differences between how data is collected and used in healthcare and social care, and how that limits technological benefits in the latter setting.

While the top-rated risks varied from sector to sector, a number of concerns cropped up across most of the contexts examined. These include the risks of algorithmic bias, a lack of explainability in algorithmic decision-making, and the failure of those operating technology to seek meaningful consent from people to collect, use and share their data. This highlights the value of cross-sector research and interventions.

Several barriers stand in the way of addressing these risks and maximising the benefits of AI and data. These range from market disincentives (e.g. social media firms may fear a loss of profits if they take action to mitigate disinformation) to regulatory confusion (e.g. oversight of new technologies like facial recognition can fall between the gaps of regulators).

While many of these barriers are daunting, they are far from intractable. Incentives, rules and cultural change can all be marshalled to address them. The AI Barometer highlights examples of promising interventions from regulators, researchers and industry, which could pave the way for more responsible innovation.

Three types of barrier merit close attention: low data quality and availability; a lack of coordinated policy and practice; and a lack of transparency around AI and data use. Each contributes to a more fundamental brake on innovation – public distrust. In the absence of trust, consumers are unlikely to use new technologies or share the data needed to build them, while industry will be unwilling to engage in new innovation programmes for fear of meeting opposition and experiencing reputational damage.

What happens next?

Over the coming months, the CDEI will promote the findings of the AI Barometer to policymakers and other decision-makers across industry, regulation and research. The AI Barometer itself will also be expanded over the next twelve months, looking at new sectors and gathering more cross-sectoral insights. Additionally, the CDEI is embarking on a new programme of work that will address many of the barriers identified in the AI Barometer as they arise in different settings, from policing to social media platforms.
