Equality watchdog takes action to address discrimination in use of AI

The use of artificial intelligence by public bodies is to be monitored by Britain’s equality regulator to ensure technologies are not discriminating against people. There is emerging evidence that bias built into algorithms can lead to less favourable treatment of people with protected characteristics such as race and sex.

The Equality and Human Rights Commission (EHRC) has made tackling discrimination in AI a major strand of its new three-year strategy. It is today publishing new guidance to help organisations avoid breaches of equality law, including the public sector equality duty (PSED). The guidance gives practical examples of how AI systems may be causing discriminatory outcomes.

From October, the Commission will work with a cross-section of around 30 local authorities to understand how they are using AI to deliver essential services, such as benefits payments, amid concerns that automated systems are inappropriately flagging certain families as a fraud risk.

The EHRC is also exploring how best to use its powers to examine how organisations are using facial recognition technology, following concerns that the software may be disproportionately affecting people from ethnic minorities.

These interventions will improve how organisations use AI and encourage public bodies to take action to address any negative equality and human rights impacts.

Marcial Boo, chief executive of the EHRC, said yesterday:

“While technology is often a force for good, there is evidence that some innovation, such as the use of artificial intelligence, can perpetuate bias and discrimination if poorly implemented.

“Many organisations may not know they could be breaking equality law, and people may not know how AI is used to make decisions about them.

“It’s vital for organisations to understand these potential biases and to address any equality and human rights impacts.

“As part of this, we are monitoring how public bodies use technology to make sure they are meeting their legal responsibilities, in line with our guidance published today. The EHRC is committed to working with partners across sectors to make sure technology benefits everyone, regardless of their background.”

The monitoring projects will last several months and will report initial findings early next year.

The "Artificial intelligence in public services" guidance advises organisations to consider how the PSED applies to automated processes, to be transparent about how the technology is used, and to keep systems under constant review.

In the private sector, the EHRC is currently supporting a taxi driver in a race discrimination claim regarding Uber’s use of facial recognition technology for identification purposes.