How bias occurs in Artificial Intelligence

Given that Artificial Intelligence is “artificial”, created by algorithms and non-human learning mechanisms, it may seem unusual that bias can occur. However, the old adage “Garbage In, Garbage Out” (GIGO), which holds that bad input will produce bad output, has been around since the invention of the computer. Because computers operate using strict logic, invalid input may produce unrecognisable output, or “garbage”. Even Artificial Intelligence (AI), which may not always use strict logic, can suffer from the GIGO effect.

Artificial Intelligence and Big Data is one of the UK’s four Grand Challenges, which will see AI used across a variety of industries to put the UK at the forefront of the AI and data revolution. However, recent reports have shown that there is a low level of understanding of AI among the public, particularly in the Black, Asian and minority ethnic (BAME) community; fear about the implementation of AI, particularly in automation and robotics; and considerable bias in the way that AI is being implemented.

Bias Spiral

Diversity UK asserts that the spiralling effect of bias (the Bias Spiral) stems from five areas:

  • Systemic bias fuelled by a lack of diversity among employees in AI companies.
  • Human biases being embedded into the AI and Machine Learning algorithms.
  • The quality and reliability of test data, and the probity of data enhancement techniques.
  • Deployment of AI and whether optimal control and feedback loops are used.
  • Governance in a sector with a weak regulatory framework, unarticulated ethical implications and undefined risks and liabilities.

Systemic bias

There is a diversity crisis in the AI sector across gender and race. The Global AI Talent Report 2019 found that only 18% of authors at leading AI conferences are women, and according to the AI Index 2018 more than 80% of AI professors are men. In the AI industry, women comprise only 15% of AI research staff at Facebook and 10% at Google. For black workers the picture is even worse: only 2.5% of Google’s workforce is black, while Facebook and Microsoft are each at 4%.

This lack of representation leads to systemic bias in how AI companies work, how they serve their customers, what products get built and who benefits from their development.

Human biases

In November 2019, in its article Black Facebook staff describe workplace racism in anonymous letter, The Guardian reported that workers at the firm say they are treated as if they do not belong at the company. “[W]e are sad. Angry. Oppressed. Depressed. And treated every day through the micro and macro aggressions as if we do not belong here,” employees wrote in the memo. The issue had been raised over a year earlier in former employee Mark Luckie’s post about how Facebook is failing its black employees and its black users.

Human biases may lead us to make erroneous decisions in the development of AI. For example, in its article Rise of the racist robots – how AI is learning all our worst impulses, The Guardian reports several examples of how human biases, based on gender or race differences, can lead to bad product development; from a Google image recognition program that labelled the faces of several black people as gorillas to a Microsoft chatbot called Tay, which spent a day learning from Twitter and began spouting antisemitic messages.

Test data

The test data used to develop Artificial Intelligence applications can also lead to bias, from insufficiency, inaccuracy and error to the exclusion of outlying data points. The level of detail and sufficiency of the data can exacerbate the problem, as can classifiers built from decades of historical data and algorithms modified to skirt round groups protected by existing anti-discrimination laws.

For example, researchers have known for decades that women are more likely to be killed or injured in a car crash, and the reason is that most of the dummies used in automotive crash tests ‘by the government and the insurance industry – the ones that determine whether a car gets a coveted five-star safety rating or is named a top safety pick – represent a very specific man. An average adult female crash test dummy simply does not exist’.

Insufficient test data and inadequate representation can affect every facet of our lives. For example, women aren’t properly represented in scientific studies, leading to bias in drug development. ‘For instance, three times as many women suffer from autoimmune diseases as men, and the statistics are reversed for autism. Sex also impacts how a person responds to medication: Women taking antidepressants and antipsychotics tend to have higher drug concentrations in their blood than men do; they also require half as much influenza vaccine for the same level of protection, though they are always given the same amount.’
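One practical, if partial, safeguard is to audit how each group is represented in a dataset before any model is built on it. The sketch below is a minimal illustration of such a check; the records, the ‘sex’ attribute and the 10% threshold are invented for the example and are not a prescribed standard.

    # Minimal representation audit for one attribute of a dataset.
    # The records, attribute name and threshold below are illustrative only.
    from collections import Counter

    def representation_report(records, attribute, min_share=0.10):
        """Print each group's share of the data and flag groups below min_share."""
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        for group, n in counts.most_common():
            share = n / total
            flag = "  <-- under-represented" if share < min_share else ""
            print(f"{attribute}={group}: {n} records ({share:.1%}){flag}")

    # Toy records standing in for a real test set.
    test_set = ([{"sex": "male"}] * 850 +
                [{"sex": "female"}] * 140 +
                [{"sex": "other"}] * 10)
    representation_report(test_set, "sex")

A check like this does not remove bias by itself, but it makes gaps in the data visible before they are baked into a model.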

Use of in-house test data can also lead to bias; in October 2018 Reuters revealed Amazon scraps secret AI recruiting tool that showed bias against women. ‘The Amazon team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent; however, its computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period (the test data). Most came from men, a reflection of male dominance across the tech industry.’
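The mechanism is easy to reproduce in miniature. The toy sketch below is not Amazon’s system; the resumes, labels and marker words are invented, but it shows how a model fitted to skewed historical screening decisions penalises a group marker rather than a candidate’s qualifications.

    # Toy illustration only: a classifier trained on invented, male-skewed
    # screening history. Nothing here reflects any real company's system.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "software engineer java leadership",               # screened in
        "software engineer python captain chess club",     # screened in
        "software engineer python womens coding society",  # screened out
        "data analyst sql womens engineering society",     # screened out
        "data analyst sql leadership",                      # screened in
        "software engineer java",                           # screened out
    ]
    screened_in = [1, 1, 0, 0, 1, 0]

    vec = CountVectorizer()
    model = LogisticRegression().fit(vec.fit_transform(resumes), screened_in)

    # Two new resumes differing only in a group marker phrase.
    candidates = [
        "software engineer python leadership",
        "software engineer python leadership womens coding society",
    ]
    for text in candidates:
        score = model.predict_proba(vec.transform([text]))[0, 1]
        print(f"{score:.2f}  {text}")
    # The second candidate scores lower only because the model has absorbed
    # the historical pattern attached to the marker words.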

Deployment of AI

In her book Biased, pioneering social psychologist Professor Jennifer Eberhardt explains how unconscious biases can seem small and insignificant, yet they affect every sector of society, leading to enormous disparities, from the classroom to the courtroom to the boardroom. Until recently, large-scale AI systems were being developed almost exclusively in technology companies, within university research laboratories and within government. However, with the availability of larger open datasets and cheap computing power, the deployment of AI is now within the grasp of many more companies and organisations.

But hasty deployment of AI without consideration of the bias spiral can lead to the wrong outcomes. Recently, London’s King’s Cross station had to abandon its controversial surveillance scheme after it was discovered that local police gave the King’s Cross owner images of seven people for use in a facial recognition system. The AI-based technology maps faces in crowds and compares them to images of people on a watchlist, which can include suspects, missing people and persons of interest to the police. The cameras can scan faces in large crowds in public places such as streets, shopping centres and football grounds. Critics argue that the images of too many innocent people are harvested without their consent, and there have been concerns about the regulatory framework governing facial recognition and about its effectiveness, with studies suggesting it is less effective at accurately distinguishing black people. Similarly, in 2017, the South China Morning Post reported that a Chinese woman was offered a refund after Apple’s facial recognition allowed her colleague to unlock her iPhone X.
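At its core, such a system reduces each face to a numerical embedding and compares it with the embeddings of people on the watchlist. The sketch below shows only that matching step under assumed inputs; the vectors, identities and the 0.6 threshold are invented, and in practice the embedding model and the threshold choice are exactly where group-dependent error rates creep in.

    # Minimal sketch of the watchlist-matching step. It assumes face
    # embeddings have already been produced by some recognition model;
    # the vectors, identities and threshold are illustrative only.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    watchlist = {                          # identity -> stored embedding
        "person_A": np.array([0.9, 0.1, 0.3]),
        "person_B": np.array([0.2, 0.8, 0.5]),
    }

    def check_against_watchlist(probe, threshold=0.6):
        """Return the best match above the threshold, or (None, threshold)."""
        best_name, best_score = None, threshold
        for name, stored in watchlist.items():
            score = cosine_similarity(probe, stored)
            if score > best_score:
                best_name, best_score = name, score
        return best_name, best_score

    probe = np.array([0.85, 0.15, 0.35])   # embedding of a face seen in a crowd
    print(check_against_watchlist(probe))
    # A loose threshold, or an embedding model that performs unevenly across
    # demographic groups, produces false matches that fall more heavily on some people.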

Governance

‘The use of AI systems for the classification, detection and prediction of race and gender is in urgent need of re-evaluation,’ stated the AI Now report ‘Discriminating Systems: Gender, Race and Power in AI’.

Understanding bias in data, and fixing it, requires sound governance of how the data was collected, for what purpose and in what social context it was produced. In recognition of this, the UK Government Office for Artificial Intelligence published the Data Ethics Framework and a Guide to using AI in the Public Sector to enable public bodies to adopt AI systems in a way that works for everyone in society.

Data ethics is an emerging branch of applied ethics which describes the value judgements and approaches we make when generating, analysing and disseminating data. This includes a sound knowledge of data protection law and other relevant legislation, and the appropriate use of new technologies. It requires a holistic approach incorporating good practice in computing techniques, ethics and information assurance.

Beyond data ethics lie questions of rights, such as ownership of AI-enabled systems, intellectual property, privacy, human rights, democratic rights and constitutional rights, all of which are affected by the implementation of AI. These issues have been much discussed following the Facebook/Cambridge Analytica case in 2018, which revealed alleged misuse of personal data for political advertising and demonstrated how the underlying values of the data protection rules are essential for democracy. The EU has recently adopted a series of additional initiatives to support free and fair elections, reflected not least in European Parliament debates and resolutions.

Five ethical principles of AI

In the UK, Lord Clement-Jones, Chair of the House of Lords Select Committee on AI, outlines the committee’s recommended five ethical principles which, it says, should be applied across sectors, nationally and internationally:

  • Artificial intelligence should be developed for the common good and benefit of humanity.
  • Artificial intelligence should operate on principles of intelligibility and fairness.
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  • All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

It is still to be determined exactly how these principles will be adopted by developers of AI applications.

AI: Inclusion podcasts

In its AI: Inclusion podcasts, the equality and inclusion think tank Diversity UK examines how bias occurs in the application of Artificial Intelligence and how some of those biases can be mitigated. The podcasts form part of a wider awareness campaign to counter the threat of bias in AI, to inform and educate the public about equality and inclusion initiatives, particularly in relation to race, and to promote greater diversity in the tech sector in Britain.