
AI: Technology magnifying human bias

by Erin Green

01 September 2023

Its programming can be unfavourable to the marginalised, Erin Green suggests


BEHIND the glitz and promise of AI, the digital transformation that we are experiencing is fundamentally about power — and power is preserved by excluding people. In Christianity, theologies of liberation prioritise the struggles and stories of those denied power. Historically, this includes women and queer people, racial and ethnic minorities, those living under colonial rule, people with disabilities, and many, many more.

These are the same people who are so poorly represented not only in the development of AI, but in our theological and ethical conversations about it. When we pause and reflect on who is missing, we are taking an essential first step in bringing a world to life where AI is a force for positive social and ecological transformation.

Diversity within the broad AI landscape is improving. Along with the proliferation of AI and related technologies, we find a similar expansion of ethical and theological voices offering a critique of the part that it plays in societies.

Still, there are imbalances. Women, Indigenous peoples, refugees, and children are widely missing from the development of AI, and are often most at risk of harm arising from it.

Digital divisions affect women from birth. Girls, especially in so-called developing countries, are less likely than boys to have access to smartphones and the internet, not to mention basic schooling. This results in poor digital literacy and hinders their participation in school and work — both on- and offline.

In AI, gender bias pops up through subservient female virtual assistants, human-resources software that prefers male candidates, and banking algorithms that unfairly assess credit applications from women. Instead of adding a healthy dose of objectivity to human decision-making, AI can just make already unfair systems even more oppressive.

Women, especially women of colour, are poorly represented in the academic institutions and technology companies responsible for developing AI. When it comes to “Big Tech”, the numbers are stark: women make up only 15 per cent of the AI researchers at Facebook, and an even smaller proportion at Google.

At the same time, women are often the voice of conscience, risking their livelihoods to proclaim the dangers of AI. A former Google employee, Timnit Gebru, protested against the sale of facial-recognition technologies to police forces, and later left Google under contentious circumstances. She now leads her own research centre, outside the influence of “Big Tech”.

ANYONE who knows even a little about AI has heard of algorithmic bias. The term is a sanitised name for the deep systemic flaws that show up in AI and related research. Algorithmic bias is simply human bias finding its way into our technologies, whether by accident or by design. A lack of diversity, both in the design of AI and in the data used to train it, leads to awful outcomes for people of colour.

During the pandemic, the rise of “online proctoring” was a source of distress for many students who were stuck at home, trying to learn in already stressful circumstances. Black students repeatedly encountered problems with online test-taking, as the software could not verify their identities because it was developed for light skin tones.

Healthcare is a particularly vulnerable sector, given the volume and value of the data that it deals with. AI trained on historical data can perpetuate racial bias, with the result that racial minorities again receive poorer-quality treatment. For example, AI draws on large data sets to detect skin cancers, but these data come mostly from people with lighter skin tones, which risks late diagnosis or misdiagnosis for racial minorities in the UK.

BY THE end of 2022, there were more than 108 million forcibly displaced people in the world. More than 400,000 people are born as refugees each year, and the numbers are growing. When we consider the needs of refugees, material things spring to mind: food, housing, and clothing. We may also be concerned for their social well-being and their integration into host countries. Rarely, however, do people link the rights of refugees with the rise of AI.

Increasingly, AI is used for border surveillance and control, as well as to predict the movement of people in response to the outbreak of violent conflict, political instability, or famine. AI is also used as a tool in processing asylum claims, including detection of fraudulent documents and pre-sorting applications.

Despite the high stakes involved, refugees and migrants are often forgotten in AI policy and legislation. The recent European Union AI Act, for example, failed to protect migrants specifically from the far — and potentially biased — reach of AI in decisions affecting their very lives.

AI often replicates old patterns of bias and discrimination which are all around us. The problems start with the research, the very questions that we bring to AI, and continue through the development and use of these technologies. Similarly, attempts to regulate and control AI are dominated by very powerful people — usually white men, usually from the United States. As a result, the majority world faces new forms of colonialism through digital technologies.

DIGITAL colonialism plays out in many ways. These include the race to harvest and own data, the dominance of the English language online, and reliance on social-media content moderators who are living in extreme poverty. For Indigenous communities, AI presents new threats to their languages and culture, which are already made vulnerable through centuries of violent colonialism and genocide.

Indigenous communities have responded compellingly to AI. Indigenous scholars often emphasise relationship, community, and guardianship when they talk about it. This leads to an understanding of data and algorithms remarkably different from the one found in most AI research.

For example, Dr Karaitiana Taiuru, a Māori researcher from Aotearoa (New Zealand), has written treaty-based ethics guidelines for AI that draw on Indigenous wisdom and customs. They reveal a deep concern for future generations, the common good, and sovereignty over one’s data which is rarely seen in dominant discussions about the regulation of AI.

CHILDREN today are unwitting participants in the grand AI experiment. Their lives are excessively documented, published, and commodified for commercial benefit. Study after study affirms strong links between poor mental health and social-media use in children and adolescents. And the way in which children learn and socialise is increasingly mediated by digital devices.

The rights of children require special consideration, because they are unable, or are denied the opportunity, to make decisions for themselves about their data and their digital lives. UNICEF, for one, has called on companies and regulators to develop child-centred AI. This includes a strong emphasis on online safety, protection of their privacy, and preparing them for a world in which their working lives will look radically different from their parents’.

Women, refugees, Indigenous peoples, and children are among those missing from the development of AI and from broader ethical debates about its use. But it is not only people who are left out of our worry and wonderment about AI. Our planet suffers in our race to build for ourselves a digital world.

The world is literally on fire. The planet suffers under brutal heatwaves, forest fires, and hailstorms, as well as the threat of a Gulf Stream collapse.

Since its development is dominated by commercial companies, AI is inextricable from greed and consumption. Together, Google and Facebook haul in about half of the world’s online advertising revenue, with more than a little help from AI.

AI contributes to climate catastrophe in a few ways. It fuels wasteful consumerism through targeted online advertising, factory robots, and delivery drones. AI is also used in resource-extraction, as exploitative mining practices support the production of one billion new mobile phones every year. The AI demand for electrical power is also astronomical. It is reported that, by the close of the decade, “machine learning and data storage could account for 3.5 per cent of all global electricity consumption.” It is time, perhaps, to consider a new warning besides “Think of the environment before printing this email.”

WITH the dominance of AI and all the doom and gloom that goes with it, it may feel as if there is little good news; but I think that there is plenty — truly. For every Mark Zuckerberg who wants to “move fast and break things” (as an early Facebook motto proclaimed), there are thousands of Timnit Gebrus doing creative, liberating, and powerful work for the sake of AI and all those who are touched by it.

Tech workers in Africa have recently moved to unionise; the Vatican has made huge efforts to bring faith and tech together in the Rome Call for AI Ethics; and the Campaign to Stop Killer Robots is bringing civil society, including church organisations, together to campaign for a ban on lethal autonomous weapons systems. If you look for it, there is an abundance of good news.

The topic of AI will come up again and again — in sermons, headlines, and maybe even dinner-table conversations. When this happens, the most powerful thing that any of us can do is to stop and ask: “Who is missing?” In answering this question, we will find our way forward.
 

Dr Erin Green is a theologian who researches artificial intelligence and digital justice.
