AI: Could it turn us all into paperclips?

01 September 2023

Andrew Brown puts the potential benefits and threats of AI into their political context


A COUPLE of months ago, I was at a dinner party with a professor of one of the hard sciences at Cambridge. The talk turned to AI, and he told us the following story.

He had asked ChatGPT to write one of the essays that he sets his students. The result, he said, was a solid 2:1 piece of work. One of his colleagues believed that the students were making use of this, but, rather than fight it, he simply passed that week’s batch of marking to the AI and let it, literally, mark its own homework.

It is difficult to grasp the speed at which uses for AI technology are appearing. So, for a flavour, here are a few things from the latest issue of a weekly newsletter aimed at software developers. The big news is the release of Facebook’s “large language model”: the same set of programs that Facebook itself uses is now available to anyone who fills out a form and downloads it.

Then there is an assistant that will write quite complicated programs for you if you tell it in some detail what to do — the example given is a game of “Snake” — which appears, in the demo, to go from nothing at all to a working game in about five minutes. Another app produces short videos from a text description.

Meanwhile, in Ireland, a chain of provincial papers has experimented with publishing a clickbait opinion piece (“Should refugees be sent home?”) that was also generated by AI. These are just the novelties that have appeared in one week.

All these programs are variations on a couple of techniques that broke into the public arena last year, and which are together known as “large language models”, even when the language in question is pictorial. The models consume pretty much everything that anyone has ever published on the internet, and are then trained, using incomprehensibly complex statistical techniques, to recognise patterns within this universe of data.

In particular, they learn which words are most likely to follow other words in any particular context. This is a skill that humans interpret as answering a question. But, when machines do it, they don’t understand the question in the way that a human does. They don’t, in fact, understand it at all, though they have been trained to respond as if they did. Sometimes, this is convenient even for a writer. I can pass the text of this paragraph to one and tell it to change “model” from singular to plural, and it will fix all the affected verbs as well. In a similar way, it is easy to change the tense in which a passage has been written.
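For readers curious what “learning which words are most likely to follow other words” looks like in practice, here is a toy sketch in Python. It is an illustration only: the three-sentence corpus is invented, and a real large language model replaces this simple frequency table with billions of learned parameters, but the underlying task, predicting the next word from the words so far, is the same.

from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
# Real models learn far richer statistics, but the task is the same:
# given the words so far, predict the most likely next word.
corpus = (
    "the vatican thinks in centuries . "
    "the vatican learns from experience . "
    "the church learns from experience ."
).split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the corpus.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # 'vatican' (seen twice, against 'church' once)
print(predict_next("learns"))  # 'from'

Run over the whole internet rather than three sentences, and over contexts of thousands of words rather than one, this is the kind of prediction that ChatGPT performs.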


THE models can also learn which patterns of pixels have been associated with particular words — a skill that humans interpret as being able to understand what a picture shows. Here again, there is no understanding in the machine. The apparent understanding — the descriptions of these pixels — was supplied by human labour.

You’ve done it yourself every time you have been asked to click on all the traffic lights in a CAPTCHA, or, indeed, every time you have labelled a friend in a photograph on Facebook. All the machine is doing is sifting out which patterns are most often associated with which descriptions.
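To make that concrete, here is a small illustrative example in Python. It uses the freely available scikit-learn library and its bundled collection of handwritten digits, images that were labelled by people in just this way; the choice of library and dataset is mine, for illustration, and the sketch stands in for the far larger systems the article describes.

# A minimal sketch of learning from human-labelled pictures, using the
# scikit-learn library's bundled handwritten-digit images. The labels
# (the digits 0 to 9) were supplied by people, just as CAPTCHA clicks
# and photo tags supply labels; the machine only learns which patterns
# of pixels are most often associated with which label.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8-pixel images plus their human-supplied labels

# Hold some images back, so the model can be tested on pictures it never saw.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5000)  # a simple statistical classifier
model.fit(X_train, y_train)                # associate pixel patterns with labels

print("accuracy on unseen images:", model.score(X_test, y_test))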

But the internet is now so vast, and these machines and techniques are so powerful, that they can appear to be a reasonably accurate representation of all the things that internet users care about in the world, and this representation can be manipulated in all sorts of ways.

Many of these are obviously harmful. If you want to download Facebook’s model, you must agree to a long list of things that you will never use it to do. Among them are the obvious categories — human trafficking, child sexual abuse, terrorism, inciting or promoting violence or discrimination, copyright infringement, impersonating particular people, or indeed pretending that you are a human. . . The list goes on.

These would not be banned if the technology could not be used for them, and the agreement is obviously impossible to police. It can all feel completely overwhelming and very dangerous.


THERE are questions that help us to put the current debates about AI into perspective. We can ask first whether the Vatican is alive. We do talk of the Vatican as if it had agency, and intellectual activity. To say that “The Vatican thinks in centuries” is a perfectly sane thing to do, and helps us to understand what it does. To say that it has learned from experience is also quite obviously true.

The “it”, in this instance, is a bureaucracy, an entity made up of people and rules but larger and more powerful than any of its constituent parts. The rules have been modified over time: that’s how the system learns, but, most of the time, they are binding and difficult to change. This is true of any long-lasting institution. But no one supposes that the code of canon law is itself alive. We can talk of a living tradition, but the living is done by human beings.


SIMILARLY, we should not think of the code inside a computer as alive in any sense, even though the present wave of AI is designed in many respects to have us interact with it as though it were not just alive, but human, too. We ask questions, and it answers in well-formed sentences such as a human might produce.

But its motivations are not those of a human being, and neither are the desires and drives of the giant companies or states that built these machines and use them — or make them available — for their own purposes.


Canon law on its own is simply a written set of decision-making procedures, what computer scientists call algorithms. These can also be seen as the products of a technology — writing — which made possible the storage and manipulation of information and the creation of knowledge in ways that are not available to purely oral societies. As one astute critic pointed out, this transformation has costs: “The letter killeth, but the spirit giveth life.” None the less, the benefits are greater, especially in competition with societies that lack the skills of reading and writing.

This can be told as a story of technological determinism: that some technologies will lead to inevitable social consequences, and all we can do is try to ride the wave. Institutions governed by written codes, such as the Vatican, become hybrid bodies, in which humans and codes interact to produce something different from either and more powerful than both.

But technology on its own does not determine anything. It opens some possibilities and closes off others. The choices that then arise can be explained only by human factors. When, at the end of the Middle Ages, printing made literacy more general, the Catholic Church did not increase its power as a result.

Instead, Christian Europe fragmented, and, at the end of this bloody process, what emerged were other hybrid bodies, bureaucratised states as well as Churches, all competing or co-operating with each other, and all of them, apparently, with purposes and strategies of their own.


STATES are not the only large autonomous systems that can be described as hybrids or symbioses between people and technology. Modern corporations are very much the same thing. The political scientist David Runciman points out that when Nick Clegg moved from being Deputy Prime Minister of the UK to becoming a vice-president at Facebook, he became much more powerful, as well as much richer.

Runciman believes that modern states and corporations represent not a new form of intelligence, but a new form of decision-making. These inhuman entities, he said in a podcast, have a completely different attitude to risk from humans’. When the archives opened after the collapse of Communism, we discovered that “the battle plan of the Russian state was at the first sign of trouble to blow Western Europe into smithereens. . . No human beings could think like that, but the state did.”

This inhumanity of bureaucratic or state decision-making could hardly be news to anyone who lived through the 20th century, or who has understood the doctrine of nuclear deterrence. But there is now a fear that widespread AI will supply states and companies with all the information that they need to make even more ruthless decisions.

To some extent, this is already true: look at the ways in which Amazon watches every minute of the lives of its delivery drivers and warehouse workers to ensure that they are maximally “efficient”, which is to say, profitable to the shareholders.

It seems obvious, then, that if AI makes corporations more profitable, it will make authoritarian states such as China irresistibly powerful. This idea has been popularised by Yuval Noah Harari, but has since been subjected to withering criticism by the political scientist Henry Farrell. Farrell points out that the vast quantities of information made available to the Chinese state — essentially a complete record of where everyone in China is, whom they talk to, and what they think or look for on their phones — still have to be filtered and brought to the decision-makers.

At every level in the hierarchy, there is a penalty for giving your superiors news that they do not want to hear — even though this is very often the only important signal in all the noise. So, the news that they really need to hear, which is almost always unwelcome, never reaches them. The mighty decision-making algorithms are fed garbage and reply with better-digested garbage.


SO, THE questions of what effects the great wave of AI might have, and of how frightened we should be of them, can be answered only by asking who will make use of the technology, and for what purposes. The answers are not particularly cheering, but neither are they as apocalyptic as might be feared. The rough beasts slouching towards Bethlehem today have human riders and grooms.

The philosopher Nick Bostrom coined the phrase “paperclip maximiser” to describe a theoretical AI that could end civilisation simply because it carried out some apparently harmless task in an unimaginably destructive way: the paperclip maximiser would want to turn the whole planet into paperclips, and to destroy everything that stood in the way of this objective — such as anything that used metal for any purpose other than the manufacture of paperclips.

This was meant to be a prophecy of the future, but, as many people have pointed out, the paperclip maximisers already walk among us, or trample over us. They are the modern profit-driven corporations, all of them seeking to enrich their shareholders at the expense of every other aim.

The fear of a “God-like AI” is actually the fear of a god-like corporation or state equipped with the newest technology. The paradox is that the only thing that can fight such a paperclip maximiser on equal terms is another corporation or another state, and those are what we will have to build if we are to defend humanity.

We can’t defend humanity, though, without a clear idea of what it is that we are defending. If Christianity is true, then humanity is constituted by its — our — reciprocal relationship with God. This is not something that a computer program, or even a modern state, can develop. Perhaps a hybrid can. Does the Vatican have a reciprocal relationship with God? Does the C of E?
