
AI: It must be more human

01 September 2023

Eve Poole is worried about the coding that has been left out


AI CHURCH is already here. Admittedly, it is still a gimmick. But I know that every week, in some pulpit or other, there is another sermon that owes something to ChatGPT. AI will soon take the pain out of generating all those weekly service sheets. But AI as a key plank of Save the Parish? I hope not.

We are still at the stage of finding out how this new technology can help us, and thrilling anew at each fresh discovery. But our delight in its novelty is a dangerous distraction from the bigger and more urgent task: how can we help it?

AI is unusual as a human tool. We have designed many tools before to make our lives easier and to save us labour; but this is perhaps the first whose aim is to replace us. Copying human intelligence is a heady enterprise. Israel’s prophets would line up to warn of folly and hubris. But, unheedingly, we have dived straight in, giving this task to the very wisest and most trusted of our global citizens. . . Oh, wait. . .

What we have ended up with is AI designed in the image of its average creator: rational, introverted, individualistic, and highly precise in its operation and decision-making. It has been born at the zenith of our love affair with science and materialism, and there is no room for the non-rational in its design. But now dawns the creeping realisation that AI might career out of alignment with us, and that we might lose control over our creation in a ghastly real-life re-enactment of every man-made monster trope that has ever graced the stage or screen.

As I argued in my article “Will the robots of the future be kind or cruel?” (Comment, 23 December 2022), when we left out all that irrational “junk” code, we left out the good bits. Specifically, we left out the bits that are the reason that we do not generally career out of control as a species.

IF YOU think about that foundational gift of free will, it is a risky choice. How would you design around it to stop the species’ becoming summarily extinct? Well, you retro-fit some ameliorators. There are three pairs of them.

First, emotions and intuition. Humans are vulnerable, because their young take nine months to gestate and are helpless in infancy. Emotion is a good design choice, because it makes us bond with our offspring and band together in community to protect the vulnerable. This gives humanity’s young some chance of making it to adulthood. We bolster this with intuition. Where concrete data are not available, humans use their “sixth sense” to pick up wisdom from the collective unconscious, which helps to de-risk decision-making.

Then, how do we stop them making bad choices? We design in another pair of stabilisers: uncertainty and mistakes. A capacity to cope with ambiguity stops people rushing into precipitate decisions and makes them seek out others for wise counsel. And if they make mistakes? Well, they learn from them. And mistakes that make them feel bad will develop their conscience, steering them away from repeated harms in future. Phew!

But now that we have corrected human design to promote survival, what motivators are needed for future flourishing? A propensity for storytelling allows communities to transmit their core values and purpose down the generations in an efficient and memorable way. These stories last for centuries, future-proofing the species through learned wisdom from the lived experience of their ancestors. Our religious traditions are past masters at this.

Its ameliorating partner is meaning-making: a species that can discern or create meaning in the world around it will find reasons to keep living in the face of any adversity, and the human species will prevail.


In and of themselves, these properties look flaky. But, to the discerning, they smack of soul.

In other fields of design, we have learned humility from nature, because biomimicry has taught us that there is no better primer for problem-solving than the fine-tuning that we see all around us in the design of creatures that fly, climb, hide, heal, and sense far better than we do. But, when we came to design AI, we did not look closely enough at the blueprint, because there was just too much that we did not understand in human design.

But, if we really tried to copy well, we would not shy away from the difficult bits. Specifically, if we wanted to understand soul, we would call in the experts to help. But, if you are a thrusting young coder, striding along the treadmill at your standing desk, how do you know whom to Google? You could go down rabbit holes finding experts in each of these areas, who would disagree with one another as a matter of professional courtesy, leaving you extremely puzzled and none the wiser. Or, you could just ask your vicar.

WHEN a priest is given charge of a parish, the Bishop hands them the Deed of Institution, and says: “Receive this cure of souls, which is both yours and mine.” This is often the moment in the service when the full weight of the charge really lands; so most clergy get chills when they think about it.

It is an archaic phrase, the cure of souls. Here, cure means care, but in a sense that also involves restoring souls to God. This is the Church’s agenda, and I think that it should be our agenda with AI, too, because the Church is expert in human intelligence, which is what artificial intelligence is trying to copy. Specifically, the Church is expert in the very bits that are being left out.

In the world of AI, there is much talk about the Control Problem and the Alignment Problem. But anyone who has ever brushed up against the liturgy will know that this, in humans, is the Church’s day job. We were beautifully designed by God, with all the coding that we need to keep us right; and yet we err and stray, which is why we meet together week after week, to hear again the good news, and to encourage each other to stay on track. This is the cure of souls.

WE NEED to get over our awe at the inventiveness of AI, and our worry about our inadequacy, and get stuck in. Humanity has designed AI badly. It is fatally flawed. We need to fix it quick. And you are the community that holds the key.

Here is but one example. In programming rules to govern decision-making, you have to decide on an ethic to guide your logic — for instance, in the case of autonomous vehicles, or automated triage systems. Because morality is a highly contested space, the safest thing to do is to choose a default that feels neutral to you so that you do not find yourself in the midst of a culture war.

The ethic that has found most favour in the secular world is utilitarianism. Its aim of the greatest good for the greatest number seems so self-evident that it feels unexceptionable as a choice. But adopting this ethic means a commitment to prioritising ends over means, and anyone steeped in Christian ethics immediately hears alarm bells.
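To make the point concrete, here is a minimal sketch, in Python, of what a utilitarian triage rule looks like once it is written down as logic. Everything in it (the Patient fields, the benefit measure) is invented for illustration, not drawn from any real system: the point is simply that the code maximises an aggregate, and individual dignity has nowhere to appear in it.

```python
# A minimal, hypothetical sketch of utilitarian triage logic.
# The Patient fields and the benefit measure are invented for
# illustration; no real triage system is being described.

from dataclasses import dataclass


@dataclass
class Patient:
    name: str
    survival_chance: float    # probability that treatment succeeds
    life_years_gained: float  # expected years of life if it does


def expected_benefit(p: Patient) -> float:
    # The utilitarian core: each person is reduced to a single
    # number of expected "good" to be maximised.
    return p.survival_chance * p.life_years_gained


def allocate(patients: list[Patient], beds: int) -> list[Patient]:
    # Rank purely by aggregate benefit. The elderly and the frail
    # score low by construction; nothing in this logic can register
    # that each patient is an end in themselves.
    ranked = sorted(patients, key=expected_benefit, reverse=True)
    return ranked[:beds]


ward = [Patient("A", 0.9, 40.0), Patient("B", 0.6, 5.0)]
print([p.name for p in allocate(ward, beds=1)])  # ['A']
```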

The famous herd-immunity strategy at the start of the coronavirus pandemic is a case in point. Implementation of this as a public policy would have meant knowingly sacrificing the elderly, the disabled, and the weak, to save the majority of the population. In utilitarian terms, this makes complete sense. But, as humans, we hold on to an idea that we are somehow special and precious, and that even those who are not “useful” to society deserve dignity and respect.

This is why we continue to resist eugenics and cloning, and to police embryology and medical policy. For Christians, this is because we believe that every person is made in the image of God, and, as such, is an end in themselves. But even the non-religious tend to baulk at the brutality of any regime that culls the weak. We know why, and we need to teach this back to a community that has forgotten.

As experts in human design, we need to ensure that it is being properly copied into AI, while at the same time taking more seriously our calling as the experts in the human cure of souls, too.


Dr Eve Poole writes on theology, economics, and leadership. Her book, Robot Souls: Programming in humanity, was published by Routledge last month at £22.99 (Church Times Bookshop £20.69).


Excerpt from Robot Souls (Chapter 6)

IN A famous lecture, the anthropologist Margaret Mead once brandished a human femur high above her head, pointing to the area on the thigh-bone where a fracture had healed over. She explained that this, more than a pot or a tool, was the first sign of civilisation: that a community had cared for one of its injured.

Because for much of the history of the Philosophy of Mind, mind and soul have been conflated, the allergic reaction to religion in current intellectual circles serves as a useful discipline to drive precision. If mind in its most general sense maps to consciousness, how does the soul map on to these, if at all?

In the disambiguation of consciousness, we looked at its philosophical definition as an ability to experience qualia. In robotics, this idea of self-awareness has been further refined, because there, as for humans, AI is embodied in a physical structure.

Looking at how babies and toddlers learn, self-awareness in this context can be reduced, qua Lipson, to the very specific ability to draw up a mental map of your own spatiality in order to work out how to move physically. [The robotics engineer] Hod Lipson’s robots learn to walk in the same way that toddlers do, through learning in their bodies by trial and error.

It is this capacity that permits our physical “zero-shot” learning, whereby a human can size up an unfamiliar object and then move it, because we have already built up a spatial self-map and know how to tackle this kind of challenge. Lipson’s approach helps to explain Emergentism rather better than AlphaGo can, because it is easier to see the evolutionary imperative of self-awareness when you consider how crucial consciousness would have been to the survival of a species gaining complexity in both brain and body.
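By way of illustration only, here is a toy sketch, in Python, of that trial-and-error route to a self-model: a one-joint “arm” (entirely invented here; Lipson’s robots are far richer) babbles random motor commands, fits an internal model of how its body responds, and can then reach a new target with no further trials, the physical zero-shot move described above.

```python
import random

# Toy sketch of trial-and-error self-modelling. The one-joint "arm"
# and its physics are invented for this illustration.


def true_arm(command: float) -> float:
    # The body's real response to a motor command, unknown to the robot.
    return 0.8 * command + 0.1


def learn_self_model(trials: int = 2000, lr: float = 0.1):
    # Motor babbling: issue random commands, observe where the arm
    # actually ends up, and fit an internal model
    # position ~ a*command + b by reducing the error after each trial.
    a, b = 0.0, 0.0
    for _ in range(trials):
        cmd = random.uniform(-1.0, 1.0)
        error = (a * cmd + b) - true_arm(cmd)
        a -= lr * error * cmd
        b -= lr * error
    return a, b


a, b = learn_self_model()
# Zero-shot use of the learned self-model: reaching a new target
# needs no fresh trials, only an inversion of the model.
target = 0.5
print("command for target:", (target - b) / a)
```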

What the Princeton neuroscientist Michael Graziano calls “semi-magical self-description” would have been vital to our day-to-day ability to find food and avoid predators, whereas learning how to be clever enough to win at Go is a luxury for a later phase.

This certainly supports the argument of the Emergentists that, on this logic, AI will naturally evolve consciousness at the appropriate time, when the tasks it is given and the complexity of its wiring make that an inevitability, in order for it to fulfil its function. AI would ultimately be able to experience qualia as data, and appreciate the colour red, because programmed into its logic would be the information it required to set this in context.

Does this, however, somehow transform the computer in the Chinese Room into a person? Legally, perhaps: under Bostrom’s rules, as soon as AI is both Sentient and Sapient, it merits full moral personality and, in our current culture, full protection in law. But I think we would all want to argue that there is still something qualitatively different between AI learning to appreciate the colour red, and a human spontaneously doing so.

In French this would be the difference between the verbs for knowing, savoir and connaître. Savoir is the kind of knowing that we can give AI; connaître, that familiarity with red, comes from somewhere else.

BUT could this ever be a salient enough argument to hold weight, or is it just a wail of hurt from a species that once thought it was permanently special? There is an episode in the modern Doctor Who franchise about a Second World War monster, a child with a gas mask fused to his face who is turning the rest of London into gas-mask-wearing monsters, all wandering around like zombies asking “Are you my mummy?”

The problem is resolved when the Doctor realises that alien nanogenes, programmed for healing, had got the design of the first injured child they met wrong. Assuming that the gas mask was part of him, they “healed” him with the gas mask attached. It is only when they “learn” about his mother, by reading her DNA when she hugs her son, that they can reconfigure the infected humans as normal, and the world is saved.

This story illustrates the difference between what is called source code and executable code. In programming, the first step is to write down what you want the program to do, as source code; this is then compiled into machine code and shipped as executable code. The latter is the “black box” that is handed over. You might copy the executable code, but, if you have no access to the source code, you have to guess at the underlying logic and rules.
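Python happens to make the distinction easy to demonstrate (a small sketch; any compiled language would show the same): the source states the intent in human terms, while the compiled object exposes only mechanical instructions, from which the original logic can only be inferred.

```python
import dis

# Source code: the human-readable statement of intent.
source = "def greet(name):\n    return 'Hello, ' + name\n"

# Compilation produces a code object, the executable form.
code = compile(source, "<example>", "exec")

# Given only the compiled object, all we can inspect are raw
# instructions: the names, comments, and intent are gone, and
# recovering them is inference, not reading.
dis.dis(code)
```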

This is essentially what we have been doing with fossils and geology and astronomy to come up with our theory of evolution and the Big Bang, and it is the epistemology behind all of our inferred knowledge. It is also how we are explaining consciousness; yet I see behind consciousness a design that I would call the soul.

As we have seen from our analysis of schools of thought about the soul, there are a number of explanatory schemes for this state of affairs. If one believes in the concept at all, one is likely either to assume that it is part of a design that has prospered through evolution, so must have a teleology linked to survival and success; or, one may glimpse a divine creator behind it, who has left behind their hallmark. Either way, it would be useful to understand as much as we can about it, and why we often act as though it is important.

This also immediately flags an issue with AI: if we are only copying across functionality that we see and perhaps poorly understand, are we designing in fused gas masks? Could we do better? As Nick Bostrom argues, we are badly in need of fresh “crucial considerations” and difficult original thinking to drive the next step change in the development of AI.

Might trying to understand the soul take us to the next level? But the problem is: the soul is not a “knowable item”, and, if we stare at it for long enough, it does not look like a perfect Form: all we see is junk code.
