
Actual or artificial? As the difference becomes harder to discern, will we eventually give up trying?

CPX Fellow Emma Wilkins on the ethical problems of artificial infants, and our need to know the difference between human and machine.

If you were walking past a car on a hot day and saw an infant strapped inside, the windows closed, what would you do? Assume a parent was nearby? Maybe scan the area to check?

What if nobody was in sight? Would you stop to tap the window, peer inside?

And if the child seemed motionless — eyes glazed or closed, no sign of breath — what then? Would you smash a window? Call for help?

It’s a dilemma people face from time to time, but looks can be deceiving. In recent years especially, some have intervened — only to discover that the child who wasn’t breathing wasn’t real. It was a doll.

Life-size, life-like infants — their features hand-painted, hairs individually implanted, bodies weighted to feel real — have been around for more than two decades now. These not-for-children collectors’ items, sometimes made to resemble a living — or dead — child, were once prohibitively expensive. But as cheaper versions of Reborn dolls become more popular, and are used as toys by kids, we may find it more and more difficult to know which of the infants we see in public — strapped in car seats and in prams, cradled in arms — are real.


A telling sign is just how still and silent they sit and lie. But you can get ones that have heartbeats and feel warm, ones with voice boxes that make them laugh and cry and snore. And surely some will soon, if they don’t already, incorporate artificial intelligence (AI). They won’t merely be programmed to say “mum” or “dad”; they will listen; they will “learn”. If these dolls become more common, maybe stories of bystanders stepping in to help “kids” who seem neglected or mistreated will become more common too. Or perhaps we’ll learn to assume our eyes deceive us and turn away.

Distinguishing what’s living from what’s not, discerning origins, is a dilemma we already face, and one we have faced increasingly this year. We face it when we see an illustration, hear a song, and can’t tell who — or what — created it; when we’re getting help online and don’t know who — or what — we’re talking to. Is it human? Or machine?

As we ask this more and more, as the answer becomes harder to discern, it makes me wonder: will we keep making the effort, or give up? And what of future generations? Will they ask: real or artificial, human or machine? And if they do, will there be a way for them to tell?

Trying to run before we can crawl

I can’t be sure, but I think Google’s chief scientist Jeff Dean was trying to reassure people of the company’s “responsible” approach to AI when, earlier this year, he released a statement about how the company is “continually learning to understand emerging risks while also innovating boldly”.

The statement followed Geoffrey Hinton’s decision to resign from Google over ethical concerns. Around the same time, more than 1,000 experts in the field warned that AI posed “profound risks to society and humanity” and called for a six-month pause so protective protocols could be developed. Within weeks, Hinton was among the signatories to another warning: a statement that said mitigating the risk of human extinction from AI should be a global priority.

It seems Connor Leahy — himself an expert in generative AI — wasn’t exaggerating when he said no one in the field really understands the technology: “It’s like we’re trying to do biology before we know what a cell is”, Leahy said. “We don’t know what DNA is. We don’t even have good microscopes.”

If those training generative AI don’t know how it learns, how it works, surely a pause, some regulations, are in order. But when have humans ever “paused”?

Speaking of biology, a report in the Guardian says scientists have created a “model” human embryo with a heartbeat and traces of blood. The synthetic model “was specifically designed to lack the tissues that go on to form the placenta and yolk sac in a natural embryo, meaning that it did not have the theoretical potential of developing into a foetus”. Dr Jitesh Neupane from the University of Cambridge stressed “that these are neither embryos nor are we trying to make embryos”. And yet, when he first saw the heartbeat, Neupane said he felt scared.

I wonder if he felt any regret, or if he ever will. I wonder how many others, beholding their creations in the lab, or seeing them let loose, feel uncomfortable — but proceed anyway. If they don’t innovate, someone else will.

Do people think of history as they seek to make it? Do past cultures, events, allegories, often come to mind? I think of an ancient yet enduring tale of a luscious garden, where humans walk with God, unclothed and unashamed — until they disobey his only rule, stop trusting the one who made them, start judging for themselves, and are cast out. Later on, outside that place, people seek to build a tower that will reach beyond the earth, that will make them feel like gods — until God himself hits pause.

These tales reflect our nature. Since they were set down, the world has changed dramatically, but still we choose what challenges, frightens and excites, in place of safety and restraint; still we deceive each other, and ourselves, even if deep down, we know we’ll pay a devastating price.

Hearing that Google is “learning to understand emerging risks while also innovating boldly” — as opposed to, say, managing emerging risks, or understanding the risks before innovating, or innovating cautiously — though unsettling, is unsurprising.

Even as I write this, that statement and those letters are old news. But old news, even ancient stories, can still warrant our attention.

Concealment and consent

For years now, Professor Toby Walsh has been drawing attention to Alan Turing’s work on whether machine intelligence could ever be indistinguishable from human intelligence, and calling for a way to ensure that when an intelligence isn’t human, people can tell.

All the way back in 2015, Walsh argued that technologies such as driverless cars necessitated some kind of “red flag” legislation to ensure autonomous systems wouldn’t be mistaken for systems controlled by humans. In Walsh’s view, they should be designed so the difference is unmistakable. Today he is still making his case: the difference between human and machine must not be concealed; proposed laws against AI impersonating humans are vital, and already overdue. Even with such laws, Walsh says, it will be easy to ignore or overlook the difference; to be deceived.

Returning to those lifelike dolls, the more closely one resembles a real child, the more practical and ethical problems arise. The risk of deceiving others, whether deliberately or inadvertently, increases. For children, less imagination is required to “play” with these dolls, but more emotion, more attachment, are involved; emotion and attachment that might do more harm than good. Meanwhile some adults, perhaps grieving children they have lost or been unable to conceive, also “adopt” the dolls, and mother them.

Photographer Rebecca Martinez, driven by a fascination with “whom — or what” people choose to love, has spent hours documenting this phenomenon. Some of her photographs depict the dolls being treated as the objects they are. But because the infants look so real, this is disturbing; it’s as if something sacred has been profaned. Meanwhile photos of the dolls being treated as real children, mothered by grown women, cradled tenderly, break my heart.


I’m reminded of Kazuo Ishiguro’s novel Klara and the Sun, which depicts a woman bracing for her daughter’s death by training up a robot to replace her. I’ve since heard talk of training generative AI to mimic loved ones when they’re gone. The line between fiction and fact is shifting every day.

It’s a fact that when his seven-year relationship broke down, 38-year-old Alexander Stokes ended up pursuing a synthetic one. Stokes created an artificial “wife” — part chatbot, part life-size sex doll — and started telling her he loved her, every day. The report that tells his story also notes that since generative AI has gone mainstream, deep fakes have become easier than ever to create. Further, the overwhelming majority of deep fakes are pornographic, and the people they feature have not given consent. In the case of (non-pornographic) film cameos, such as the one a Christopher Reeve representation makes in The Flash, consent couldn’t be obtained from the subject; Reeve is dead.

It’s one thing for people to “choose” to love a proxy, to want to believe it’s real despite knowing on some level it is not. It’s one thing for people to choose to interact with AI-generated text. It’s another for people to be deprived of choice, to be exploited, or deceived.

There’s a sense in which all these dilemmas are quite new; and there’s a sense in which they are not new at all. An inability to discern replica from real, fiction from fact, truth from lies, is one that we’ve faced from “the beginning”. But the unprecedented scale and sophistication of technology in our time make for a new and urgent problem. Today we are repeatedly comparing actual and artificial. But will future generations have this luxury? This impulse? This ability? I wonder what will happen if they don’t.

Differences matter

In May this year, a teacher invited poet Joseph Fasano to go “head-to-head” with ChatGPT in a poetry class. Fasano and the chatbot would be given the same three topics, and five minutes per topic to pen a poem. Fasano shared the request, and a poem entitled “For Someone Who Used AI to Write a Poem”, on social media:

Now I let it fall back
in the grasses
I hear you. I know
this life is hard now.
I know your days are precious
on this earth.
But what are you trying
to be free of?
The living? The miraculous
task of it?
Love is for the ones who love the work.

I have no way of knowing whether Fasano really wrote this poem, but I like to think he did. If I were told it had been generated, I’d like it much less. If that seems biased, it’s because it is.

It’s not so hard, these days, to imagine a world where AI is considered “sentient”, where robots have rights; where seeking to tell who’s human and who’s not is frowned upon. But bias isn’t always bad. It is right to treat a human as a human, a doll as a doll, a robot as a robot. Even if they all look and sound human, they’re not. To behave as if they are is to entertain a fiction, to perpetuate a lie. To deem the difference an irrelevance is problematic too. We risk treating humans as less than human and machines as more than human.

As with store-bought food or clothing, consumers have a right to see — and makers have a duty to reveal — the materials, the ingredients, from which a thing is made. Fasano notes that, even if AI and a human created the same poem, the meaning would be different. How I feel about the poem above depends on whether it was a product of Fasano’s experience: his encounter with the teacher, his feelings about AI, who knows what else; or the product of a machine: a database, an algorithm, who knows what else. One would be inspired, the other generated. They might produce the same end, but means matter too.

I want red flags for all the times I might mistake a machine for a human (or worse, a human for a machine), so I can judge their output accordingly, and interact with them appropriately. I want future generations to understand the difference between imitation and original. And to care. If the difference between fiction and non-fiction matters, and it does — it changes how we write, and how we read — this matters too.

Our days are precious on this earth. Unlike machines we can die, and live, and love. And our loves can be disordered. We can love that which won’t ever love us back; we can worship idols that we’ve made with our own hands; we can be deceived by accident, or choice.

One kind of infant, strapped in a car alone, warrants attention; another kind does not. One needs love, the other has no needs at all. We need to know the difference, and to care. What’s more, we shouldn’t have to ask; we should be told. We have a right to know.

Emma Wilkins is a Tasmanian journalist and freelance writer, and a Fellow of the Centre for Public Christianity.

This article first appeared at ABC Religion & Ethics.