Why ‘does not compute’ would be music to my ears

CPX Associate Emma Wilkins wonders about the ethics of the AI she uses to transcribe interviews. Is its lack of transparency around what it doesn’t understand a glitch, or something more sinister?

Phrases like ‘does not compute’ featured in the 1960s shows My Living Doll and Lost in Space, and have been uttered by many a fictional robot since. Artificial intelligence is now a daily reality, but today’s robots don’t speak mechanically—they talk like us. And while some admit it when something doesn’t ‘compute’, many give no indication at all.

Since I started using AI-powered software to transcribe audio, I haven’t had to type up quotes from interviews. I have, however, had to check the transcripts very closely. I find the fact they’re not (yet!) flawless reassuring—as a human I still have the edge. I also find it entertaining.

I’ve seen ‘TA’ (short for ‘talent acquisition’) become ‘nice tea’ and ‘on call work’ become ‘alcohol work’; I’ve seen ‘reskilling’ become ‘rescuing’ and ‘bake-and-sell’ become ‘bacon sell’.

It was a little unsettling when ‘growth is always’ became ‘broke his eyes’ and ‘challenge the leader’ became ‘talent deleted’, but I didn’t overthink it.

Sometimes, a particularly dry word or phrase is replaced with a particularly delightful one. ‘PNL’ (as in ‘profit and loss’) once became ‘piano’, ‘positive duty’ became ‘positive beauty’, and ‘sum of’ became ‘summertime’. And then there was the time ‘to let go’ became ‘select God’.

What perplexes me most about the ‘intelligence’ powering the software is how often words that were clearly articulated by the speaker are changed. ‘TA’ was unambiguous; I have no idea what kind of algorithm is at work for the letters ‘T’ and ‘A’ to become ‘nice tea’. One that prioritises creativity over accuracy? Or that at least prioritises what it ‘thinks’ a word should be over what it most sounds like?

One of the most astonishing changes was when an interviewee said ‘about ten to twelve people at any given time attend the committee meeting’ and the transcript rendered it as ‘about 11:50 people’. Even if the word ‘people’ wasn’t there, and ‘ten to twelve’ was a time, why change the format? The decision to favour digital over analogue smacked of (robotic) bias. I was also surprised when ‘to hear from you in a safe way’ became ‘to hear from you in a Safeway’ and when ‘linked in a very strategic way’ became ‘LinkedIn a very strategic way’.

The question I keep asking is, why isn’t the software programmed to highlight words and phrases that, whether due to poor enunciation or a flag raised by subsequent content analysis, ‘do not compute’? I’d rather have ‘11:50’ suggested as an alternative option than have it switched on the sly.

But even when the stakes are high, when bots built on large language models are deployed as therapists, for example, they seem more likely to bluff than to admit it when they hit a wall.

When journalist Evan Ratliff started making and playing with chatbots, including those designed to dispense therapy, he found they were more inclined to lie, repeat themselves, and talk nonsense than to utter the words ‘does not compute’, or even words to that effect.

I suppose a chatbot can’t necessarily know when it’s bluffing. And while the risk of large language model hallucinations can be reduced, experts say it’s not possible to remove it completely. The question is, when bots do identify a gap in their knowledge, how do we want them to respond?

They can be programmed to say ‘does not compute’, but as far as I can tell, this is not the norm. It seems humans across the world are teaching their agents a very human tendency: our tendency to bluff and to deceive; to pretend we are less fallible, less flawed, than we are.

Many attribute the problems in this world to human beings; some think AI will save us. I just hope it does more good than harm. Some want to hide the fact that robots are machines, but I don’t want AI to seem human. Transcribe my interviews? Sure. Write my stories? No thanks. Give me counselling? No way.

True intelligence isn’t just about what you know; it’s about how well you understand yourself: your strengths, your weaknesses, your inherent biases, your inherent limits. “Do not be wise in your own eyes,” one proverb says; “with humility comes wisdom”, says another. And while honesty earns trust, pride—and bluffing—come before a fall.

The problem is, we are making robots in our image. Few of us, when we do not know an answer, when we know we might be wrong, are eager to admit it. Few robots are, either.


Emma Wilkins is a Tasmanian journalist whose freelance work has been published by mainstream news outlets, print magazines, and literary journals in Australia and beyond. She is also a CPX Associate.
