It’s a daily occurrence now to read articles outlining the imminent threat of AI: the precarious state of humankind in the face of mind-bending technology off the leash. And the warnings are coming not from technophobes but from the engineers and designers themselves.
The “Godfather of AI”, Geoffrey Hinton, left Google this month and left no doubt about the potential threats: a flood of misinformation will soon mean that normal people will “not be able to know what’s true anymore”.
OpenAI boss Sam Altman fears “significant harm” from the technology, although he has faith in regulation, telling a Senate committee that “powerful AI is developed with democratic values in mind”. Meanwhile, China has demanded that AI algorithms reflect the core values of socialism. I’m not sure these things can be reconciled.
Writing in The New Yorker, Matthew Hutson summarises arguments put forward by researchers who think the AI dangers are real: “In the worst-case scenario envisioned by these thinkers, uncontrollable A.I.s could infiltrate every aspect of our technological lives, disrupting or redirecting our infrastructure, financial systems, communications, and more.”
The ancient Tower of Babel story in the Old Testament feels eerily prescient. Humankind decides to build a tower that “reaches to the heavens” so that “we may make a name for ourselves.” Human technology becomes an object of worship, deployed for human glory. It all ends badly, with disunity, confusion, and the thwarting of progress. The centre doesn’t hold.
Lately, both our faith in this technology and our attempts to explain how we might “control” AI have begun to feel hollow. Whatever happens, we are going to need something more substantial and foundational than a naïve trust in “regulations” and “international cooperation” to get us through this moment.