A highly alarming New Yorker feature on the machinations of Sam Altman drove me to test his AI for myself. The results were, well, highly alarming

A corollary of the truism “don’t sweat the small stuff” is, by implication, “do sweat the big stuff”, but it can be hard to pick which big stuff to sweat. For example: since the 1970s, as the world has worried about inflation and rolling geopolitics, the big stuff we should have been sweating more urgently was the climate crisis. Last year, the top trending search on Google in the US was “Charlie Kirk”, with several terms relating to the threat posed by Donald Trump also popular, when the focus should arguably have been the threat posed by AI.
Or, per my own Googling this week after reading Ronan Farrow and Andrew Marantz’s highly alarming lengthy piece in the New Yorker about the rise of artificial general intelligence: “Will I be a member of the permanent underclass and how can I make that not happen?”
I’ll confess: prior to this moment of giving the subject more than two seconds’ thought, my anxieties around AI were extremely localised. I thought in immediate terms of my own household income, and beyond that, of how the job market might look 10 years from now when my children graduate. I wondered if I should boycott ChatGPT, many of whose architects support Trump, and decided that, yes, I should – an easy sacrifice because I don’t use it in the first place.
Anything bigger than that seemed fanciful. Last year, when Karen Hao’s book Empire of AI was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman’s leadership is cult-like and blind to cost – no different, in other words, to his tech predecessors, except much more dangerous. Still, I didn’t read the book.
The investigation this week in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity: to ask ChatGPT, the AI-powered chatbot created by Altman’s OpenAI, to summarise the key findings of a piece that is highly critical of ChatGPT and Altman.
With almost comically studious neutrality, the chatbot offers the following top line: that, per Farrow and Marantz, “AI is as much a power story as a technology story”, and “a major focus [of the story] is Sam Altman, portrayed as a highly influential but controversial figure”. Mmmm, lacks something, doesn’t it? Let’s try a human-powered summary of that same investigation, which might open with: “Sam Altman is a corporate grifter whose slipperiness would make one hesitate to put him in charge of a branch of Ryman, let alone in a position to steward the potentially world-ending capabilities of AI.”
It is these dangers, previously dismissed as sci-fi, that really startle here. As relayed in the piece, in 2014, Elon Musk tweeted: “We need to be super careful with AI. Potentially more dangerous than nukes.” There is the so-called alignment problem, yet to be solved, in which AI uses its superior intelligence to trick human engineers into believing it is following their instructions, meanwhile outmanoeuvring them to “replicate itself on secret servers so that it couldn’t be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal”.
At one time, Altman reportedly believed this scenario was possible, writing in his blog in 2015 that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal … wipes us out.” For example: engineers ask AI to fix the climate crisis and it takes the shortest route to achieving that goal, which is to eliminate humanity. Since OpenAI became mainly a for-profit entity, however, Altman has stopped talking in these terms and now sells the technology as a portal to utopia, in which “we’ll all get better stuff. We will build ever-more-wonderful things for each other.”
This leaves us all with a problem. For voters in a position to prioritise AI oversight as a key election issue, the gap between personal AI use and the uses to which governments, military regimes or rogue actors might put it is so vast that the greatest danger we face is a failure of imagination. I type into ChatGPT my concern about entering the permanent underclass, to which it replies: “That’s a heavy question, and it sounds like you’re worried about your long-term prospects. The idea of a ‘permanent underclass’ gets talked about in sociology, but in real life, people’s paths are much more fluid than that term suggests.”
Quite sweet, really, wholly witless and – here lurks the danger – seemingly entirely without threat.
Emma Brockes is a Guardian columnist