Sam Altman's AI Juggernaut — Can It Be Stopped?
Karen Hao's new book details why we should all be worried about a company and a technology controlled by just a few billionaires
When I saw the pictures of tech billionaires at Donald Trump’s inauguration this past January, my gorge rose at the sight of Mark Zuckerberg, Jeff Bezos et al. seated on the dais, as if they were auditioning to have their busts carved for a Roman palazzo. Sam Altman was there, too, but he kept a lower profile. I didn’t think, Good for Sam! He’s a better human being than Elon Musk. I figured Altman was playing both sides.
Sure enough, the day after the inauguration, he was photographed with Trump, announcing a $100-billion AI deal called “Stargate” to build giant data centers.
At the time, I assumed the unhealthy cronyism of the tech elite and a new political regime was obvious. Now I’ve come to think its toxicity is not obvious enough — and it needs to be. Fortunately, Karen Hao’s excellent Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI came out last week, a counterpoint to all the hype about artificial intelligence that continues to pump up the business press like swamp gas.
OpenAI’s improbable rise provides plenty of dramatic fodder, including not only dueling celebrity CEOs like Altman and Musk but also oddball cofounder and chief scientist Ilya Sutskever. Before the public release of ChatGPT in 2022, Sutskever, at a luxury company retreat with other executives, incinerated in a fire pit an effigy he said represented a “lying and deceitful” AI. Altman, for his part, “shared a birthday with [Robert] Oppenheimer, which he’d point out to reporters,” Hao writes. She also refers to his courting of U.S. senators like Chuck Schumer as a “policy charm offensive.”1
Empire of AI does more than chronicle one juicy company story, however. Drawing on years of reporting for the MIT Technology Review, The Atlantic, and other outlets, Hao weaves in far less savory aspects of the AI business saga. She details the colonial-style exploitation of workers hired for pennies in developing countries to annotate the data (images and text) used to train large language models. Then there are the huge quantities of energy and water sucked up by data centers running generative AI products, contributing to droughts in Iowa, Arizona, Chile, and elsewhere.2
It’s a heavy mix, with many threads and people and countries to keep track of.3 Yet this also illustrates Hao’s overarching point about the global nature of the threat. Most people still have no idea how bots like ChatGPT work, let alone the research premises or biases involved. Enter entrepreneurial fast-talkers like Altman, who make promises but don’t explain what they’re doing. In her introduction, Hao argues:
“While ChatGPT and other so-called large language models or generative AI applications have now taken the limelight, they are but one manifestation of AI, a manifestation that embodies a particular and remarkably narrow view about the way the world is and the way it should be. Nothing about this form of AI coming to the fore or even existing at all was inevitable; it was the culmination of thousands of subjective choices, made by the people who had the power to be in the decision-making room. In the same way, future generations of AI technologies are not predetermined. But the question of governance returns: Who will get to shape them?”
The technology itself is not necessarily bad. It’s the humans in charge who have often proven themselves to be morally and ethically bankrupt. Empire of AI highlights why we need much more public attention paid to generative AI’s impact on climate change, labor, the sources of training data — and the way in which Silicon Valley business dynamics have served the wealthy few at the expense of everyone else.
This is also why I haven’t come to terms with the rise of GenAI, even as I’ve been considering how it might be used effectively for education. Two things can be true at the same time: generative AI is a transformative technology, and the business empires rushing it to market don’t care who gets hurt in the process.
Empire of AI makes clear that Microsoft, Google, Anthropic, and other tech corporations share responsibility for what has become a competitive AI race — and that race is the subtext for OpenAI’s turn away from its original nonprofit mission. Hao’s prologue leaps into the bungled attempt by Sutskever and three independent board members to fire Altman at the end of 2023. Later in the book, she returns to those events in detail, showing both how bizarre it all was and how deceptive Altman could be.
While many investors and employees have spoken effusively about his deal-making abilities, others have noted that Altman tells you what you want to hear before doing the opposite. Hao refers to his “seeming compulsion to distort the truth.” Once Sutskever became convinced Altman needed to go because of his dishonesty, she writes of the board investigation that followed:
“For the independent directors, every instance added up to a single troubling picture: Bit by bit, Altman was trying to cloud their visibility and maneuver in ways that prevented the board from ever being able to check him.”
They were right to be worried. Two of these board members got the boot in the cataclysm that followed. Sutskever caved in to pressure from protesting employees and corporate partners like Microsoft to bring Altman back, as if he were the only CEO who could steer OpenAI to its glorious future. Right after Thanksgiving 2023, I wrote my own post about why Altman’s return seemed particularly worrisome. Hao’s reporting fills in what I and other critics sensed as it was unfolding. (See my piece, “A.I.’s Ascent in Heartless Times,” here and with the image at the end.)4
Sutskever, a storied AI researcher, left in the spring of 2024. After reams of bad press over additional PR messes that year, during a stretch known at OpenAI as the “Omnicrisis,” Altman is now cozying up to an alt-right U.S. president and his followers. Those celebratory pictures of tech CEOs at the start of Trump’s second presidency include Altman hanging out with “dude-bros” Jake and Logan Paul.
He might have been “relegated to an overflow room” at the inauguration, but “Mr. Altman sneaked into the White House,” according to a New York Times article about how he outflanked Musk on controlling A.I. policy from the jump:
“Mr. Altman appealed to Mr. Trump’s love of a big story and of a big deal. Mr. Altman told the president-elect that the tech industry would achieve artificial general intelligence — the hypothetical moment when technology matches human intelligence — during the Trump administration, according to three people familiar with the call. And to get there before competitors from China, OpenAI, Oracle and SoftBank had completed a $100 billion deal to build data centers across the country.”
And there you have it: the blue-eyed boy of technology spinning a president who himself knows the art of the con. The Stargate deal had been in the works for months before the election (as Hao notes), but the public appearance allowed Trump to horn in, calling it the “largest A.I. infrastructure project by far in history.” (Altman’s response: “We wouldn’t be able to do this without you, Mr. President.”) Altman had also donated $1 million to Trump’s inauguration, along with others of his tech brethren, suggesting that the new regime was “a breath of fresh air.”
In the video clip above with Times interviewers, Altman seems so nice, so thoughtful in his pauses and careful wording — and, for me, so full of bullshit. He’s like a shadow Trump, even if he still manages to seem more charming than the excitable, hate-speechifying Musk. By last fall, Altman was claiming that A.I. was on the road to “fixing the climate, establishing a space colony, and the discovery of all of physics” in a post titled “The Intelligence Age.” Meanwhile, Hao wryly points out:
“The more OpenAI faced uphill challenges, the more Altman seemed to overcompensate with public declarations of its extraordinary success. The pattern was becoming so consistent it was turning into a signal: If Altman was being brazen and boastful, most likely something wasn’t going well.”
So, what do we do about the power that companies like OpenAI have amassed and their willful avoidance of reality? Altman himself may flame out, as more evidence of his fabrications — akin to the hallucinations of ChatGPT — comes to light. For that reason alone, investigations such as Hao’s Empire of AI can help spark collective action and lawsuits, and I agree with her about general solutions: regulating what these companies have been doing to date in the shadows; insisting on transparency about the data involved; and scaling down the size of AI models rather than chasing a single, resource-guzzling answer for everything.
She tells heartening stories of community activists in Chile, for instance, stopping Google from building one of its data centers. Hao also describes a small, content-specific AI system in New Zealand for teaching and maintaining the Māori language, which had nearly been lost — an example of how AI can be reframed as a tool that serves human goals and doesn’t waste resources.
But in nodding to such activism, including promising political shifts in Chile and one academic’s call for “another way to relate technological innovations with the earth,” Hao concludes: “It is a noble ambition, and the forces arrayed against it are mighty.”
These for-profit tech corporations control the digital platforms on which AI models are trained, which gives them unprecedented power over information and over how the story gets told. The ideas expressed by some researchers and executives in Empire of AI often sound like bad science fiction, complete with the cultural residue of colonialism and the white man’s burden (AI will somehow “save humanity” from itself). The trouble is, these retro ideas are now on the upswing on too many podcasts and social media feeds. The temptation of that biblical apple — gaining the knowledge of good and evil — could well become the prospect of humans retaining no knowledge at all.
I remain troubled by how much I’m implicated simply by being here. I’ve used Microsoft and Google products for years; I’m testing A.I. for writing instruction, so I use ChatGPT. I don’t want to support any of these corporations, but the difficulty of unwinding my writing and teaching life from them illustrates the monopoly they hold.
I have to remind myself what’s at stake and (again) that two things can be true at the same time: digital media is a boon to my work — but the continuing harm done to the planet needs to be factored into our choices, individual and collective, every day.
Ironically, reading Empire of AI brought to mind a cheesy episode of the original Star Trek titled “The Apple.” In it, Captain Kirk and crew land on a jungle planet where a tribe of childlike indigenous people ritually feeds energy rocks into the maw of a gaping snake monument. Turns out, the planet is protected by a superintelligence under the snake, providing for and protecting the humans there.
The episode is ridiculous, complete with exploding Styrofoam rocks, deadly flowers, and white people in brown paint playing the innocent villagers, who have apparently lost all motivation, creativity, and ability to think for themselves. I’m not the first to point out the “AI outta control” plot that kept popping up in the old Star Trek. Of course Kirk, with Scotty as his heroic engineer, destroys the snake monument and the huge energy complex powering its artificial brain.
And yet, I think of the billions of dollars now being chucked into the snake maw of AI — and how much I don’t want Sam Altman deciding when to shoot phasers. The business forces are mighty indeed, but I thank Karen Hao and other journalists and activists who keep pointing out that these smiling snakes are not our friends.
Hao wrote one of the first profiles of Altman and OpenAI for the MIT Technology Review, published in early 2020: “The Messy, Secretive Reality Behind OpenAI’s Bid to Save the World.” Even back then, touring the San Francisco office, she realized the company was ill-managed. Here’s an excerpt from Empire of AI: “Inside the Story That Enraged OpenAI.”
Just a taste of what Hao’s book presents in detail about data centers:
“Hyperscalers call their data centers ‘campuses’ — large tracts of land that rival the largest Ivy League universities, with several massive buildings densely packed with racks on racks of computers. Those computers emanate an unseemly amount of heat, like a laboring laptop a million times over. To keep them from overheating, the buildings also have massive cooling systems — large fans, air conditioners, or systems that evaporate water to cool down the servers. The equipment all together creates a cacophony of humming, whirring, and crackling that can — especially in underdeveloped communities — be heard for miles, twenty-four hours a day … ”
For shorter takes by Hao, see her Atlantic piece, adapted from the book: “‘We’re Definitely Going to Build a Bunker Before We Release AGI.’” She’s done many interviews recently, too. The one that follows, with Rebekah Tweed of All Tech Is Human, also highlights the kinds of organizations that have come together to resist a narrow, business-oriented vision of AI.
In addition to everything else going on, Hao delves into Altman’s estranged relationship with his sister Annie. In January 2025, Annie “filed a lawsuit in a Missouri federal court … accusing him of sexually abusing her when she was a minor,” reports the New York Times. Sam and the rest of the family deny these accusations; Hao says the truth is “unknowable.” But last year the reporter spent time with Annie, who has struggled mentally and physically. Annie’s attempts to get her brother to listen, Hao writes, “became a microcosm to me of the many themes that define the broader OpenAI story. It also helped me solidify my understanding of how much OpenAI is a reflection and extension of the man who runs it.”