AI's Ascent in Heartless Times
With Sam Altman's return to OpenAI, he has more power than ever, while care for other humans fades into the background

Watching the fracas over Sam Altman and OpenAI’s board, I’ve had a sinking feeling. For business and technology reporters it’s been thrilling, the kind of crisis event — the king is dead! long live the same king? the king is back! — that spawns headlines, stoking interest in the plight of an individual hero.
The OpenAI story, dominated by the Return of the King, distracts from the actual crisis artificial intelligence is hyperfueling. That crisis involves a philosophical shift in how we frame human change and suffering. With the ascendancy of current tech masters — who are overwhelmingly male and economically privileged — I see a devastating decline in the ethic of care.
My heart is heavy today, partly because a good friend has died, but also because I see the long tunnel of a future I don’t like narrowing before me and, more important, my son. We need to talk about this stuff, we all do, but I find so many people confused by the technology or drawn to the hype or the hope of easy solutions to big problems.
The utopian dream animating OpenAI and other such ventures is artificial general intelligence (AGI): that is, machines surpassing human abilities. Yet in the rush to embrace more powerful forms of mechanical thinking, optimizing how we live and work, we may lose the qualities that help us connect with other humans.
With AI, I’m not worried about machines taking over and destroying humanity. What troubles me are the current and near-term impacts of a transformative technology: job losses with no backup plan; the misery of those already on the streets; a dumbing down of human expression; the destruction of serious media and journalism; just a few tech companies controlling how information is distributed globally.
There are winners and losers, something AI boosters obscure. Executives like Altman talk about real-world problems in general terms but with a slippery unwillingness to name how destructive major economic change can be. I sense a profound lack of empathy in such slipperiness.
That may be unfair, but I’ll risk calling out the emperor’s new clothes. As a public figure, Altman troubles me, partly because he’s been so elevated by others. In personal appearances, with his bright blue eyes and gray Henley shirts, he oscillates between charismatic and aw-shucks. At a Bloomberg Technology Summit interview with Emily Chang this past June, she introduced him as “the one and only person who’s going to be deciding our futures.”
With faux modesty, he replied, “I don’t think so.” Gosh, me?
Yet if the events of Thanksgiving week are any clue, much of the business world does believe he’s the “one and only” — and that’s a problem. In the interview with Chang, he referred to the “obvious benefits” of AI like an “end to poverty” or “the opportunity for everyone on earth to get a better quality education than basically anyone can today.” Such sweeping goals are meaningless without details and political action.
Just how we get to a better world is magicked over with references to regulation that he and his tech brethren don’t feel obligated to put in place themselves. At the time of the interview, he’d just completed a global tour and enthused about “the desire for the world to cooperate,” adding:
“Like the number of world leaders who would say things like, ‘I think this is really important, we want to get AGI right, tell all of the other world leaders I am in on it, we’ll work together.’ That came up maybe every time but one.”
Uh-huh. If this sounded absurd last summer, in the wake of the Israel-Hamas war, along with the continuing Ukraine war and Putin’s recalcitrance, it now clanks with barely disguised arrogance — or worse. Who cares once we get past this human mess? It’s collateral damage in the march toward a gloriously optimized future.
As a counterpoint, Amba Kak and Sarah Myers West of the AI Now Institute zeroed in on the way big tech companies “wave off concerns about their own market power, their enormous incentives to engage in rampant data surveillance, and the potential impact of their technologies on the labor force, especially workers in creative industries” even before the Altman saga began. In “The AI Debate Is Happening in a Cocoon,” an early November piece in the Atlantic, they call the much-hyped futuristic fear that machines will kill us all a big distraction. They write:
“Notably, many of the biggest advances in tech regulation in the United States, such as bans by individual cities on police use of facial recognition and state limits on worker surveillance, began with organizers in communities of color and labor-rights movements that are typically underrepresented in policy conversations and in Silicon Valley.”
Now there’s Altman’s return to OpenAI to distract the public’s attention. The business press has shredded the former board, although in his Bloomberg interview last June, Altman waxed on about being part of that nonprofit board for which he had no financial incentives (no equity stake) beyond a “tiny bit of investment” that “I trust the nonprofit to do a good thing with.”
Emily Chang, a sharp interviewer, interjects a “reality check” at one point about large language models such as ChatGPT being trained on data that’s “biased, racist, sexist, emotional, that is wrong.” She asks, “How do you safeguard against that?”
Altman’s response was to claim that one study (which he doesn’t name) showed GPT-4 displaying less implicit bias than humans. He added that they’ve improved earlier problems through feedback from human testers. However, the quality of this feedback and the poor labor conditions for human testers have been questioned in 2023 reports such as Matteo Wong’s “America Already Has an AI Underclass” in the Atlantic.
Other recent studies, including one from Stanford, show racial bias in AI medical diagnoses and elsewhere. Meanwhile, the Brookings Institution and conservative commentary sites argue that ChatGPT has a left-leaning political bias.
All of which is to say that the bots can’t be cleansed of bias, as if bias is a binary matter of right and wrong. The issue is how such tools will be used (and misused) in a multinational, multi-religious, multi-everything world, when the harm of bias comes down to who has power to create the public narrative and who does not.
In firing Altman on November 17, the small OpenAI board clearly struggled, as many commentators have noted. At the very least, they communicated their reasons poorly. But the fact that OpenAI has a nonprofit board does matter, especially because it’s now been reported that several OpenAI researchers sent this board a letter about risky AI development before Altman got the axe.
With a research venture hurtling toward an innovation that’s been compared to the Manhattan Project’s nuclear bomb, OpenAI’s board is meant to provide disinterested oversight: “The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.”
To date, the two women on the board have left, one of whom, Helen Toner, is director of strategy at Georgetown University’s Center for Security and Emerging Technology. On the increasingly divided nonprofit board, Toner reportedly tangled with Altman over potential safety issues she noted at OpenAI in one of her research papers.
The current interim board includes Bret Taylor, former CEO of Salesforce; Adam D’Angelo, CEO of Quora (the only remaining member of the dissident board); and Larry Summers, U.S. Treasury secretary under Bill Clinton.
While I’d never claim that strict gender and racial representation guarantees fairness, this slate of white men tips in the wrong direction. Larry Summers, in particular, gives me pause. In 2005, Summers, then president of Harvard, speculated at a diversity session that men were biologically hardwired to be better at science and engineering than women, one of many reasons a hostile faculty forced his resignation a year later.
Tech business rhetoric is all about being smart and competitive rather than valuing the arts, education, the humanities, the social contract — any kind of messy human experience in which outcomes aren’t certain. Individual entrepreneurship triumphs, and damn all the hearts, bleeding or otherwise.
That the business-tech narrative for AI now rules should be obvious. Except I don’t think it is obvious to most online users. If I seem bitter, the death of my friend in the real world weighs on me. How much I and others cared; how alone she still felt. The spirals of despair and longing for something better can never be easily coded. The whole of a person’s life cannot be uploaded to a machine or culled down.
What remains with such a loss is human care and sorrow, until the machines decide it’s too hard to quantify. So clean up the files, as if they never existed, as so many human histories have been erased from the public record.
“If this really works,” Altman said toward the end of that Bloomberg interview, “you should not trust one technology company and certainly not one person.”
Oh, believe me, Sam, I don’t.
Martha,
My condolences to you at the loss of your friend.
I think you have a well-developed vision of how AI might impoverish our humaneness. I haven't thought about it enough or studied it enough or tried to understand it well enough to foresee its specific harmful effects. I hope you continue writing about AI and some of the specific scenarios you fear.
In the meantime I streamed Oppenheimer this weekend and it brought back nuclear weapons fears. I thought about those fears when you mentioned the two current wars most in the news. It's been almost 80 years since 1945, and I know that when something hasn't happened in such a long period of time, we start to fool ourselves into believing it can't happen.
The only happy note I can end on is that I'm glad I get to read your thoughts.
Best,
David
Martha,
I'm so sorry about the sudden loss of your friend, what a shock. I'm unsophisticated about tech but have found myself curious (and increasingly concerned) about AI and surprised it's not being more widely discussed, at least in my circles. Hard I suppose for humans to conceive of the many possibilities we've not yet lived. Thank you for your thoughtful reflections. I'm sure there's always been some resistance to massive technological shifts and a desire to turn back toward what we know, and yes, what feels ultimately most grounded and human. AI seems of a magnitude greater than many changes before it, however, and you're right, the people at the top making these massively consequential choices for us aren't representative of the population and wield a scary amount of power.