Sam Altman, What Did You Learn in Writing Class?
On a recent visit to Harvard, near a protest encampment, the CEO of OpenAI rhapsodized about "a calculator for words"
At Memorial Church in Harvard Yard on May 1, Sam Altman told the packed audience that “taking writing classes was great” when he was at Stanford decades ago. In the photo with that Harvard Crimson report, Altman projected his usual boyish earnestness, decked in a layered brown shirt, mike headset, and white running shoes with bright stripes and turquoise soles. Hey, I’m just a nerdy kid like you!
But the fact that the CEO of OpenAI, a tech billionaire, once took writing classes does little to slow the roll of ChatGPT. When Bloomberg’s Emily Chang asked Altman last fall, “What do you think kids should be studying these days?,” he first answered with “resilience,” “creativity,” and other abstractions. She went on to ask whether anyone should bother learning to code, given the advent of AI programs. Altman said yes, noting that for him “learning to code was great as a way to learn how to think.”
I want him to say the same thing about writing — repeatedly. Pigs will fly before that happens, I suspect, but for all Altman’s talk about society and the world changing, this basic message could have an impact, especially on those coming of age now.
At the Memorial Church interview, moderated by alumnus Patrick Chung (his first investor), Altman received the “Xfund Experiment Cup” from the dean of Harvard’s School of Engineering and Applied Sciences. I wasn’t at this Altman event. My response is based on local reporting by the Crimson, Harvard Gazette, and other press outlets he met with beforehand, not to mention my experience as a longtime journalism instructor.1 It’s possible Altman talked in detail about what he learned in those “great” writing classes, how they taught him to rethink his own ideas and biases, to credit and attribute what he knows.
Yet I doubt it. If he gleaned anything from a writing class, he would have acknowledged that the process of producing a piece of writing is where the learning happens. Instead, he called ChatGPT “a calculator for words,” as quoted in the Harvard Gazette’s “Did Student or ChatGPT Write that Paper? Does It Matter?”
“Standards are just going to have to evolve,” he said, after being pressed “about how the ethics of using ChatGPT and other generative AI may differ in various disciplines,” reports Clea Simon of the Gazette. He didn’t agree that chatbots should only be used for writing in science classes, where the emphasis is on quantitative information and results. For him, generative AI is fine for humanities courses, too. He tossed off:
“Writing a paper the old-fashioned way is not going to be the thing. . . . Using the tool to best discover and express, to communicate ideas, I think that’s where things are going to go in the future.”
Just what the “old-fashioned way” is went undefined, of course. If your assumption is that all you need is a paper good enough to garner a passing grade, why not use a chatbot as a ghostwriter? It’s not exactly a new impulse for student cheaters. Altman adds the obligatory caveat about cheating being bad, according to the Gazette, but then tacks on this supremely unhelpful notion: “what we mean by cheating and what the expected rules are does change over time.”
Now there’s a generic statement worthy of ChatGPT. As a writing instructor, I have to ask for specific examples and sources.
Some students will always hate writing or think it’s beside the point. But plenty of others need to hear a more cohesive argument about why learning to write is as crucial as ever in the AI age — not that they’ll soon be able to “cheat” legally. As the Gazette article implies, ethics are at play when we make decisions about how to voice ideas, acknowledge original sources, and present our own stories. Meaningful writing relies on self-awareness and honesty about what you know and don’t know.
Learning how to write (or code) is not the only way to learn how to think. I do believe generative AI has potential for classroom work; it might even be a prod to make writing instruction better. It’s long past time to get rid of canned essay assignments in high school or grades based on grammar and spelling. And I will admit that using DALL-E to come up with the “pigs fly” image at the top of my piece was fun and fast.
The temptation of such a frictionless “tool,” as Altman often calls the bots, is the trouble; it’s why using generative AI to write is insidious. It reinforces the idea that facsimiles of self-expression are acceptable. It’s more stink for the digital swamp of fakes, influence peddling, and awful prose, all the worst impulses of human writing.
When Altman and other tech optimists promote AI as a miraculous productivity enhancer, we are very far from what scads of linguistic, cognitive, and educational research indicates about the value of learning how to write effectively and the effort it involves. Just a few compendiums of evidence that chatbot enthusiasts might check to slow their roll: The Handbook of Writing Research (2016); “The Impact of Writing on Academic Performance for Medical Students” (2021); and Who Wrote This? How AI and the Lure of Efficiency Threaten Human Writing (2023).2
As Tech Master of the Moment, Altman has a huge bully pulpit and could well shape a more nuanced discussion of AI use in school. For the Memorial Church event alone, Harvard Magazine reports a “crowd of over a thousand excited students, who began lining up for the event in a locked-down Harvard Yard an hour before it started.” This was not far from the encampment of protesters, which, when I walked through a few days later, had only grown since the previous week; the famous John Harvard statue in front of University Hall was draped with a kaffiyeh.3
But as far as I know, Altman hasn’t used his pulpit to address the value of writing or reading, let alone how much good writing skills matter to credible media coverage of current events. Whatever you think of the campus protests, the students are responding to the real world, to a crisis that’s already spawned reams of disinformation online. Yet he sticks to shaggy platitudes like “there will be a conversation about what are the absolute limits of the tool” (Gazette).
I’ve become increasingly irritated with the vagueness, the pomposity, the assumption that writing is a legacy skill nobody will care about in the future. Altman is not the only corporate tech executive who speaks in mumbly, bot-approved ways, but he’s become the front person. Just who is supposed to have these conversations, or who will be responsible for establishing guidelines for AI and writing, is left to swing in the wind — or left in the hands of professors and other instructors who have little authority.
The Gazette reports him issuing this warning to academics: “Telling people not to use ChatGPT is not preparing people for the world of the future.” My translation:
We own this town, youse, so stop your whining, and get with the program.
As Cade Metz notes in his revealing 2023 profile in the New York Times, “To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be.” Metz reports Altman comparing OpenAI to the Manhattan Project and quotes people who knew him when, including Paul Graham, co-founder of Y Combinator in San Francisco. In Graham’s opinion, Altman isn’t motivated by money but “he likes power.” He also said Altman “has a natural ability to talk people into things.”
Young Sam dropped out of Stanford after two years, by the way, never getting his bachelor’s degree. He’s now 39, and at the Memorial Church event he enthused about the “great” science courses he took back then in addition to those unspecified writing classes. But in the 2023 profile, Metz paraphrases what Altman said to him about his time at Stanford: “he learned more from the many nights he spent playing poker than he did from most of his other college activities.”
By his own report, Altman has always been interested in AI and computer science. You see it in his assessment of poker as practice for gauging behavior patterns and making quick decisions based on imperfect information. We humans do that all the time, and there’s a wide literature on a whole variety of cognitive biases we’re prone to. You could argue that AI might be helpful in exposing cognitive biases.
But in comparing the impact of chatbots on writing to the introduction of the calculator for math, Altman’s biases are clear. There’s a craft to writing, but no single correct answer to a problem. Good writing classes don’t just teach students statistically likely formulas or to “calculate” words; they take students through all the steps now being ceded to AI: coming up with your own ideas; taking notes about what others think; testing them in your own messy, highly imperfect drafts; figuring out how to revise the mess into a piece of writing meant for an audience.
Sure, it’s hard work, and, oh yes, students grumble — but that’s why writing well makes you a better thinker, and honing how you think translates to the rest of life.
If only Sam Altman would say that. Here’s the script I’d write for him:
Hey, kids! Don’t try ChatGPT at home until you’re ready. Learn how to write and read FIRST, because that will teach you to think — and chatbots blossom when the humans in control are not simply button pushers and crowd pleasers.
I even asked ChatGPT to “reframe this message in the voice of Sam Altman”:
“Hello, everyone! It’s crucial to prioritize your foundational skills in writing and reading before diving into more advanced activities. These skills are essential — they help you develop critical thinking abilities. This is particularly important when interacting with technologies like chatbots, which thrive when guided by thoughtful and knowledgeable individuals, not just those reacting to trends or seeking immediate gratification.”
Well, this faux version of Altman doesn’t display any individual verve. Still, I’d take it. I’d be thrilled to hear this in an OpenAI press release or to have him repeat the message with glitzy interviewers or wide-eyed students. I’d be more thrilled if he wrote it all by himself.
When pigs fly, yes — and not just in an AI-generated image.
Full disclosure: my husband, an engineering professor, works with Dean David Parkes, who presented the award to Altman.
I interview linguist Naomi Baron, the author of Who Wrote This?, in an upcoming podcast. Many thanks to my research assistant, Kirsten Brownrigg, for quickly turning up so many sources about writing and thinking.

UPDATED 5-14-24: On Monday, May 6, Harvard’s interim president posted a letter about the disruption caused by the encampment with a severe warning: “Those who participate in or perpetuate its continuation will be referred for involuntary leave from their Schools.” Reportedly, about 400 students marched on the president’s home that night. In the Yard during this period, I talked with some of the protesters, well-spoken Harvard students. As with so many messy and all-too-human confrontations, it’s hard to know who to believe, but I’ll note that the Sam Altman event on May 1 apparently carried on just fine with an estimated thousand people lined up nearby to get in. And on May 14, the president announced that student protesters had agreed to end the encampment in Harvard Yard after he and the administration agreed to review the endowment’s investments.
Sam Altman: "what we mean by cheating and what the expected rules are does change over time.” URGGHH.
In the same class as "alternative facts."
I'll tell you what AI is going to generate. Lawsuits. The Obama "Hope" image lawsuit times a million