The Myth of Easy Writing
How the influence of generative AI keeps slamming into creative writers
Over the past few months, I’ve been reminded that walking — smoothly, efficiently, effortlessly — can’t be taken for granted.
One morning in late June, after rising jabs of back pain, I woke up with a weak right leg. Then I discovered I couldn’t bend my foot, almost falling down the stairs outside my bedroom. For a number of weeks, I could barely limp around the block, huffing with the effort. But I kept at it, along with a daily round of physical-therapy exercises, and in the process recovered my gait. I relearned how to walk. Last week, I found myself marveling at normal walking, newly aware of my body moving through space.
If a quick fix had been offered when this first happened, would I have taken it? Of course. I’m human. But once I’d seen doctors, rattled by tests indicating the extent of damage, what was on offer amounted to pain pills or, if nothing improved, surgery. Given my mother’s travails with failed back surgery that put her in a wheelchair, I was and am wary of high-tech fixes. So far, my doctors agree.
Yet it’s a seductive dream, solving everything from physical deterioration to ADHD to climate change with technology. It’s not a bad dream, but tech optimism, often driven by business interests, can sweep away concerns about what’s getting fixed.
Consider generative AI and the push to use it for all sorts of writing projects. One of the most pernicious misconceptions promoted by AI enthusiasts is that writing doesn’t need to be hard. The craft so many of us have spent decades honing and figuring out how to teach can now supposedly be reduced to a machine tool that will “collaborate” with you in seconds. The writing that humans do, this thinking goes, will increasingly involve prompting the machine and editing what it comes up with.
This is backward. Writing starts with our own ideas and stories, not whatever a bot churns out. First drafts are often crappy, but I learn from the equivalent of my stumbles around the block.1 I often need to write through several drafts to get to what I want to say. I won’t argue here that AI is never helpful or that all kinds of writing are the same. I will argue that relying on generative AI to tell a story, especially as a shortcut, imperils how we learn to voice our flawed, quirky selves.
Finding your own voice has become such a commonplace for literary organizations that it has the ring of cliché. And yet, the process of finding that writerly voice is one of the best ways I know for making meaning of individual experience. How we learn to be ourselves is mediated by other writers, books and music we love, the social media we inhale — but personal voices emerge via years of living, not by querying a machine.
Which brings me to how outraged many writers were last week about the bizarrely shambolic embrace of AI by the nonprofit NaNoWriMo (National Novel Writing Month). Since 1999, NaNoWriMo has promoted a popular challenge to write a novel (50,000 words) in November. The organization’s stated mission is to “provide tools, structure, community, and encouragement to help people find their voices, achieve creative goals, and build new worlds — on and off the page.” Many of us at other literary nonprofits have applauded NaNoWriMo’s growing online community project, as I did in the old days at
At the beginning of this September, however, just as the school year was getting underway, NaNoWriMo laid out its “position on artificial intelligence,” stating that it doesn’t “explicitly condemn any approach, including the use of AI.” Worse, a bullet-pointed section in its initial statement (one worthy of ChatGPT and now deleted) highlighted the “classism” and “ableism” inherent in criticism of generative AI.2
This led to a social-media storm. In a Bluesky post, Roxane Gay said she was “embarrassed” by how NaNoWriMo was “trying to ennoble nonsense.” Even snappy YouTuber D’Angelo Wallace weighed in: “I think this is absolutely the worst AI take I’ve seen.” He called it “disingenuous,” noting along with other critics that one of NaNoWriMo’s sponsors is ProWritingAid, the company behind an AI tool.3
Several NaNoWriMo board members have resigned, including YA author Maureen Johnson, who had long been involved with the organization. In a New York Times feature about the fallout, Johnson underscored the hard work of writing that originally drew her to the November challenge:
“It was a way of encouraging people to sit down and set aside a block of time to learn to build writing muscle by drafting, by writing badly, by getting over self-doubt and boredom and writer’s block.”
That makes her X message to NaNoWriMo doubly sad: “I want nothing to do with your organization from this point forward. I would also encourage writers to beware — your work on their platform is almost certainly going to be used to train AI.”
At the end of last week, in a “Note to Our Community About Our Comments on AI,” the “NaNoWriMo Team” apologized for making mistakes but couldn’t seem to resist mentioning the way “debates about AI on our social media channels became vitriolic.”
I’ll hold the vitriol, but the evolving tech-friendly approach to writing exemplified here troubles me beyond clueless references to classism and ableism (even if they were ridiculous). There’s more at stake than one nonprofit screwing up because of poor communication, although the lack of clarity on a platform for writers is ironic.
A writer’s voice is tied to the things they notice. A viral outpouring indicates that people care, but comments with the most hits, which an AI search would highlight, aren’t the only things worth noticing. My writerly notice takes in the research I’ve been doing about generative AI and its impact on writing and media — even an old TV series like The West Wing, which I’ve been rewatching this summer. So, I’ll nod to a few other pieces I’ve been reading, too, because that’s what writers do: they name their worlds, explaining where their ideas come from.
As creative writers were slamming back last week, “What Happens When the Bots Compete for Your Love?” by Yuval Noah Harari cut to the big consequences for political and social life. Historian and author of the 2015 book Sapiens: A Brief History of Humankind, Harari opens his NYT piece with “Democracy is a conversation.” Then he hits on why that conversation has frayed with the use of different technologies. As he puts it, “by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation.” He adds:
“In a political battle for minds and hearts, intimacy is a powerful weapon. An intimate friend can sway our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people.”
I’d like to share Harari’s conclusions with anyone who agrees with NaNoWriMo’s executive team. In their defensive apology note, they still insist the organization shouldn’t be “at the forefront” of a conversation about AI. Why not? They claim to support writers yet don’t name who’s behind the ideas they’ve expressed, obscuring their sources. The mass production of intimacy is closely linked to fictions spun by humans as well as bots. In the video clip below, Harari talks about “weaponizing intimacy” with AI — and stories sway us, too. As do the biases, often unconscious, that frame the way we tell stories or make arguments.
The vehement response of authors to the NaNoWriMo mess has been framed as an “AI backlash,” although backlash implies that creative people weren’t questioning the value of AI before. Given the Hollywood writers strike, lawsuits about published work being used to train bots without compensation, Scarlett Johansson’s battle with OpenAI for appropriating her voice — and reams of commentary — that’s not true. Instead, abstruse tech talk, melodramatic fears of robotic doom, and vague claims about helping humanity have provided tech executives like Sam Altman with cover.
Avoiding what’s been going on amounts to gaslighting: individuals can do whatever they want — it’s not our job to talk about it — and if we point the finger at a bunch of culprits, maybe you won’t notice. Take NaNoWriMo’s deleted section regarding “classism”:
“Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.”
Mainstream publishing is indeed a bastion of privilege with many gates, but using AI isn’t a magic key to opening them. NaNoWriMo has been roasted for this passage, among others, because the misdirection is obvious. It assumes writers are editing during November rather than creating their own stories — or that hundreds of thousands of writers doing the November challenge, many of them teenagers, are hiring editors for a mythical crop of bestsellers for a bigger mythical crop of readers.4
Let’s return to how easy any of this is, given that the novel-in-a-month premise has also stoked the myth. Writing a book is hard, and it’s harder than ever to get published by a traditional press. (If you’re relying on the next generation, see another recent opinion piece by Mireille Silcoff: “I Paid My Child $100 to Read a Book”). While many younger writers may have plans for their 50,000 words — hey, I can write a bestseller in a month! — the reality is that most won’t succeed. There are many obstacles, but none is an excuse for wishing away the grunt work involved. If it were easier (using “tools that leverage AI” in the hack speak on the NaNoWriMo site), completing the challenge would not feel like the personal triumph it is.
All this reveals the insidious way tech jargon and abstractions about doing good have crept beyond the libertarians of Silicon Valley. Buried beneath the virtue-signaling is the assumption that the tech companies themselves aren’t responsible for gross inequality of access or that they aren’t benefiting from the use of their bots. Never mind ripping off published authors and artists or deep-seated biases that exist in the large language models that underpin AI tools. Or the fact that tech companies have devalued the “content” of writers for years, operating as if anything filling the digital sphere is just interchangeable bits or outright slop that nobody needs to pay for.
Two other writers caught my attention last week in a seemingly unrelated NYT podcast: “On Children, Meaning, Media and Psychedelics.” For his show, Ezra Klein interviewed Jia Tolentino of the New Yorker, and AI barely got a mention. But these days, when two sharp-eyed media observers like Klein and Tolentino talk, they often dive into issues of personal authenticity.

The wide-ranging conversation, Klein notes, originally took place in June but got pushed back on his schedule because of intervening political news. It’s ostensibly about children’s screentime, but he emphasizes how surprising the episode is: “It’s about the tension between pursuing pleasure, or what I might call meaning, and pursuing the kinds of achievements we spend most of our lives being taught to prize.”
Given the uproar over AI from other writers, Tolentino’s worries about writing being surveilled on digital platforms leap out at me. She says of her tween self:
“The way I was processing my life in narrative, . . . or the way I was writing my life into its existence, was in a notebook, where no one could see it, and no one would ever profit from escalating or distorting it or testing it against anything. And so much of that seems tied, for me, to the lack of silent, invisible, constant surveillance.”
We give away our own voices at our peril. I think of all the paper notebooks I’ve stored away since high school, some of which I haven’t looked at for decades and maybe never will. But I used the words I poured forth then to travel to other stories and worlds. I haven’t forgotten the effort all the rough writing took, or the passion that made it feel worthwhile.
After a difficult summer, I’ve found renewal in my writing voice as a hard-won, precious thing, just as I revel in regaining my ability to walk. I can’t help recalling the afternoon, years ago, when my toddler first managed to hoist himself up on both feet, balancing for a few moments — and the look of surprise and joy on his face.
For crummy and shitty first drafts, insert obligatory nods to Stephen King and Anne Lamott.
NaNoWriMo has since updated its original “FAQ” notes several times. As of this date, it’s basically reduced the original statement to one opening paragraph with a token caveat: “the ethical questions and risks posed by some aspects of this technology are real.” No kidding.
If you’re not thoroughly sick of the NaNoWriMo saga, D’Angelo’s entertaining video — “‘criticizing AI is racism,’ says AI-backed writers group” — takes a deep dive into all the other messes the organization has been fielding, including accusations that its now defunct online forums had been used to groom young writers.
Maybe in November some NaNoWriMo writers are revising a draft they already have, and more power to them. I’d even venture that AI tools at this point in the process could help (although not as a quick fix). See the stacks of
and of for thoughtful approaches to using generative AI with students.