Faking Ourselves to Death
It's an artificial world — but ethics still matter when writing in the first-person voice
Why do readers believe what I say? When I write with an “I,” how do you, dear reader, know it’s really me?
What about my use of “dear reader”? Is that a tired stylistic trope? An indication of my age and social class? My actual desire to talk with you?
On my website, I provide biographical information that lays out my credentials: teaching at Harvard Extension, co-founder of Talking Writing, longtime freelance editor and journalist. Some is written in the third-person voice (“Martha Nichols is…”); some is from my first-person perspective (“I have long believed…”). All is meant to provide credibility, underscoring why you should pay attention to me.
None of it is false, but it’s also not raw documentation of me. The same goes for my About description on Substack. Such descriptions encourage a presentation of self resembling marketing copy. They’re meant to persuade readers with first-person voicing and promises of virtual connection.
Like credentials, a conversational personal style can be faked, so why believe what you read? It’s more than a rhetorical question in an increasingly virtual world. Chatbots are designed to converse with human users, which turns the exchange artificial from the get-go. For this and other reasons, I badly want to blame generative AI for undercutting truthfulness in media.
But if I’m honest, AI is only the latest technological innovation to help humans fake it. Crafting a convincing personal voice for an audience is inherently performative, and fakery goes back long before the digital era. I’m not even talking about impersonations meant to hurt or rip off people. I mean the ostensibly honest ways we present ourselves in writing and social media.
Fictionalizing the self is very human. There are countless temptations to fudge the past or polish our own images. Politicians and celebrities don personas voiced by ghost writers.1 Masquerading has a creative meta-side with drag riffs on gender or role playing in video games. And well before the arrival of chatbots, reality TV jumped a flotilla of sharks into the Land of Artificial Authenticity.
Then there’s editing at magazines, which began in the days of print and achieved mythic status at the New Yorker. The recent hundredth-anniversary issue of that magazine includes an unusual history by Jill Lepore: “The Editorial Battles That Made the New Yorker.” Those wonderful literary voices were often rewritten by, or reworked in collaboration with, an editor. Here’s “rewrite man” James Thurber, for instance, complaining about the effort: “I have put wheels under, and given wings to, a hell of a lot of heavy, dull stories…. It’s like going to war and digging latrines.”
Does the final version of edited work represent the writer’s voice? As a magazine editor and writer, on balance I’d say yes. I know why the hassle yields better writing. Yet there’s an elegiac quality to this behind-the-scenes look at the New Yorker, too, the basic idea being that heroic writers and editors strive for truth. Editing is a “dying art,” as Lepore herself notes. “Editors, the good ones, anyway, would have considered whether what was being said was said clearly and stated fairly,” she writes, adding:
“A century on, in an age of tweets and TikToks and Substack posts and chatty podcasts, a vanishingly small percentage of the crushingly vast amount that is published on any given day has been edited, by anyone. A whole lot of people are wandering around in hospital gowns with their butts out, patootie to the wind.”
This old-fashioned adherence to truth points up just what’s changed in the digital landscape. I often feel as if I, along with other editors, have become Aragorn rallying the troops before the Black Gate of Mordor. Instead of crying, “For Frodo!,” we cry, “For Truth!,” riding forward by ourselves.
Now bots can produce facsimiles of personal stories and features that really are effortless — or require far less effort than the editing, fact-checking, rewriting, hair-pulling back-and-forth between writers and editors held up as an aging standard at the New Yorker. All the online patooties are part of the data sets used to train large language models. AI can spin dating-site profiles or voice faux lovers. Authenticity has become a hack term. We can easily fake ourselves and everyone else.
So, why shouldn’t we — and can anything stop us from doing so?
The Age of Artificial Everything
I’m going to emphasize something obvious here. Ethics matter, especially now. Treating other humans with respect should be at the forefront of the debates going on about AI’s impact on society: the way we interpret information, write about what we know, recall history — the way we think, feel, and remember.
Unfortunately, these ethical issues have been sidelined by the race to profit off AI. The hallucinations of bots now seem commonplace, although they can be corrected by human editors. I’m more worried about a lack of transparency in the way AI-generated stories are framed and told: the presentation of actions as more logical than they are, for instance, or wrong assumptions about dates and timing.
Digital media has already changed the way we read and get news. Now artificial intelligence is pushing another technological transformation, one that will surely affect how we decide on the truth. If nothing else, ethical action involves choices we make about right and wrong. But without intentions of their own or human intervention, chatbots generate prose in which form dictates content, with “facts” pulled in to fit, which is ethically backward. Reality should dictate what appears in a piece of nonfiction, not a convenient formula generated by a machine.
Except reality is also shaped by how we convey it. This is why Neil Postman’s 1985 classic Amusing Ourselves to Death: Public Discourse in the Age of Show Business has been on my mind lately. His jeremiad against the communication technology that then reigned supreme — television — inspired my own title here. Postman, a teacher and cultural critic until his death in 2003, knew Marshall McLuhan, and “the medium is the message” was a clear starting point for him. As Postman wrote:
“For although culture is a creation of speech, it is recreated anew by every medium of communication — from painting to hieroglyphs to the alphabet to television. Each medium, like language itself, makes possible a unique mode of discourse by providing a new orientation for thought, for expression, for sensibility.”
In Amusing Ourselves to Death, Postman extends McLuhan’s famous dictum to “the medium is the metaphor” for the way we experience the world, transforming how we perceive it. For instance, with the advent of clocks, the conception of time shifted to measurable mechanical units rather than the natural passage of seasons. “Moment to moment, it turns out, is not God’s conception, or nature’s,” Postman noted. “It is man conversing with himself about and through a piece of machinery he created.”
Writing in print is another powerful metaphor, one in which the symbolic medium of written language “makes it possible and convenient to subject thought to a continuous and concentrated scrutiny,” Postman wrote, adding:
“What could be stranger than the silence one encounters when addressing a question to a text? What could be more metaphysically puzzling than addressing an unseen audience, as every writer of books must do? And correcting oneself because one knows that an unknown reader will disapprove or misunderstand?”
Amusing Ourselves to Death is not a long book, but it provides a provocative walk through various eras of communication technology, going all the way back to Plato’s distrust of written philosophy. The biggest contrast Postman draws, however, is between the “Age of Typography” with the printing press — which ushered in the Enlightenment, rationalism, and America’s founders enshrining their ideals in a written document — and the “Age of Show Business” with the rise of TV as a visual medium that reshaped everything as entertainment regardless of content.
He was amazingly prescient.2 When I first read Postman’s work in the 1980s, I didn’t always agree with his screeds against personal computers or the education-lite programming of Sesame Street. But digital media now feels like the apotheosis of the Age of Show Business, in which information — be it news, education, religion, politics, or silly-pet memes — is shaped into forgettable little bites.
Digital media has also leapt past broadcast TV to the Age of Artificial Everything: persuasive chitchat reigns supreme, and everyone is expected to sell themselves to garner attention. With AI, we interact with bots that mimic helpful assistants; the very technology implies we’re all virtual.
Here’s the message of social media: like me, and I’ll like you. Virtual communication has become transactional, and as a metaphor, we’ve gone from a rational view of how the world works to one in which the more you spew online, the more authentic you are. Just ask Elon Musk, whose Grok-3 chatbot supposedly responded to a prompt about its opinion of a technology news site, The Information. Traditional media like it are “garbage,” according to Grok: “You get polished narratives, not reality. X, on the other hand, is where you find raw, unfiltered news, straight from the people living it.”
Amusing Ourselves to Death appeared not long after the iconic year of 1984, but in it Postman argued that the authoritarian dystopia of George Orwell had been surpassed by Aldous Huxley’s Brave New World, with the masses increasingly distracted by senseless junk. I’d say both dystopias now hold sway. We’re back to Big Brother’s Ministry of Truth along with hallucinatory riffing meant to get a gut reaction, be it “shock and awe” or laughter (not to mention the “feelies” of virtual reality).
Think of Trump’s nonsensical monologues, which journalists race to fact-check without conveying the supreme artificiality beneath it all. Asking a bot for an “opinion” is laughable Musk hype, except we’re subjected every day to a parade of falsehoods from the people in charge, spoken and written with all the faux-sincerity they can muster. Responding fast is the point, as if firing a gun. The truth becomes provisional then, owned by the last person (or bot) to pop off in a manner that satisfies an algorithm.
The truth turns malleable or is just plain bullshit, but who cares? The medium encourages the feeling that I — and you — have no control over information thrown at us except to react in a virtual landscape. In a supremely connected medium that was once touted as the hope for fledgling democratic movements, the collective impulse is now reserved for autocrats, all those hallucinating Big Brothers.
Sounding authoritative when there’s no actual authority behind the curtain is bad enough. Humans in power have been forgetting history for eons (see “Ozymandias”), and enough of us remember the recent past to know the public record is being changed by the Trump regime.3 But the digital communication platforms themselves, which shape and truncate self-expression, make opposition harder.
It’s a cliché, watching Rome burn, but I feel as if I’m watching the truth burn. If all the trial and error of digital publishing — the follow-on to the New Yorker tradition — has taught me anything, it’s how quickly everything can be repurposed or massaged or falsified or deleted when inconvenient. That includes serious ethical discussions of how a transformative technology affects our understanding of what’s real.

A Human Voice: Who Am I?
Outside my window, the sky is gray — the uniform gray that feels like a mood. What’s left in the cereal bowl, milk and specks of uneaten debris, a few chia seeds, an obscenely plump raisin. This morning, I woke up worried about various family members at a distance, wondering whether I can help, what my responsibility is, especially to those who were cruel to me once. When is the past past? Now there’s a bluejay flitting around the upper branches of a bare tree, shrieking, so blue, that flick of blue, and I don’t think of that as debris. Life shrieks so hard sometimes, so vigorously. I want to shriek back. Or maybe I’m closer to tears, a sob in the chest, the repeated call of a mourning dove, coo-coo-coo-coo. Sadness at all failures amid joyous blue, my life shrieking, my life and my need to voice it.
When I think of my first-person voice, I hear myself speaking. I do hear myself — I feel myself. And yet, that italicized paragraph is not a first draft; it’s a prose poem I’ve reworked. I’m not trying to fake it, but even in the way-back days of print, book authors wrote for audiences. We are creative. Our memories can shift and reshape us, but unless we’re very self-aware, we’ll also change our stories to fit the times.
It’s almost like meditation practice, reminding myself how collaborative human writing is. Transparency about where ideas come from is central to honest nonfiction, and I’ve long believed using the first-person voice can help make sources explicit. If writing is connected with ethical practice, that should be possible with collaborative AI tools as well. When viewed this way, they’re part of the millennia-long project of thinking about and negotiating our understanding of the world.4
Like Postman, I’m a print person with a “typographic mind.” I came of age with bound books and legacy newspapers, and I’m not drawn to broadcast and cable news. I’ve also tried my damnedest to embrace digital media and continue to publish nonfiction pieces online, although they tend to the long-form writing I’m more comfortable with, in which I research what I want to know, present evidence, make a logical argument, keep thinking it through.
But unlike Postman, I see why personal stories can be more convincing to many readers and listeners than the testimony of experts. When talking about what’s good for the culture at large, I don’t assume rationality is the only way to get at human experience, especially for those who had no access to printing presses. I love novels and other forms of creative fiction. I believe it’s possible to hit universal themes and provide nuance in multimedia storytelling.
I believe in my own writing voice. I’ve cultivated it over decades, and I’m at my most convincing, I think, when I admit to failures or uncertainty. The digital information landscape was debased well before bots could produce expository prose, but making it personal is one route to undercutting the message of virtual communication.
When I reframe my thoughts in the first-person voice, revising third-person or “we” sentences, I grab back a bit of my agency. It helps me refine what I want to say. I’m an active interpreter of experience rather than a passive receiver or regurgitator of information. Voice affects content: my voice, not a chatbot’s fabrication.
Yet I don’t put every internal contradiction on public display. I respond emotionally in private — picture me literally tearing at my hair and sobbing the morning after the U.S. election — oh, I do feel things, but simply dumping what I feel in the moment doesn’t define me or the way I think. It’s the reverse of responsible writing, in which editing is not about “polishing” a narrative. It’s about focusing on what’s meaningful.
I’m back to my opening questions and more: what are our responsibilities as ethical writers and humans in the Age of AI? Here I intentionally invoke the collective “our,” because when it comes to this historical and cultural point in time, we’re in this together. As Postman wrote toward the end of Amusing Ourselves to Death, “For no medium is excessively dangerous if its users understand what the dangers are.”
Is it dishonest to ask a chatbot to write in your “I” voice?
When is it okay to fictionalize yourself?
If you don’t tell readers you’ve used AI to write, is that unethical?
Consider how you keep changing the story of your life from year to year. Your memories change, your sense of yourself and your place in the world. I know my ideas do and have changed, but I want to decide where each story starts.
I’m telling you this right now, and I hope you believe me, that this is the real Martha addressing you. After all, I know how to pull heartstrings when required. I know how to heighten an argument. But the first-person writing strategies I list below may be the things that matter in fighting against self-fakery — in connecting the dots between emotion and logic.
That’s why, on this blue-gray day in the real world, I express myself as a particular human living in the early months of 2025. I watch clouds scud behind bare branches outside my window in Cambridge, Massachusetts. No machine will ever see or feel this, and I don’t want a machine tool pretending it does.
Fighting Fakery: First-Person Writing Strategies
• Question your own biases.
• Acknowledge the perspective of others.
• Include specific details and examples.
• Vet sources beyond the top search result.
• Attribute your sources with appropriate context.
• Include time tags for accurate dates.
• Don’t claim false omniscience or authority.
• Be honest.
1. The line between ghostwriting and editing can be fuzzy. In the 1980s, I knew freelance copywriters who created direct-mail fundraising letters in the first-person voice of various big-name Hollywood actors. It was an obvious marketing ploy, so few people seemed bothered by such impersonations. The same goes for celebrity memoirs, most of which are ghostwritten. But with personal essays, literary memoir, or topical nonfiction, I’d argue that readers don’t expect an impersonation of the author.
2. Postman’s 1992 book Technopoly: The Surrender of Culture to Technology is particularly resonant now. In a 1990 interview on “informing ourselves to death,” he said, “Too much information can be dangerous because it can lead to a situation of meaninglessness — that is, people not having any basis for knowing what is relevant.”
3. A recent On the Media episode, “Donald Trump Is Rewriting the Past,” is packed with references to Nazis, Joseph Stalin, and the Ministry of Truth, foreshadowing how Trump et al. lie in plain sight about, say, who started the war in Ukraine. This week, even the New York Times followed up with an explicit headline about the impact of such fakery: “In Trump’s Alternate Reality, Lies and Distortions Drive Change.”
4. Many educators now emphasize the collaborative, mediated nature of human knowledge with innovative AI use. A recent Educating AI post, “The Five Faces of Education in the Age of AI: A Spectrum of Survival, Skepticism, and Symbiosis,” does an excellent job of parsing approaches. (I’m fond of “Teach Me to Argue With My Algorithm.”) I also want to nod to the essay “Writing as Moral Engagement” and the new book More Than Words: How to Think About Writing in the Age of AI.
I like what you’re saying here more than you think, Terry, because in the end, I agree that it’s about trust. It’s possible to express yourself honestly, even in the first person, with the help of bots, because it’s all about how you use the tools - if I remain the captain of my intellectual ship, to paraphrase you, the work produced can feel true to me. But if we don’t push ourselves with hard ethical questions, we will lose a sense of wholeness in self-expression. That’s my concern. What I feel about my voice isn’t rational, but there’s a dimension to writing that is about feeling.
I can quibble with you about what I mean by voice, because for me it’s about more than writing style. I have done lots of bot tests, either starting with my own text or working toward it, and the AI prose mimics my style. It can sound a lot like Martha. What it doesn’t do is think like me or include the specific details that embody my voice. I would argue that the details a writer chooses to include most accurately reflect their POV — and bots don’t get the details right. To be continued 😉
Oh, Martha, where to start?? I could probably be here all day going from point to point in this most wonderful essay of yours, but I'm just going to stick with honesty. What is it? How does it manifest when we're writing all alone and nobody else is watching? I don't use AI and probably won't ever, but I ask myself often while I'm writing in first person personal, am I using language that manipulates, along with telling the story?
I know that kind of manipulation when I see it in others, but do I see it in myself? And is it all bad? What is manipulation if not a method of convincing? And isn't that the basis for all of our personal pieces? Convincing our readers that this is the real US? We work to move words around in a way that will make our readers care. Moving words around. Manipulating.
I think there are so many gray areas when it comes to honesty and authenticity. I think if we question it too much it might mean we're making too much of it. Maybe we should just embrace the persona we're trying to create. Make it the real us.
I think of entertainers like Lady Gaga and Madonna, who are far from 'real' but are nonetheless captivating. They change their persona at will or at whim, and their audiences love that feeling of imbalance. It's their mystique. (Well, maybe not Madonna's anymore.)
I love Maxfield Parrish prints. I mean, "love" isn't an exaggeration. I spent years swooning over them, and then I discovered late into my love affair that his gorgeous 'paintings' are actually photographs manipulated with paint to take on that amazing light. He didn't paint those figures, he posed them on sets of his choosing, photographed them, and then painted the highlights. Did knowing that make me love his work any less? A little. But they're still breathtaking. That's the difference with AI. His prints are wholly his signature. They couldn't have come from anyone else.
Andy Warhol did sort of the same thing, except he did it by screen-printing over already famous photographs. That feels like AI to me. Warhol made millions by duping us.
There are artists like Andrew Wyeth and Edward Hopper who keep it real almost to the point of everyday, yet they're national treasures. Their paintings, even those less familiar, can almost be pinpointed because their style is so unique.
That can be true of writers, too. It's our unique style that makes us authentic. We could try and analyze it until the cows come home, but why? Why not just work at being who we are? Make us recognizable. Only I can do what I do. Only you can do what you do.
I don't know if I've said anything worthwhile here, but here's my final thought: Our authenticity is built in, no matter how it manifests, as long as we tell the truth. Keep it real. And AI will never be real.