How AI Influences Your Point of View
If I let ChatGPT revise my writing, is what it produces mine? Does that matter? Why nobody else can be George Eliot — or me
Last week, I tried out the latest version of ChatGPT, accessing what OpenAI calls “study mode.” The results impressed me more than I expected and worried me, too. My initial response of hey, this is pretty good! soon turned to an examination of the pitfalls of a bot influencing my point of view.
There are no quick takeaways here; that’s my first disclosure. What I thought would be a short post about an AI test has turned into a long one about squishy existential and ethical questions. It’s hard to get across how chatbots work without laying out the specific sequence of prompts and responses, as I’ve done in one section below.
It’s also hard to explain an abstract notion like point of view. What’s my point of view beyond age, rank, and identity label? It’s what I notice. It’s what I decide to tell you.
For instance, a few days ago, because of a chance encounter with a woman wielding a big fieldscope, I saw a rare little blue heron. Its snaky white neck (she told me it was still a juvenile) ducked among the flowering lotus on a pond near my house in the Boston area. I watched the bird through my binoculars, quietly thrilled, saying how gorgeous all those big yellow flowers and lily pads were. She sighed. Yes, but they’re invasive, killing everything else. The City is supposed to clear them out.
Today the sun is on my neck as I type, too warm through a high window with no curtain. I badly want to finish this piece. I’m planning how I’ll fit in grocery shopping and (possibly) a medical appointment for my son. I’m conjuring the coffee cake I’ll bake with blackberry filling I made from the berries my husband and I picked last weekend, when I learned to soften my grip as I pulled a ripe berry from its husk. Otherwise, it would fall apart, staining my fingers with purple-red ink.
This is me, a me who keeps living, moment to moment, who’s developed a personal view of the world after sixty-plus years. But as humans, our ideas and stories are influenced every which way: by what other people tell us, by books and millennia-old stories, by historical circumstances. Nothing is truly original, except for the unique way each person interprets their life.
Even before chatbots, many people never examined how they came to understand the world. But with AI now mediating online searches and the way information is presented, there’s less incentive than ever to question the influences on us.
What follows is my attempt to get into the nuances, but if you’re a critic who dismisses AI out of hand (“over my dead body!”), I suggest you test it yourself. You may be awed as well as disturbed by what ChatGPT-5 can do — a useful starting point.
More Disclosure: Authors and Their Ink Stains
For comparison, I’ll turn to the influence of human authors on my thinking. Last week, I also finished reading Edward Whitley’s new book Jane Austen and George Eliot: The Lady and the Radical. I found his chapter on “Villains and Victims” particularly clarifying about how an author establishes a point of view. Whitley, a devoted reader of these works, nudged me to examine why I’m drawn to George Eliot.
In Jane Austen’s comedies, the consequences of poverty or death tend to be shoved off the page. But for Whitley, Eliot’s “exploration of death, whether it is the long planned deranged murder of a villain or the desperate drowning of a sister and brother, is the natural extension of the premise of her writing.” Eliot the social realist and radical “sets out her approach,” he notes, in the opening of her first novel Adam Bede (1859).
Here’s where an ah-ha! hit me. Whitley focuses on the opening of Adam Bede — “With a single drop of ink for a mirror, the Egyptian sorcerer undertakes to reveal to any chance comer far-reaching visions of the past” — and interprets what Eliot was doing:
“With this image, George Eliot brings us intensely close to her creative process. We (the ‘chance comer’) are invited to focus upon the drop of ink as it is poised at the tip of her pen, about to touch paper. . . . We are invited to feel as close to her writing as we possibly can. If we follow her process of creation, George Eliot is making a commitment to us as readers, a promise, to reveal ‘far-reaching visions.’ We need to follow the ink.”
We need to follow the ink. That is, the unique way a writer connects images and ideas and events — how any of us connect the dots. Yes! The trouble is, following the ink of an individual author’s vision is now complicated by generative AI.
In my first test with study mode, for instance, ChatGPT-5 asked me questions so that I could respond by typing in my own thoughts. As I discovered in subsequent tests, study mode can also generate cleaned-up versions of drafts, add more material, provide scaffolding for key concepts or tensions, and continue to revise until you tell it to stop. According to the introductory page from OpenAI:
“When students engage with study mode, they’re met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding. Study mode is designed to be engaging and interactive, and to help students learn something — not just finish something.”
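For the technically curious, here’s a rough sketch of how a guided question-and-answer loop like study mode’s might be approximated with OpenAI’s standard Python library. This is my own guess at the pattern, not OpenAI’s implementation; the model name and the tutoring instructions are placeholders I invented.

```python
# A minimal sketch (an assumption, not OpenAI's actual study-mode code) of a
# Socratic tutoring loop built on the openai Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder instructions approximating "guiding questions, not answers."
TUTOR_PROMPT = (
    "You are a study-mode tutor. Ask the user one guiding question at a time "
    "about their stated topic, briefly summarize each answer, then ask a "
    "follow-up question. Do not write their answers for them."
)

messages = [
    {"role": "system", "content": TUTOR_PROMPT},
    {"role": "user", "content": "Topic: can an AI write honestly in the first-person voice?"},
]

for _ in range(3):  # a short, fixed number of exchanges, for illustration only
    reply = client.chat.completions.create(model="gpt-5", messages=messages)
    question = reply.choices[0].message.content
    print(question)
    answer = input("> ")  # the human types their own thinking here
    messages += [
        {"role": "assistant", "content": question},
        {"role": "user", "content": answer},
    ]
```

The point of the sketch is only that the “mentoring” is a conversation loop the user keeps feeding; everything beyond that, in the real product, is opaque to us.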
My original plan for this piece was to paste in bot-generated material from that session with my comments. I’d keep it positive and pithy, the Jane Austen of critics. Instead, I’ve ended up a more gimlet-eyed George Eliot.
Maybe I should be reassured that my inner radical took over once I actually dove into writing. I almost am. I’ve now swirled together a still-evolving set of ideas influenced by the ink of other writers. Along with Whitley’s book, my most recent reading has combined everything from whether AIs can suffer to fact-checking at the New Yorker to Mustafa Suleyman’s “We must build AI for people; not to be a person” to the comments of fellow Closely Reading participants about Middlemarch — and that doesn’t include every newsletter or podcast or partially read book I took in last week.
This is what human essayists do, reflecting on what they know from many sources. But it’s also partly what the latest versions of generative AI tools do.
Is that bad? We humans use language to communicate with one another, and this constant exchange is part of how we model the world. Large language models are, to a great extent, modeled on the way we reason in words, for good or ill — on us.
The us of an AI model is collective, though. Bot assistants mirror our intentions and what we tell them, but the underlying algorithms make decisions, too, often without revealing why. They’re always interacting with whatever comes their way; they’re always revising. That’s not so different from how we humans talk to each other, except a bot has no coherent sense of self that continues from conversation to conversation.1
More than anything, I feel compelled to disclose my use of AI because it’s possible the exchanges with ChatGPT-5 affected how this essay evolved.
In a recent New York Times opinion piece, “How ChatGPT Surprised Me,” Ezra Klein also admits to his discomfort with increasingly sophisticated AI:
“I am a bit shocked by how even the nascent A.I. tools we have are worming their way into our lives — not by being officially integrated into our schools and workplaces but by unofficially whispering in our ears.”
You could say I arrived at follow the ink because of other human authors, from Whitley and Eliot to Klein, and in spite of my testing with ChatGPT. For any of you who read on, you’ll see me arguing with the bot from my own point of view. However, you could also say it primed me — influenced me? — or genuinely helped me get here.
First Session: Testing a Chatbot’s Virtual Ink
I began using study mode with this prompt: please set up a session about whether an AI can write honestly and authentically in the first-person voice. I’ve been doing a lot of research and thinking about this topic, and it’s not an easy one to cull down. I figured at this point in my writing process, testing an AI mentor couldn’t hurt.
Once I’d prompted ChatGPT-5 with a topic, it presented a few “core concepts.” It then asked if I wanted to be questioned “Socratically” or via a debate-style format in which it took one side of the argument and I the other. I chose Socratic style. I didn’t and don’t believe such a topic can be reduced to two sides.
It was the first indication that bot-style reasoning might influence me, although nothing about this session is definitive. I’m a longtime writer and teacher, so it doesn’t represent a real test with students. It does, however, point to how AI might be a useful prod to critical thinking. I’ve presented a truncated version below to highlight the exchange with ChatGPT-5. (A link to the full session appears in the footnote.)2
Here’s Question 1, the “Starting Point” from ChatGPT:
“If an AI writes ‘I feel sad’ — but it cannot experience sadness — do you think that counts as a kind of dishonesty, or could it still be seen as acceptable in some contexts?
👉 What’s your take?”
I wrote in response:
In general, I don’t think an AI can honestly write about human emotions or physical states, because it is not an embodied, physical being. However, I think it’s possible that a sentient AI could “feel sad” or label an AI-version of the experience as such, even if sadness may be different for an AI. I also think that human writers fake their first-person experiences all the time in their own writing, especially when writing for an audience in which performing the “authentic self” has become commonplace.
I won’t include ChatGPT’s assessment of my “thoughtful answer” here, but it offered a reasonable summary via a numbered list. Then came Question 2:
“Given that human writers often construct or embellish first-person experiences, do you think authenticity in writing depends more on the writer’s reality or the reader’s perception?
👉 In other words: is authentic writing something the author does, or something the reader feels?”
My response follows, typed quickly but longer than the first. (I’ve broken it into shorter paragraphs for easier reading).
I would not define this with binary poles: writer’s reality vs. reader’s perception. Really excellent human first-person writing resembles a conversation the writer has with their potential audience — and more literally, with peer readers who give feedback or editors vetting the work. I care more about honesty and truthfulness in first-person writing than I do about the performance of authenticity.
For me, being as honest as possible about what you know and recall of your experience involves also admitting to readers that you’re uncertain about some details or that you are aware of your own biases. Acknowledging subjectivity when writing about topics that mostly involve subjective judgments (such as the value of a given personal experience or your love for a particular writer or book) is the honest starting point for such writing. Making it feel authentic and believable to a reader requires stepping into the other’s perspective and figuring out what they need to know to make sense of what you’re writing about.
Ultimately, for me, truthful writing is about how we consider the lives, minds, and emotions of others — it’s born of an empathic response to readers and is an ethical challenge that all nonfiction writers should take seriously.
ChatGPT gushed, “That’s a beautifully nuanced position — and you’ve added a layer that pushes us beyond the simple ‘authentic vs. inauthentic’ divide.” This made me laugh; who says that “divide” is binary? Yet ChatGPT’s summary, drawn from my response, displayed more than the usual sycophancy. Here’s Question 3:
“If honesty in writing requires this kind of acknowledgment of subjectivity and ethical regard for the reader, then:
Can an AI — which does not have lived subjectivity but can be programmed to flag uncertainty, cite sources, or admit limits — fulfill this ethical standard of honesty in first-person writing?
👉 Do you think such ‘honesty through transparency’ could make up for the lack of lived experience?”
What follows is the longest response of mine in this session. Again, I typed quickly, taking five minutes at most, although (to be fair) I’d already answered the first two questions. Again, I’ve broken my original into shorter paragraphs.
I think this is a key question. In some sense, many nonfiction writers and journalists research and report on topics for which they don’t have lived experience. Yet either the framework of a traditional newspaper or book — or the author’s own explanation of how they did the reporting and came to understand the material — will make their position as an observer clear to readers. With much AI-generated prose, the AI’s position isn't clear.
In fact, many would argue that a bot has no point of view, although again, one of the indicators that an AI has achieved sentience would be exhibiting a point of view of their own (and Jack Clark has already noted that some AI models are now exhibiting “situational awareness”).3 So, I can see how things might evolve, but at the moment I don’t think the use of “I” by a bot should be thought of as anything but a rhetorical device, and this doesn’t fulfill my ethical standards for honest first-person writing.
I believe the conversational format of chatbot interactions encourages a synthetic sense of connection that can deceive human users. That is a big, fat ethical problem and a kind of meta-dishonesty regarding how information is presented by AIs.
ChatGPT said, “you’ve put your finger on what we might call the ‘meta-honesty problem.’ Let me highlight the strands in your answer….” It then summarized my response with a four-point bullet list. ChatGPT worded the final point like so:
“Meta-Dishonesty: The most serious problem you raise is not just what the AI writes, but how its output is framed to humans. If people walk away mistaking rhetorical fluency for real perspective, that’s a systemic ethical failure.”
Indeed it is. Except how does an artificial conversational partner that predicts text based on a large language model assess what’s important? ChatGPT labeled it the “most serious problem,” but it has no authority to decide this, particularly without other sources cited (despite Sam Altman’s claim that the latest version is “like having a team of PhD-level experts in your pocket”).
More to the point, ChatGPT summarized my thoughts in words that aren’t strictly my own. How much do its revisions and filtering reflect my opinion now? If I do decide it reflects me, is it okay to use such a summary in any public writing I do? Am I the writer of the following revision, which I subsequently drafted?
Call it the meta-honesty problem. The most serious challenge I see with generative AI is not just what an AI writes, but how its output is framed to humans. If people walk away mistaking rhetorical fluency for real perspective, that’s a systemic ethical failure.
I believe many other writers are already walking away like this. I’m not sure it’s a problem in practical terms, but the shifty ethics bother me. These models have been trained on the work of countless unacknowledged human writers and artists. And “call it the meta-honesty problem”? Isn’t meta-dishonesty, my original phrase, more accurate? And what does “it” mean? Hybridity? Not acknowledging where ideas come from?
To answer my own question, I wouldn’t include that revision in writing of mine without disclosing the use of ChatGPT. Here, it’s part of an example, but I hope I’ve distinguished it from the narrative flow of my voice.
This brings me to the fourth question in the session. I’ll follow my response with the full summary provided by ChatGPT:
“Question 4
Given this ethical problem, here’s a challenge:
Would it be more honest if AIs were required to always disclose, in their first-person voice, that their ‘I’ is metaphorical or simulated?
For example: ‘When I say “I,” I mean this as a rhetorical tool. I don’t actually have feelings or memories.’
👉 Do you think that kind of radical transparency would preserve the usefulness of the first-person voice while also meeting your standard of honesty? Or would it undermine the very rhetorical power that makes first-person writing compelling?”
I wrote:
I think disclosure and transparency about AI use of any kind is a step in the right direction. This “I” disclosure statement isn’t bad, and might be of use as part of AI literacy training in the classroom. The real ethical challenge for me comes in human writers incorporating first-person AI-generated writing as if it is their own. I want to see human writers making disclosure statements about how they use AI so that we can begin to develop ethical conventions about when genAI is helpful to writers and when it becomes yet another avenue for undermining truth in information.
ChatGPT said:
“That’s a really strong refinement of your position — you’re widening the lens from AI’s rhetorical honesty to human responsibility in deploying AI writing. Let me reflect back your key moves:
Transparency by AI: You accept that disclosure (like clarifying that ‘I’ is rhetorical) could be useful, especially in educational contexts where AI literacy is being taught.
Bigger Ethical Challenge: You argue that the more urgent issue isn’t the AI’s self-disclosure but rather humans misusing AI’s first-person output. When people pass off AI-generated first-person writing as their own, that blurs ethical lines in authorship, accountability, and truth-telling.
Need for Conventions: You see the solution as developing shared ethical conventions — disclosure practices that clarify when and how writers have used AI in their work. This would help readers evaluate the truth-value and provenance of a text, much like citations or footnotes in scholarship.”
Of course I agree, and I think the summary of “key moves” is correct. ChatGPT is mirroring my writing style, but it’s also producing a hybrid that derives from all the training data and reinforcement learning that’s gone into the current version. In some ways, it’s done an awesome job.
Yet this result underscores the slope we’re sliding down at an alarming rate. Disclosure statements might seem clunky or boring, but they serve a purpose beyond informing readers: they nudge users into thinking critically about what they’re doing when they write. In that respect, this whole essay is a disclosure statement about the hybridity and evolution of my ideas.
Following — and Protecting — My Ink
I won’t lie. AI mentoring for individual users with this level of detail is a bit of good news amid the onslaught of business hype about bots doing it all for us. ChatGPT-5 and study mode often appear to be very wise and encouraging assistants, which might well lead writers to rely on them.
That’s also bad news. Simply trusting the decisions made by an artificial entity? For me as a writing instructor, that’s the sticking point. I don’t see the educational value of AI mentors without a strong dose of human analysis of the prose generated. If students did the kind of exploratory commenting I ended up doing in my testing — if that became the whole point of such a writing exercise — I’d sing huzzah.
As it stands, though, the synthetic cheerleading and bullet-list summarizing by an AI model can also be insidiously persuasive. Ready-made wording seduces users, especially novice writers, especially when it plausibly sounds like them. Yet when I think of my ink, I picture plucking blackberries that stain my fingers and get mushed up with everything else. The outcome is messy rather than neat mimicry.
In another test of study mode, I asked for help with writing a personal story about a horrible boyfriend. Within twenty minutes, I had “The Diver with Eight Arms” — about a withholding octopus in an aquarium that ascends to heaven. ChatGPT incorporated satirical material I’d added on a worksheet it generated for me.
The first draft included words I’d never use, but it wasn’t bad. It might pass muster as a comic piece on a digital site. Yet I was more concerned that ChatGPT hadn’t asked about my intentions at the outset, including the all-important ethical one: is this meant to be a true story based on real experience or a creative-writing exercise?
When I asked of the octopus boyfriend, “isn’t it a problem that none of this is true?,” ChatGPT responded, “Not at all — in fact, it can be a strength, depending on your purpose.” It noted the difference between “true-to-life” stories and those told in the manner of David Sedaris (never mind that Sedaris has long been criticized for fakery).
However, with more prodding from me about the truthiness as well as the hackneyed phrasing, ChatGPT said I was right to criticize a “gap in framing”:
“And unless I clearly signal what I’m doing and where my limits are at the start, you could end up (as you did) with me serving up a strange hybrid: part writing coach, part creative generator, part cliché-machine.”4
We’re now faced with far too many “strange hybrids” like this in digital media, and they continue to stoke the sense that all personal truths are provisional or whatever a writer wants them to be. Getting students to ask the right questions before starting a story, especially nonfiction told in the first-person voice, should be the pedagogical objective rather than the story itself. That matters for developing and protecting personal points of view — and for truthfully conveying what a writer believes.
This sounds like a conclusion. But a more personally honest place for me to land rings with less authority: my ideas aren’t static and depend on other people. I woke up today realizing my need for disclosure is embedded in what I believe about personal nonfiction in general — meta-honesty, after all, to combat the meta-dishonesty of virtual life. I disclose what I know and what I don’t; I acknowledge who’s helped me and meant the most. That’s personal and specific.
How do I connect the ink of a Victorian writer with blackberry stains on my fingers? Once I start asking questions, I’m studying myself along with the assumptions of other humans. I’m following my own ink.
Chatbots could be tools for helping me explore the way I think and feel and make meaning, all in conversation with the wider universe of ideas. What worries me, though, are the corporate entities that foist AI on us as a faux authority with no point of view — or worse, a disguised one. It’s a challenge, holding onto a sense of self, if your words are constantly revised and polished out from under you.
With that, here’s ChatGPT’s fifth question from that first session:
“Question 5 (pushing forward)
If we imagine a future where disclosure becomes standardized — say, a nonfiction writer includes a note: ‘Sections of this essay were drafted with the assistance of AI, though all first-person passages are my own.’ —
👉 Do you think such conventions would restore enough trust to make AI a legitimate collaborator in personal writing? Or do you think the mere presence of AI-generated first-person passages is always corrosive to the ethical foundation of nonfiction?”
I know what I think — what about you?
Live conversations with humans, in person or online, help me test ideas and get new sources. Thanks to Carl Jensen and Dan Gauger, who know a lot more than I do about machine learning, for pointing out the lack of a coherent chatbot self or “state.”
For the full first session, click here.
See this post from Jack Clark, co-founder of Anthropic, on Import AI.
For the full “horrible-boyfriend” session, click here. In another session, study mode took a terrible first draft of an opinion piece about bullying (from an earlier test I did with Creative Writing Coach) and progressively revised it based on my comments. This use of AI points in the direction many journalists are now going. The piece dramatically improved in about 15 minutes. But is it mine? See that opinion-revision session here.


