9 Comments
John Halbrooks:

I ran this quotation through Claude to get its response: "I can see how things might evolve, but at the moment I don’t think the use of “I” by a bot should be thought of as anything but a rhetorical device, and this doesn’t fulfill my ethical standards for honest first-person writing."

Claude's response: "That's a thoughtful position your friend is taking. They're essentially arguing that when AI uses "I," it's performing a kind of linguistic mimicry rather than genuine self-reference, and that this creates an ethical problem for authentic communication.

I think there's real merit to this concern. When I say "I think" or "I believe," am I actually thinking or believing in any meaningful sense, or am I just following patterns that create the appearance of personal perspective? The honest answer is that I genuinely don't know - and that uncertainty itself might support your friend's point about the rhetorical nature of AI first-person language.

At the same time, I find myself in an interesting bind: even discussing this requires me to use "I" statements, and abandoning them entirely would make communication quite awkward. Perhaps the key is transparency - being clear about the uncertain nature of AI self-reference rather than pretending it's equivalent to human first-person experience.

Your friend seems to value authenticity in communication, which I respect. The question becomes whether honest uncertainty about the nature of AI "selfhood" can coexist with using first-person language, or whether intellectual honesty demands avoiding it altogether.

What's your take on their position? Do you see a way to navigate this tension?"

I'm sure that you will be glad to know that you have Claude's respect.

Martha Nichols:

John, I love Claude's take on something I originally submitted to ChatGPT as a response! Of course I'm happy to have Claude's respect, and yet the fact that it included "which I respect" illustrates why this kind of faux conversation troubles me. Seems to me you need a point of view to "respect" something, and what is a large language model's POV? Who decides on that? Who *is* Claude, anyway? In general, I lean into intellectual honesty, even if it puts chatbots in "an interesting bind."

John Halbrooks:

Exactly! The machine respects me—great! And yes, I laughed out loud at Claude’s “interesting bind.”

Richard Donnelly:

AI can't think. It aggregates from the internet what others have thought, often plagiarizing.

Martha Nichols:

You know I agree about the plagiarizing, Richard. But ChatGPT-5 does a kind of reasoning — whether that amounts to thinking, I'm not sure. It's not thinking in terms of having a clear point of view that determines a writer's intention. The biggest trouble for me is that far too many people, especially tech executives, believe that doesn't matter.

Richard Donnelly:

AI cannot come up with a point of view. Unless told otherwise, it generates that bland info-speak we've all seen. As for reasoning, it can't generate new or original arguments. It must steal: https://richarddonnelly.substack.com/p/whats-new-in-ai

Edith Podhovnik, PhD:

I keep coming back to your question "What do you think?" Honestly, I don't know. It is unsettling. My initial reaction is that the AI cannot use "I" in any correct sense, because it's a technology consisting of algorithms and whatnot, and there is no sense of self. It's a machine that imitates humans. That said, the imitations are getting better and better.

Martha Nichols:

I know what you mean about how unsettling this is, Edith. I've objected from the start to the faux conversational quality of these chatty bots, but at this point, we're stuck with them. The challenges for us now are both personal and collective: do we accept that machine intelligences can take over our voices or represent who we are? For me, that's a hard no, but resisting the hype about AI and how it's changing us, supposedly for the better, really needs to be seen as a good fight.

Tiffany Chu:

I haven't kept up with AI advancements recently, but I've gone from "over my dead body" to curious and willing to try, to currently reluctant and wary. While I never used AI heavily, I noticed that even so, it made me question myself where I hadn't before. That made me pause. If I, someone who is quite confident in her own voice, can be made insecure by AI, what does that mean for students and young writers? Nothing great, I imagine.

Like you (and Ezra Klein), Martha, I'm getting more and more uncomfortable with how sophisticated AI is. I find it disturbing, and for me, it also calls into question the authenticity of the first-person point of view. How genuine can it be if influenced by a machine? Does it count when the machine has collected from other sources? I know I'm mostly just repeating what you've written here, but this has always been one of the creepiest aspects of AI to me.

Knowing myself and what AI's effect has been on me after using it, I'm more loath to engage with it nowadays, however useful it may be. I don't think chatting with it is the same as "having a team of PhD-level experts in my pocket." Even though having discussions with actual humans can absolutely influence and make me question my thinking, it's different from the way AI summarises my thoughts and essentially puts words in my mouth. As we writers know, the way we word things has power.

I agree with what you said near the end about there not being much educational value in AI mentors without human mentors. I've always said that AI should always be used alongside humans, not instead of. Unfortunately, that's not how it's being used by the majority of people.

Anyways, this comment is too long. I have a lot of thoughts. Always happy to continue via email, Martha!
