Mechanical Ghosts
How I'm feeling about the invasion of LLMs

Ghostwriting has been, for some time, one of the highest-paying jobs in the literary world. Journalists, novelists, and screenwriters often pick up gigs to finance the reporting, research, and writing they actually want to do. Others do it full time, supporting themselves and their families with their talent, the way John Cheever once paid his mortgage on short story commissions from The New Yorker. Sometimes ghostwriters are credited, but often they are not. However, in some books—say, a celebrity memoir or business title—you may sense the presence of a generic and somewhat disinterested intelligence organizing the work. That’s the ghostwriter, doing their work, nudging the language toward greater coherence and accessibility.
Now the ghost is an LLM. Probably Claude. When I was working on my last novel We Were Pretending, I read a lot about how AI was being used in mental health fields. I was doing most of this research around 2019–2021, before ChatGPT went public. I had a vague sense that LLMs were coming and guessed correctly that people would use them in a therapeutic way to talk about worries and troubles. I suppose I also understood that LLMs would be used for PowerPoint presentations, agendas, and other types of business writing. But it honestly never occurred to me that people would want to use an LLM for creative writing and personal correspondence. Nor did I realize that tech companies would steal writing—including my own books and probably anything I’ve ever put on the internet—to train these mechanical ghosts. In retrospect, I feel very naïve.
My husband Mike, who works with corporations to organize teams, has been learning how to work with an agentic version of Claude. He tells me it’s not actually that easy to get the machine to be helpful, likening the experience to learning to drive and repair a bespoke car. I observe a kind of wired, anxious energy emanating from Mike after sessions with Claude. It’s not Mike’s usual vibe, and it’s made me a little skeptical of Claude’s influence. I was also annoyed to learn that Claude seems to be exactly as good as I am at coming up with words and concepts. I know this because Mike asked me for some alternate phrases for something he’s writing for work. The two I offered after a minute or so of thought were exactly what Claude had suggested—except Claude didn’t have to think about it. (But, also, I did not consume massive amounts of fossil fuels while my brain whirred.)
I haven’t tried using any LLMs or agentic AIs. I prefer to use my puny mind. I’ve tried to divest from AI tools wherever possible, even though I know it’s kind of pointless. I turned them off in Microsoft Word and unsubscribed from my annual plan, so that next year I will be forced to find something new. I turned off AI training in Substack[1], and I stopped using Google search a few years ago, switching to DuckDuckGo, which allows you to turn off those AI summaries that appear at the top of the page after you enter a query. I turned off all AI tools in Gmail, which brought a flood of emails into my primary inbox; I’ve been slowly unsubscribing from them. I might just declare bankruptcy and start fresh in Proton Mail. (Who knew I got so many marketing emails? And what is the point of these emails, some of them likely sent by AI assistants, if they are only going to be filtered by AI?)
I see a lot of anger toward LLMs in the literary world. And I’ve felt it myself, if that’s not already obvious. But lately, I’ve been noticing a kind of bewilderment setting in. A couple of weeks ago I was in Amherst for Litfest, and after a day of celebrating literary fiction and nonfiction—the kind of writing that is distinctive because it comes from one person in particular—I retired to a speakeasy-style bar with friends, where talk turned to LLMs. It felt inevitable that we would discuss their invasion. We talked about how the self-published fiction market was already crowded with AI-authored novels, and we wondered if any AI-assisted works would come out of the big publishing houses. The publishing world uses contracts, but it’s basically an honor system: authors attest that their work is their own, and nonfiction writers do their own fact-checking and citations. It’s up to editors to flag anything that seems bogus—but sometimes AI-authored works are published anyway. Just this week The New York Times reported that Hachette pulled a new thriller off the market [gift link] after discovering that the book was likely written with the assistance of a chatbot.
In the dark bar, we debated whether, in the future, we would be able to tell if a literary work was authored by an LLM. We all felt that we could sniff them out now but wondered if this would always be so. And even if it remained so—that we could spot the human fingerprint on prose—would future generations be as adept? More to the point, if AI-assisted writing continued to improve, would future generations care?
For those who read my post last week, these questions are similar to ones I had at AWP, after I attended a panel about the future of book reviewing. After hearing the panelists extol the value of professional reviewing—a value I share—I was left with the uneasy feeling that book reviewing was on its last legs. A lot of readers seem content with online user reviews, influencer content, marketing copy, and AI summaries of all of the above. That doesn’t mean I’m going to stop writing or reading in-depth reviews, but it does leave me pondering very human questions, the kind that I assume LLMs do not ask of themselves, such as: Does anyone care about the things I care about? Does what I do have any value? Am I totally irrational?[2]
You can drive yourself crazy, speculating about what is going to happen next. I think I worry most about a completely hollowed-out publishing process where a book is shepherded through every stage of production by AI agents. I value the feedback from my agent and from editors I’ve worked with, not only because their edits are helpful and make my writing stronger but because—and I can’t believe I need to write this—they are human beings. Will this be something I need to specify, in the future—that I prefer human editors? It seems unlikely that publishers will go full robot, given the importance of relationships in the industry, but I honestly don’t know. We are living in very fast-moving times.
A friend who made a good living as a ghostwriter got out of the business a couple of years ago when he saw what ChatGPT could do. Another friend told me that someone in her writing group recently shared developmental editing notes from an LLM and wanted the group to respond to the LLM’s suggestions. I would be annoyed if someone in my writing group did this, but is this going to become normalized? Will some writers not even bother with a writing group and instead construct four LLM editors with different personalities who will then review the story and immediately provide feedback? Will specialized editorial LLMs arrive in the marketplace for authors to use before they query agents? Will this layer of robot/LLM editorial review become expected, part of an author’s due diligence before submission? Or, conversely, will publishing houses try to distinguish themselves from bland, AI-authored content by investing in original, human-made writing? (I hope so.)
Writing this post has not been especially uplifting, but I wanted to record my thoughts on the subject at this time—March 2026—to remember this apprehensive mood, which I don’t think I am alone in feeling. After two weekends in a row among writers and editors, I sense that the publishing world is spooked. We don’t know what’s coming, exactly, but given what tech advances did to newspapers and magazines, we have no reason to think that anything on the horizon will be in the best interest of writers, readers, and human beings, generally.
[1] There’s a button in settings that you can toggle to indicate you don’t want your posts to be used as training data.
[2] “It is through poetic and irrational means that the unseen world of your story gets radically illuminated.”—Mary Gaitskill