A woodworker's workbench viewed from above, with traditional hand tools like chisels, planes, and saws arranged alongside a glowing tablet and a translucent holographic AI interface. In the center, a beautifully carved wooden book lies open, its pages showing handwritten text and illustrations. Wood shavings curl around the edges, and warm golden afternoon light streams through a workshop window.

It’s a Poor Craftsman Who Blames His Tools

A craftsman’s workbench where traditional tools meet digital ones, a carved wooden book at its center. The old and new sit side by side, each essential to the work.

I was around eight years old, standing in front of my third grade class, holding a neatly printed report on Scotland. We had an IBM PCjr in the house, which made us one of the few families in the neighborhood with a computer, and I’d used its word processor to write my family heritage project. When a classmate asked how I’d made it, I told him the truth. I did it on the computer. His response was immediate: “That’s cheating. All you had to do was tell the computer to write everything about Scotland.”

That moment has stuck with me for over forty years. Not because my classmate was right, but because his assumption felt so deeply unfair. He looked at the output, saw something cleaner than what he could produce by hand, and concluded that the tool had done the thinking. The work I’d put in, the reading, the organizing, the rewriting, became invisible the moment the medium changed.

I thought about that kid this week when Hachette pulled Mia Ballard’s horror novel Shy Girl after AI detection tools flagged up to 78% of the text as machine-generated. The story has all the ingredients of a proper scandal: a buzzy debut, a viral three-hour YouTube analysis that racked up over a million views, an author claiming an editor used AI without her knowledge, and a publisher left scrambling to explain how it slipped through. I don’t know whether Ballard used AI to write her book, and this post isn’t about relitigating her specific case. What interests me is the reaction. The assumptions that surfaced, the lines people drew, and what those lines tell us about what we actually value in creative work.

We’ve Been Here Before

The outrage around AI-generated books carries an implicit premise: that the value of a book is inseparable from the human effort of writing it word by word. But we’ve never really believed that, have we?

Ghostwriting is one of the oldest traditions in publishing. The term itself was coined in 1921 by Christy Walsh, but the practice predates the modern publishing industry entirely. Ancient Greek and Roman scribes wrote speeches for public figures. In the early 1900s, Edward Stratemeyer built a literary empire by creating plot outlines for children’s book series and hiring writers to turn them into finished novels. The Nancy Drew books, beloved by generations of readers, were written by a rotating cast of ghostwriters under the pseudonym Carolyn Keene.

Today, estimates suggest that 60 to 80 percent of business and self-help nonfiction is ghostwritten or co-written. More than 80 percent of celebrity memoirs are ghostwritten. Andre Agassi’s autobiography, widely praised as one of the best sports memoirs ever written, was ghostwritten by Pulitzer Prize winner J.R. Moehringer. JFK’s Profiles in Courage won a Pulitzer Prize despite strong evidence that Ted Sorensen wrote most of it.

We know all of this, and we’ve made our peace with it. When you pick up a celebrity memoir, you don’t assume the celebrity sat alone in a cabin writing every sentence. You assume there was a collaborator, and you’re fine with it, because the ideas, the stories, the perspective still belong to the person whose name is on the cover.

So what exactly changes when the collaborator is a machine?

The obvious objection is that a ghostwriter is a human being. Moehringer spent months interviewing Agassi, made narrative choices rooted in empathy and decades of craft, brought the full weight of his own life experience to the project. Today’s language models do none of that. They have no understanding of their subject, no lived experience to draw on, no creative judgment in any meaningful sense. That’s a real difference today, and I don’t want to pretend otherwise. Whether it remains a permanent difference is a question none of us can answer yet.

But here’s what I keep coming back to: the reader’s relationship to the work is the same either way. You don’t read Agassi’s memoir for Moehringer’s prose style. You read it for Agassi’s story, told well. The question for the reader was always “is this book worth my time,” not “who exactly arranged these sentences, and what was their inner life like while they did it.” The collaborator changed. The contract between author and reader didn’t.

The Real Question

The Guardian piece on the Shy Girl controversy quotes Mor Naaman, a professor of information science at Cornell Tech, asking a question I think cuts to the heart of this: “We all work in an AI-hybrid world now. When does something become an AI-generated book, rather than just using AI like I use a spellchecker, to fix my grammar or maybe spark ideas?”

This is the right question, and I don’t think it has a clean answer. The spectrum of AI assistance in writing is wide and continuous. At one end, your word processor underlines a misspelled word and you accept the correction. Nobody calls that cheating. At the other end, someone pastes “write me a horror novel” into ChatGPT and submits whatever comes back. That feels like fraud. But between those two poles is a vast grey area where most of us who write with AI actually live.

I write extensively with AI, and in the interest of practicing what I preach, let me tell you exactly how this post was made. I read the Guardian article in my feed reader. I opened up an AI conversation, dropped in a link to the piece along with several hundred words of reaction, half-formed arguments, and personal anecdotes. Then I went back and forth with the AI to build out the structure and prose. I read and edited the draft several times. I then used the AI to find weaknesses in my own arguments, and methodically worked through each one, discussing my views and refining the piece as we went.

Every idea in this post is mine. The IBM PCjr story is mine, pulled from a memory no language model has access to. The ghostwriting parallel, the monoculture pushback, the phonograph analogy, all mine. But the process of turning that raw material into the thing you’re reading right now involved a collaboration that didn’t exist five years ago. So, is this an AI-generated blog post? I’d say no. But I couldn’t have written it this way without AI, and I don’t see any reason to pretend otherwise.

Here’s the thing that doesn’t get said enough: I would not have written this piece at all five years ago. Not because I didn’t have opinions, but because I didn’t have the time, and the tools available to me didn’t match the way I think. I’m a conversational thinker. I work through ideas by talking them out, testing them, pushing back on them. Five years ago, I would have read the Guardian story, thought “I disagree,” and moved on with my day. AI gave me a way to turn that disagreement into something I could share. It didn’t lower the quality of my writing. It made the writing possible in the first place.

I’ve written before about how AI is changing the craft of software engineering, and I see the same dynamics playing out in writing. In my open source projects, I have no problem with AI-generated code. What I care about is whether the author has tested it, understands it, and can vouch for its quality. The tool doesn’t matter. The accountability does.

Authorship in other creative work should be no different. The judgment should be reserved for the end product. We should hold authors accountable for delivering something worth reading. There were plenty of formulaic, poorly written books on the shelves before AI, and there will be more after authors embrace it. It’s not the tools. It’s the people and how they use them.

The Monoculture Was Already Here

The other argument that comes up in these conversations is that AI will drive us toward a cultural monoculture, a flattening of creative output into algorithmically averaged blandness. Naaman makes this case in the Guardian piece, warning that “AI nudges users into a bland monoculture” and that it “could never generate the truly diverse creativity of the human mind.”

I have two responses to that.

First, we’ve been driving toward monoculture for a long time, well before AI had anything to do with it. Walk down the high street of any major city in the world and tell me what brands you see. Are they really that different from London to Tokyo to São Paulo? How about the movies in the theater or the books on the international bestseller lists? Globalization and the internet have been blending and merging cultures for decades. AI isn’t the cause of our growing monoculture. It’s a reflection of it. We are all more connected today than we have ever been, and that connectivity, for all its benefits, inevitably smooths out some of the edges. Can AI accelerate that flattening? Absolutely. If millions of writers lean on the same models trained on the same corpus, the pull toward sameness is real. I don’t want to minimize that risk. But the answer is better use of the tool, not rejection of it.

And this is the more important point for writers: AI is a prediction engine. Given a sequence of text, it generates the statistically most likely next token. Left to its own devices, it will absolutely produce the most average, most expected, most median version of whatever you’re writing. That’s not a flaw in the technology. It’s a feature you have to work against. It’s up to the author to bring their own voice, their own weird obsessions, their own hard-won perspective into that conversation and make it interesting. The AI will always pull you toward the center. Your job as a writer is to pull it toward the edges.
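That pull toward the center falls straight out of how decoding works. Here is a toy sketch (invented scores, not any real model or API) of the difference between greedy decoding, which always emits the single most probable next word, and temperature sampling, which flattens the distribution enough for less expected words to surface:

```python
import math
import random

# Toy "language model": made-up scores (logits) for possible next words
# after the prompt "The night was dark and". Higher score = more expected.
logits = {"stormy": 4.0, "quiet": 2.5, "cold": 2.0, "luminous": 0.5}

def softmax(scores, temperature=1.0):
    """Convert scores to probabilities; higher temperature flattens the peak."""
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Greedy decoding: always take the statistically most likely token.
greedy_choice = max(logits, key=logits.get)
print(greedy_choice)  # "stormy", the median cliché, every single time

# Sampling at a higher temperature gives rarer words like "luminous"
# a real chance of appearing in the output.
probs = softmax(logits, temperature=2.0)
words, weights = zip(*probs.items())
sampled = random.choices(words, weights=weights, k=5)
print(sampled)
```

The greedy path is the "bland monoculture" in miniature: left alone, the model reaches for the most expected continuation, and it takes a deliberate push, whether from sampling settings or from the writer steering the draft, to get anything else.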

This is no different from any other creative tool. A guitar doesn’t make you a musician. A camera doesn’t make you a photographer. And a language model doesn’t make you a writer. But in the hands of someone who knows what they want to say and has the skill to shape the output, all of these tools can produce extraordinary work.

I keep thinking about the early days of digital photography. When digital cameras started displacing film, there was a vocal community of film purists who insisted that you weren’t a real photographer unless you were shooting on film. Digital made it too easy, they argued. You could fire off hundreds of frames and increase your odds of getting a lucky shot, rather than developing the discipline to compose and expose a single frame correctly. The process was the point, they said. The limitation was what made it art.

Sound familiar? We’re hearing the same argument about AI and writing now. The tool lowers the barrier, so the gatekeepers question whether the output counts. But digital cameras didn’t kill photography. They democratized it. They opened the craft to millions of people who would never have had access to a darkroom, and the best photographers in the world today shoot digital without anyone questioning their artistry. What matters is the image, not the medium it was captured on.

Judge the Work

There is a legitimate concern buried in this debate that I don’t want to dismiss. If AI takes over the entry-level writing work (the copywriting gigs, the formula genre fiction, the content mill assignments), where do emerging authors get their reps? If the lower rungs of the ladder disappear, do we end up with fewer masters at the top?

It’s a fair question, but it’s not a new one. When the phonograph and the player piano arrived in the early 1900s, they devastated the livelihoods of working musicians. Every restaurant and pub used to have live music because there was simply no other way to fill a room with sound. Those technologies thinned the ranks dramatically, and what followed was a greater concentration of attention on the most talented performers. They weren’t playing the pubs anymore. They were playing the concert halls. Then radio and recorded music concentrated things further still. John Philip Sousa warned in 1906 that mechanical music would be the death of the art form.

He was wrong, of course. The tipping point, in my opinion, came with the democratization of music-making and music-publishing tools. GarageBand, SoundCloud, Bandcamp, Spotify for independent artists. The pub gigs never came back, but the ability to create and distribute music became more accessible than it had ever been. The pipeline didn’t disappear. It changed shape entirely.

I think writing is headed somewhere similar. The old entry points may shrink, but AI is simultaneously creating new ones. Editing AI output, directing it, curating it, knowing how to coax something genuinely good out of a collaboration with a machine. These are new skills, and they’re the early rungs of a different ladder.

There’s a saying in woodworking: it’s a poor craftsman who blames his tools. I think we need to keep that wisdom close as this debate continues. If Shy Girl turns out to be a case of someone submitting raw machine output as their own creative work, then the problem isn’t that AI was involved. The problem is that the author didn’t do the work. But if it turns out to be something messier, a human writer who leaned on AI more than the industry is currently comfortable with, then we need to ask ourselves what exactly we’re punishing and why.

But let’s not make the same mistake my third grade classmate made. Let’s not look at the tool and assume it did all the thinking. The question isn’t whether an author used AI. The question is whether they wrote something worth reading.
