Sentences that no one wrote
On "A.I." and humanity
Today Keith Plocek writes about Kurt Vonnegut's Breakfast of Champions, solipsism, and the permission structure "A.I." gives us to doubt and devalue one another's humanity.
"Lately I have found myself wondering more and more, however, if I’m talking to a machine, particularly when dealing with the written word," he writes.
"I teach journalism at the University of Southern California, and I enjoy working with students on their writing, but I get a pain in my gut, and my heart, imagining how much time I have spent in the past couple years, and will continue spending, on suggesting edits for sentences my students did not write. Sentences that, in fact, no one wrote."
Read this one by me about "A.I." if you haven't.

This probably applies to all of you. People in possession of human souls who have at least on occasion felt the divine in a work of art. Everything I write myself and everything I love to read or listen to or watch has one bedrock component to it which is this:
Jesus Christ I am alive right now and you are alive right now and someday we will not be but for the duration of this we are both stupidly and beautifully alive.
Or this one.

There was an article in the Times yesterday about how many students and teachers alike are using “A.I.” to both complete and grade assignments.
“Writing is one of the most challenging tasks for students, which is why it is so tempting for some to ask A.I. to do it for them. In turn, A.I. can be useful for teachers who would like to assign more writing, but are limited in their time to grade it.”
I can’t come up with the correct metaphor for how that makes me feel. I already used the thing about maggots earlier.
For some reason I keep thinking about an inert sex doll using a dildo. No, not in a horny way.
Technically a kind of sex is happening there, right? But it sort of removes the main point of the enterprise.
Sentences that no one wrote
by Keith Plocek
Dwayne Hoover goes berserk. He is a car dealer. A local celebrity in Midland City, Ohio. A regular successful guy. Then he comes across an idea so dangerous that it breaks his brain:
“Everybody on Earth was a robot, with one exception — Dwayne Hoover.”
Hoover was already suffering from hallucinations, but the idea that he was the only thinking and feeling person on the planet was just too much, and so he started punching.
Hoover, it should be said, is a fictional character in the Kurt Vonnegut novel Breakfast of Champions, but Hoover doesn’t know that, which is good, because he already has enough problems. Hoover’s main delusion — that he is the only real person in a world full of automatons — is called solipsism, and it has a long philosophical history, going back at a minimum to René Descartes, who tried to doubt everything but eventually came around to the idea that he at least existed. (Descartes also tortured dogs and cats, so sure was he that they didn’t have emotions. Real great guy.)
I do not suffer from solipsism. I care, for better and worse, what other people think.
Lately I have found myself wondering more and more, however, if I’m talking to a machine, particularly when dealing with the written word. I teach journalism at the University of Southern California, and I enjoy working with students on their writing, but I get a pain in my gut, and my heart, imagining how much time I have spent in the past couple years, and will continue spending, on suggesting edits for sentences my students did not write. Sentences that, in fact, no one wrote.
Many of my students love to write (inasmuch as anyone loves to write), and I saw reflections of them in the recent story about the college student who pulled out of the running for a job at the Cleveland Plain Dealer because they disagreed with how the paper used AI to write stories. But not all journalism students are so into the process, and many in other majors care even less. I learned this when helping another department with grading final projects, where I found among their work quotes from Bloomberg, Forbes and the Wall Street Journal that fit the essays in question perfectly but could not be sourced to anywhere else online.
I had to send those projects back so the machines could try again.
I am starting to ask the question, “Human or bot?” more and more. I know I am not alone. (Like I said, solipsism isn’t my thing.)
The question has permeated the internet. If we don’t agree with what someone has written on social media, we suspect they might be a “Russian bot.” If video evidence contradicts our narrative of a war, we tell ourselves it was generated by AI. Sometimes we are right in these suspicions. Sometimes not. What bothers me most is that we now doubt each other’s very existence in ways we didn’t before. AI has given us permission to do that.
The more we suspect that each other’s words and images have been generated by machines, the less we value each other’s humanity. And the more we dehumanize each other, well, you know where this is going, because in many ways we’re already there.
In the book Dwayne Hoover ends up wreaking havoc on his small Ohio town. He attacks his son, his mistress and others. What actions will we take — have we already begun to take — when we no longer see each other as human?
It was easy to appreciate Vonnegut in high school. He questioned authority. He drew pictures of sphincters. He wrote short. It was the kind of writing that was perfect to read after a Sunday afternoon in the park smoking schwag with friends on a metal merry-go-round. Like many literary kids, I often fell behind in my AP English class because I was too caught up reading other authors instead. On those stoned Sunday evenings, it was almost always Vonnegut.
Out of all his novels, Breakfast of Champions was my favorite. It had fewer subplots about Tralfamadorians or the Dresden firebombings than many of the others, and it was super meta, with an author, Kilgore Trout, who was not the author of the book I was reading but did write the book inside the book I was reading that blew poor Dwayne Hoover’s mind.
Trout appeared in several other novels by Vonnegut, and the real author had lots of fun summarizing the fictional one’s stories. It was a trick he devised after an editor told him, “You know the trouble with science fiction? It’s much more fun to hear somebody else tell the story of a book than to read the story itself.” Vonnegut didn’t have to write Trout’s books. He just needed to tell us the summaries.
Speaking of not reading and not writing, I just asked ChatGPT to come up with a couple of Trout plots. ChatGPT is good at summarizing, right? It’s also good at making shit up. So maybe it could make up some good stories for me. The first one was a miss; it was too saccharine, with a plot line about a mysterious department in the sky where lost objects are categorized, and the most important lost object of all was humanity’s hope for a better tomorrow. The second story I barely started before my vision blurred and I closed my browser window reflexively.
What would’ve happened if I’d asked students to write Trout plots for me? How long would I have read about those lost objects in the sky before realizing something was off? Would I have even noticed at all?
In writing about (and railing against) artificial intelligence, the art historian Sonja Drimmer has returned again and again to the phrase “permission structure.” The phrase was popularized in the early 2010s by the Obama crew, and it referred to the rhetorical strategy of giving an opponent enough reasons to change their mind while also allowing them to save face. In Drimmer’s usage, the meaning seems to slide into giving someone an excuse to do what they already wanted to do, and I like that connotation.
If a newspaper publisher wants to lower headcount, they can now use AI to replace reporters while telling themselves they’re riding the wave of the future. No matter that the product will suffer. No matter that other humans will suffer too. If some DOGE shitheads want to hurt people just because, the “because” can be justified by AI; computer says no. This is the kind of ethical slippage that’s easy to spot in others, but I can feel it happening to me too.
Now that anybody can make and distribute a fake image, Drimmer writes, “this ability has given everyone a permission structure to doubt. Everyone, in other words, has been granted license to choose which images they will and will not believe, and they can elect to unsee an image simply because it doesn’t confirm their priors: the mere possibility of its algorithmic generation opens it to suspicion.”
The same permission structure goes for words, and in my case that includes anything submitted by my students that I have not actively seen them writing in class. I have permission – maybe even an excuse – to doubt it all. But I don’t want to doubt my students’ humanity. They don’t want me to doubt it either, and not just because they’re worried about grades. Many of them, writers that they are, want someone else to understand what’s inside their heads. They want to be read.
The merry-go-round in the park from my youthful Vonnegut reading days is gone. I drove through my old neighborhood a few years ago and learned the whole park had been razed. You know how old guys love to say “This used to be a field”? Well, now this place had become a field. I got out of the car and stood where the merry-go-round used to be, and I thought back on what it was like being an angsty teenager, spinning slowly, smoking schwag with friends.
One of those friends texted me in November: “Yo, I’ve curated an article using chatGPT (with ALOT of my input). Maybe you can help me edit and rewrite in human form?”
He pasted the whole article, adding that it might help take my mind off things, and by that he meant the death of my father, just two weeks gone. I felt so angry. The way you can only get angry at someone you’ve known and loved for a long time. What was this? Did he even know me at all?
I waited a while before responding, because I knew I would be screenshotting the exchange and sharing it with others, and I wanted to sound like a reasonable person. Here’s what I texted back: “Hey man. I know I’ve helped you edit some of your own writing in the past, and that is something I’m game to keep doing. Writing is thinking, and editing is thinking along with someone. They both can be very therapeutic. But I don’t really get anything from editing a machine.”
What I didn’t ask, but I do wonder, is what people who let machines write for them think they’re getting out of it.
My friend doubled down and said he hadn’t explained his process clearly. He hadn’t just asked ChatGPT to write the article, he said. There had been some back and forth. I got shittier and shorter in my next response, and I somehow texted him the screenshot I’d been planning to share with others. He later texted an apology, which I appreciated, even though his phrasing was a little too pat, maybe even artificial.
Was I texting with a human or bot at the time? Now I’ll always wonder.
Keith Plocek teaches journalism at USC Annenberg. You can find him around.

