Hari Kunzru is right to point out that “the ‘can machines do creative writing’ thing is mostly a distraction from the use of the machines to go through text and images to cancel grants and put people on deportation lists.” So the best way to understand OpenAI’s recent claims to have trained a new model that, according to CEO Sam Altman, is “good at creative writing” and “gets the vibe” of “metafiction” is that the company is running interference for the authoritarians using similar technology to automate surveillance, circumvent human scruples, and do away with due process.
It’s no surprise that the example that Altman provided in an X post was terrible (“We spoke—or whatever verb applies when one party is an aggregate of human phrasing and the other is bruised silence”), but that is because he and his collaborators undoubtedly have terrible taste. With sufficient massaging and competent editorial judgment, an LLM could probably be used to generate passable, plausible fiction of whatever sort one wanted, though this would obviously reveal nothing about the machine’s talent or sensibility (it’s not sentient) and would be interesting only for the insights it afforded into the people who iterated on the prompts and, most important, decided to share the generated output with others. It’s not “machinic creativity” so much as found poetry on demand. That this seems oxymoronic and self-negating is perhaps indicative of its intrinsically limited appeal.
In a recent paper, Henry Farrell, Alison Gopnik, Cosma Shalizi, and James Evans make the case that LLMs are a “cultural technology” that can “abstract a very large body of human culture” and mediate that culture to users in novel ways. As Farrell summarized in this thread, generative models “take incomprehensibly enormous bodies of textual and visual information, and create imperfect but much more manipulable statistical summarizations of it, that can be used at scale to summarize, rephrase and remix.” But an LLM remains a “social technology”:
its uses involve human relationships mediated through technology. When someone uses ChatGPT to craft a resume, they are engaged via an LLM in a social relationship with all the other humans who wrote the resumes etc. in [the] training set and also with the humans who trained the LLM through reinforcement learning, or generated the synthetic questions that were used for [reinforcement learning]. Like Soylent Green, or more prosaically, the vast swathes of culture it summarizes, it’s all made out of humans!
To produce machine-generated fiction is likewise to enter an extremely distant and highly mediated relationship with “fiction writers” as well as the beleaguered contract workers paid to evaluate samples of generated text in accordance with various contrived and deaestheticizing rubrics. Whatever unique conditions provided a creative impulse or a compelling moment of recognition to a particular person in a particular instance are averaged away so that the user can play around with the various forms in which creativity has appeared and resonated with others — i.e. one can toy with the “vibe” of creativity without any inspiration or understanding of how it works. Then, once the user shares some results of this process with someone else, the text is recontextualized by that gesture, and the user effectively becomes its author.
In his recent book-length poem Context Collapse, which examines how media technologies reshape the kind of poetry that can be made, Ryan Ruby offers this paraphrase/extrapolation of a 1966 paper by computational linguist Margaret Masterman about “toy models of language”:
Are two identical sentences, the first produced by a human being and the second by machine intelligence in fact indiscernible? Answer: No … if two subsequent conditions obtain. (1) The recipient of those sentences is human (the criterion of care) and (2) said human does not know whether the sentence was produced by a human or a machine (the criterion of context). If the same sentence can have different valences — whether semantic or perlocutionary — in different contexts of production or reception, it follows that this will also be true in the noncontext of a machine-produced sentence, or when awareness transforms that noncontext into a paratext which provides a frame for interpreting and understanding it.
I would paraphrase that paraphrase as: machine-generated texts are meaningless in and of themselves without the context of human exchange, regardless of whether they can be syntactically parsed. The meaning of communication, the value of what is communicated, depends, as Farrell puts it, on the “human relationships mediated through technology” and not on the technology itself, whether that is a pile of circuit boards or a pile of phonemes.
The “scene of reading” — the situation in which a text is presented to you — establishes horizons of meaning as well, and this scene will also have irreducible human relationships at its base. If some researcher tries to fool you by presenting an LLM-made text as human-made, it doesn’t matter if you take the bait: The researcher is now the writer. Max Read argues here for “close reading” of LLM output, but it seems like the wrong tool; maybe a phenomenology of prompting, or a statistical table of word frequencies, or a network analysis of Sam Altman’s tweets makes more sense. Or some sort of Franco Moretti-style “distant reading” in reverse. Maybe reader-response theory. Trying to parse LLM text closely without prioritizing the context of who made it and distributed it and why someone opts to engage with it seems pointless, like trying to understand why q comes before r.
When language circulates, the presence of another person is automatically implied; even LLM text implies human intentionality at some level. But it is maybe a mistake to think that the point of machine-generated text is to simulate the thought and feeling of another person. The point may not be to elevate our appreciation of the machine’s creativity to a human level, but to allow a human to degrade their encounters with language to something more predictably mechanical.
A critic on Bluesky pointed out that entrepreneurs like Altman “fail to understand that creative writing isn’t a slop bucket that needs refilling by any means possible. The reason creative writing is beloved is because it gives us insight into the thoughts and imaginations of fellow humans, not homogenized and plagiarized slurry.”
That’s probably true, but it may be that people are sometimes seeking an escape from “the thoughts and imaginations of fellow humans” because they are threatened or inconvenienced by having to take other people into account, and what they want is an endless stream of content that negates human creativity and frees them from having to live up to it. In other words, LLMs promise to turn language — ordinarily polyphonous and uncontrollable and irreducibly social — into something more like machine gambling, a solipsistic flow experience that provides an illusion of control by impoverishing the range of experience and reducing it to refilling the slop bucket, over and over again.
But how can you refer to the enormity of all that training data — the curriculum (admittedly always partial and problematic — but still, the sheer enormity of it!! of digitized humanity!!) — as slop?! Slop that is made up of “the thoughts and imaginations of fellow humans”? The algorithmic pathways across this vast data return these figurations of ourselves back to us… if you just thought about that alone for a moment, without having to champion *human* creativity against the threat of automated creation… is there a little room to be kind of moved and amazed by it? Even if the output is a shitty line of poetry? Think of all the horrible lines of poetry that are written by human beings, that are maybe also part of the collective effort of the beautiful lines also written by human beings… I just wonder if individual humans can claim sole authorship of their ideas. Haven’t we always been sharing intelligence with each other, the things we have read, the data of our lives, the flight path of birds? Thank you for the provocations.