Rae Armantrout’s short poem “The Craft Talk,” from her 2018 collection Wobble, launches in medias res into this one-sentence stanza:
So that the best thing you could do, it seemed, was climb inside the machine
that was language and feel what it wanted or was capable of doing at any
point, steering only occasionally.
Because I’ve condemned myself to interpret everything for the time being in terms of the possibilities of “generative AI,” I immediately read this not as craft advice for writing but as a description of what it could be like to prompt an LLM — a way to “climb inside the machine that was language” to see what it could do and to see if it could take over for you and at some point begin to steer itself. That is, in my initial reading, the “it” in “feel what it wanted” referred to “the machine,” not to “language,” and not even to “language” understood as a generative engine, a machine in its own right, even before it is reduced to a programming language for a separate digital machine that can be made to contain it.
In my reading, I immediately collapsed the difference between language as a machine and a machine producing language. I began to transfer some of the agency or geist that seems embedded in language — its latent capacity to have wants or intentions that express themselves through a speaker — to something that has none of those things and that supplants the speaker. It seems easy to conflate the properties of language with the capabilities of a machine that outputs language, and to attribute to that machine all the complexities that language itself manifests. But that gets things backward: Machine-generated language is stripped of the complexities that derive from a subject inhabiting it — “not like a knight in armor exactly, not like a mascot in a chicken suit,” Armantrout writes — and is flattened into a form of statistics.
I know that is a tired point, that LLMs work mathematically, as if language could be purged of what Kristeva called the “semiotic,” which grasps some of how human embodiment informs language’s potentiality, as well as how communication depends on an inarticulate stratum; on prelingual, preindividuated experience; on meaning’s ultimate elusiveness; on connotations that aren’t suggested semantically, that can’t be fully captured in concepts or data; on failure.
“Climbing inside language” points to how language constitutes subjectivity, sets up coherent limits so that we can try to comprehend ourselves (and never quite succeed), whereas prompting a machine for language can become a means of disavowing that inevitable failure, a way of trying to crawl out of the language machine ourselves. LLMs promise language seemingly purged of ambiguity, of slippage, of the “movement of the trace,” of all its self-consuming contradictions, of lack. Prompting LLMs appears to involve a use of language that isn’t intersubjective, that is purely functional and imperative and in that sense fully present. LLMs would thereby offer an invitation into the “machine zone,” where subjectivity acquires a clarity and an immediacy in becoming automatic, locking into a feedback loop that leaves no remainder.
***
“The process of writing is unpleasant and tiresome,” Boris Groys announces, with incalculable irony, at the beginning of this recent piece for e-flux. “Personally, I hate it.” It not only “leads to scoliosis” but is a “lonely activity” in which one is confronted with the futility of trying to convey what one means, with how one never really connects. “At least since Derrida it has become untenable to believe that a writer can stabilize their ‘intention,’” he writes, but with generative models, writerly intentions need not matter anymore. “From the perspective of readership, the difference between human-written and machine-written texts is completely irrelevant.”
Instead of depending on writers, readers take their place, becoming prompters. In a prompt, the problem of conveying “intention” becomes purely technical; it is no longer the irresolvable aporia inherent to social communication but appears to be a mathematical problem that can ultimately be solved. Where before a reader couldn’t “look into the brain of the author to see an initial intention there, and then compare it to the text that this intention generated,” Groys suggests that “machine-written texts make this operation possible.” But I won’t understand your intention any better just by looking at how a machine interpreted it unless I believe that the machine is infallible, that its parsing of language is final. It shows me what you really meant. In that case, the model serves as a means of eliminating ambiguity from the language of your prompt and purifying it. That intention, those words, those concepts, become a command that directs me to see what the machine produced. It is not an invitation to offer an interpretation, begin a dialogue, or develop new concepts.
With that sort of faith invested in an LLM, one can make it a mission to try to methodically close the gap between what one prompts and what the machine outputs, to purify one’s own way of thinking. The machine could train you into having a clear and distinct concept of what you want, expressed in the correct language. That is, prompting could be understood not as an effort to get a machine to produce in richer resolution something you are already imagining in a half-assed, unformed, general sort of way, but as an exchange through which one learns how the model has “solved” language for good, wrestled all its ambiguities and inferences to the ground, and fixed the meanings of all strings of words so that there can be no question of being misunderstood or of intending something that you are blind to before you articulate it. This would entail absorbing the model’s structure of language into oneself, adopting it as the contours of one’s own subjectivity, so that its connections and associations become one’s own, an automatic sort of second nature. The mascot in the chicken suit.
This raises the question of whether all prompting tends inexorably toward redundancy. Are we prompting LLMs for things we can conceptualize but can’t imagine? What can that even mean? Groys argues that people prompt LLMs in hopes that they will generate “an answer that reflects an already accumulated mass of writing better than any individual writer could.” He thus claims that “prompting takes the form of dialogue between an individual author and the zeitgeist,” as it’s been reified through the model’s processing of its mass of training data, which he characterizes as “a huge garbage pit into which every new text is thrown as merely an additional piece of garbage.” This perhaps makes us see our prompts as a kind of meta-garbage rather than as a form of garbage writing in their own right. The prompt is a trash bag, if not a suit of armor.
***
In an essay about an exhibition of art forgeries at the Courtauld Gallery in London, Rosemary Hill notes that “it is one of the peculiarities of fakes that they sometimes reveal themselves simply through the passage of time. Some of Han van Meegeren’s ‘Vermeers’, painted in the 1930s and 1940s, with their angular faces and hard shadows, now look positively Art Deco.”
Roland Meyer, extrapolating from a note by Nils Pooker, makes a similar point about generative models, which proceed according to the same logic as forgers, producing images from scratch that pander to our historically conditioned expectations of what some concept looks like. This raises the other familiar question about AI models: whether they are capable of halting history and making permanent the expectations “we” (i.e., the statistically average subject) currently have, so that no one’s idea of a Vermeer can ever begin to migrate again. In prompting for the zeitgeist as Groys describes, we help build a fence around it, capturing it as if it had already become absolute. We can use concepts without ever making them new, without ever threatening to negate them.
Hill points out that in the echelons where provenance and authenticity have economic ramifications, “technology is becoming both more useful in detecting forgeries and more trusted.” Perhaps we would all like technology to be capable of pronouncing on authenticity once and for all. It could supplant the unstable, wholly unreliable technology of language for identity. But even technology can’t seem to save us from tautology: “Describing a fake Rodin drawing, the Courtauld points out that the ‘lumpy female nude’ is ‘awkward and wooden’ and so not by Rodin,” Hill writes, “a judgment that depends on there being enough real Rodins to know that, even on an off day, he was never as bad as that.”
Generative models invite us to believe there are enough real Rodins, enough real everything, as well as enough confirmed expert opinions to validate all that realness. We don’t need any more. The models tell us that there really is a straightforward match between a concept and its depiction, that the logic of forgery is actually just the logic of cognition made more efficacious. (Are dictionary definitions forgeries of words?) The real fake would seem to be the indeterminate space in Kant’s notion of reflective judgment — the idea that we can see something that we don’t want to resolve into concepts but want instead to hold open and universalize, not as something specific but as the “beautiful,” the shared human capacity for thinking, for feeling, for having experience. Who needs that transcendental nonsense when you have the empirical garbage pit? When you go looking for beauty there, you’ll find exactly what you’d expect.