Anything I might write in response to OpenAI’s release of its ChatGPT chatbot for public beta testing is likely to come across as something that a large language model would itself generate. The matrix of positions, the arguments and specific phrases, have certainly appeared frequently enough in the training material to be reproduced with as many novel tweaks and wrinkles as you could want: A critique of AI, but in the style of a Henry Fielding novel. A critique of AI as a Pinter play. Pause. A critique of AI in the form of a dialogue with a chatbot. See? It already anticipates what you think about it, and every possible way you would think to say it, and far more besides.
This deflating feeling can seem like part of the point of the generative “AI” hype cycle, which exhorts us to be excited about collaborating with machines and getting them to flesh out and execute even our most half-baked and inarticulate prompts. If, with the popular image generators, we have to articulate our notions in the joyless, “optimized” language that the models respond to, ChatGPT lets us take a more playful and casual tone, one that can feel more interactive and collaborative. But at the same time, it still insists on the model as the source of invention and authority and the human interlocutor as a child dreamily asking questions without worrying about having a means for assessing the answers. Hence many react to these models by rejecting that infantilization and looking for ways to prove that the model is the unintelligent one, one capable of being abused by people who want to exploit its apparent competency to mislead the gullible.
The fantasy driving the development of large language models, or at least how they are marketed to the public, is of a machine that a human can tell to do anything in ordinary language and it automatically and intuitively complies, sometimes going beyond what is expected. But this inverts what automation actually accomplishes. In practice, people level down their expectations and demands to what machines can handle, learning to accept that as the most that one can expect from the world. Much as automated customer service makes it seem like a luxury to get human assistance, the ubiquity of automated content will gradually make the demand for thoughtful human-made content seem more extravagant.
If humans think by feeling our way through different combinations of words, ideas, and concepts, the large language model presents itself as the entire field of possibility, without edges or frontiers, already mapped out and waiting for us. We need to use our imagination only to chart a course through this field and ignore that it is designed to enclose us within it, that our interactions with it further reduce language and images to a set of statistical relationships that can be infinitely recombined without ever producing anything new or potentially destabilizing, foreclosing on the sense of an open-ended future.
Instead of our having to confront the unimaginable void of the not-yet-thought, generative models let us encounter and consume ideas passively. A chatbot offers the semblance of live reciprocal conversation with none of the risk of what the other person might think of you or expect. It reminds me of when I play chess against my phone because the thought of playing an actual person seems too stressful, and what I really want is to be cocooned in a few moments of distraction. There is enough of an illusion of “play” to disguise what I am really doing, which is prodding a machine to see how it has been programmed to respond. I always lose at the chess game, but I always win at having an uncontested emotional response about the outcome.
In seeming to understand the gist of prompts and confabulating accordingly, large language models reassure us that the formula for communication has already been worked out and coded; our own thinking is itself a mathematical function that can be continually re-executed with different sets of data without altering the underlying equations, and this function can be extracted from brains and embedded in circuitry. The chatbot will write limericks for you; it can seemingly count to five and seven to make haikus. It will produce poetry on demand, infinitely, as if to demonstrate that poetry is well and truly dead, or at least flarf.
In an essay for The Hedgehog Review, Richard Hughes Gibson argues that large-language-model technology “is a sophist, at least on Plato’s understanding—an ‘imitator of appearances,’ creating a ‘shadow-play of words’ and presenting only the illusion of sensible argument. In the matter of seconds, it can produce a case for and against giving dictators gold medals.” But the fact that AI generators don’t care about what they produce is integral to their appeal. It makes consumption the locus of creativity: because the content is produced by agents without intention, whatever meaning one perceives in it can be felt as authoritative.
One could try to re-objectify generated content by trying to understand it in terms of the statistical weights and measures the model develops, or the biases of the data sets used to train it. But such calculations are fully unimaginative, occurring at a scale that is unimaginable. They are performed without any leaps of intuition that would open them to interpretation. So imagination can only be added after the fact, and it serves to elucidate not how the content was made but how ingenious the consumer is in responding to it, picking it out from amid all the other generated material for special attention.
That helps explain why AI-generated content is readymade for social media posts. It allows people to show off that creative consumption, how clever they can be in prompting a model. It is as though you could have a band simply by listing your influences. You don’t even have to absorb those influences in practice; you can just name-check them. You just need to be familiar with the signifiers. But even that level of knowledge is perhaps unnecessary. AI can take a nothing prompt and suddenly make it seem worthwhile, giving anybody something fun and surprising to share. You just have to be willing to see the world as made up of pieces meant for AI to recombine, and that AI will Cyrano de Bergerac for you.
In posting AI-generated content, we signal that willingness, which seems also to indicate a willingness to accept the established parameters of culture, to accept the world as given, to embrace the status quo as the horizon of imagination. At the same time, it underscores the emptiness of generated material. As engrossing as the back and forth with a computer can be — as diverting as playing chess against a machine can be — the generated output is ultimately interesting only insofar as you can imagine entertaining some other person with it. AI generation almost requires social media to gain any traction (let alone the data required to make the processes work). Continual interaction with generators reinforces a certain technology-dependent approach to interacting with other people, obliquely, with an eye to the status scoreboard rather than intimacy or reciprocity. It may be that posts of generated material do well on social media because many of us are reassured by seeing one another reduced to that level.
The novelty of showing off one’s skills in prompting generative models seems to quickly fade. One can imagine artists using AI tools to make pieces that will repel or baffle most consumers and thereby be accepted as art, but consumerism itself — its modality of conspicuous status display — remains the most banal form of creativity. Generative models can induce users into abetting their mass production of text and image commodities as endless variations of the same, while allowing us to think that we are having it our way, that we are getting a unique sandwich tailored to suit our idiosyncratic selves and not fast food. Does it matter if the burgers are flipped by a robot?
Generative models produce images and fragments of text that are superficially novel and surprising but comprise a readily recognizable and predictable genre of intrinsically meaningless content. This material is not only perfectly suited to filling out templates with placeholder images and text; it is also well-suited to the master template of our time, the social media feed. It is calibrated to the amount of curiosity we bring to scrolling, to the desire to be distracted while procrastinating or waiting for something else to happen. Tech reporter Kashmir Hill tweeted that “About half of my feed now is text and art generated by AI. At what point do we just go 100% prompt?” The obvious answer — one that ChatGPT might even offer — is that feeds have always already been filled with algorithmically produced content, and generative models’ content just reflects a slightly different hybrid of human and machine than the one made up of humans and recommendation algorithms. The feed has always been a kind of chatbot responding to the prompts of our past interaction data. That is to say, feeds can be understood as AI-generated works, and vice versa: The output of models is always implicitly a kind of endless feed.
Generated content seems destined to fill and possibly break any distribution channel that profits from being regularly updated with whatever. How long would I want to talk to a chatbot once I accept that it will never shut up? Much of the visual clutter in our lives will likely become AI-assisted too and mimic what has always been there but at a density we have not before confronted. Generators can so readily sate the demand for novelty and distraction with routine reshufflings of the deck that I wonder if they threaten to abolish that demand altogether. Novelty is compelling as long as it serves as a low-stakes proxy for the presence of the other, a quick fix of mimetic desire that can disguise itself in the novel thing’s arbitrary particularities. AI generators make those particularities too obviously arbitrary, shifting our anxieties back to what they were standing in for, the attention and approval of others. AI could eventually fake that, but it will never make it.