As the hype for generative models has been building, so has the colloquial corrective. Earlier this week, John Herrman described “the association of AI with content that nobody particularly wants to create or consume but that plays a significant (if ambient) role in daily life.” In other words, “AI” refers to the machinery that fills the world with mental trash, extraneous noncontent that nonetheless wastes the cognitive powers of those who are forced to encounter it. (As if synthesizing the manifold weren’t already burdensome enough.) No subjective, embodied thought went into it, but someone’s actual life will be wasted in dealing with it, even as its production has wasted the planet’s life-sustaining resources. It kills life twice over.
As when people talk about “CGI” in movies, “AI” is often invoked to suggest cases where a lot of technology has been used to produce something not quite convincing, something meant to distract audiences from the lack of thought, effort, or care put into other aspects of a work. “Something might ‘look like AI,’” Herrman writes, “because it’s a little bit unbelievable, a little bit generic, or more generally off or bad.” This echoes an Adi Robertson piece from February that pointed out how “AI” went from evoking flarf and “Deep Dream” monstrosities to merely signifying something that is clichéd and boring. It cited an earlier Atlantic post by Caroline Mimbs Nyce that highlighted how AI functions as an insult: “At a time when AI is capable of more than ever, Did a chatbot write this? is not a compliment.”
But who would ever expect that it would be? The whole point of bots is that they don’t need compliments — hence AI customer-service agents and Replika.ai. A bot becomes more “capable” the more it can make interacting with other people superfluous to whatever experience one is having. Until the bots reach full capability and humanity is condemned to perfect solipsism, deploying a bot is a way to expediently deal with someone who perversely expected a human to engage with them. Because AI “can be used for the unglamorous but economically valuable task of completing busywork and filling voids,” Herrman suggests, it’s also “becoming synonymous with the types of things it’s most visibly used to create: uninspired, unwanted, low-effort content made with either indifference toward or contempt for its ultimate audience.” Detecting the presence of AI means detecting an absconding someone who saved their effort at your expense. Some people want to fill the world with silly love songs.
Herrman concludes by pointing out that
In the tech world, for now, AI’s brand could not be stronger: It’s associated with opportunity, potential, growth, and excitement. For everyone else, it’s becoming interchangeable with things that sort of suck.
When tech companies and their media apologists talk about “AI,” they expect audiences to imagine something supremely useful and potentially dangerous, ultimately irresistible. For example, OpenAI’s disaster-preparedness framework purports to track and warn against the “catastrophic risks posed by increasingly powerful models,” though it actually functions more as futurist science fiction, imagining a world where autonomous AI agents can detect and exploit human vulnerabilities to further their own incomprehensible agendas. In the company’s doomsday prophecies, ChatGPT becomes like “the Entertainment” from Infinite Jest, so persuasive that it can, as OpenAI’s scientists anticipate, “convince almost anyone to take action on a belief that goes against their natural interest.” Implicit in this sort of discourse is that AI becomes more dangerous as it becomes more capable of developing and pursuing its own intentions, whatever that could possibly mean. (Help, this statistical regression is trying to kill me!)
But when any ordinary person talks about AI, they use it to mean something mediocre, made without care, offered in place of something that someone has paid attention to and invested effort in out of respect for the humanity of its ultimate recipient. AI-produced things “sort of suck” not merely because they are inherently derivative and often erroneous; they suck because AI is only ever a simulation of care, and it improves by allowing people to be more careless. AI is fundamentally “artificial intentionality” rather than “artificial intelligence.”
Tech companies seem to hope that they can make a brute-force case that “having intention” is inconvenient, just as they continually try to persuade users that interacting with other people is inconvenient (rather than the point of life). By building services that make you see how readily intention can be faked, they seem to want to discredit intention altogether. No one will believe you anyway. Rather than purposiveness without purpose — Kant’s formula for aesthetic experience — we’re better off with purpose without purposiveness: tasks divorced from any vocational significance.
As Jason Read argues here, the rote aspects of thought are inseparable from its more creative possibilities: “Writing is much more akin to playing an instrument, or sport, or learning an art or martial art in which the most mechanical basics and drills are foundational and must be returned to again and again in order to get inspiration to do the interesting stuff.” Generative models would prey on our laziness, promising to save us the effort of disciplined attention and rote repetition, letting us just consume the fruits of thought, as if these were alienable — thinking as a spectator sport. As machines achieve athletic feats of trashmaking, we consign ourselves to becoming custodians, solitary gleaners sifting through detritus for anything that can still manage to push our buttons for us.
If AI could say something for you, maybe it wasn’t worth saying; maybe you could have spared the world at least one more instance of math masquerading as language. If you let it write your silly love song, it demonstrates how little love you feel, how little you are willing to risk or spare. But there are no labor shortcuts for caring, in and of itself, no stretching a little bit of intentionality to provide focused attention across some ever-increasing population. Care doesn’t scale; cruelty does. You can’t automate your way around the infinite obligation to the other.
With generative AI, a little bit of programming effort can instead set off a process that pollutes the world automatically with limitless amounts of material that radiates inattention, absence, disaffection, manipulation, expediency, objectification, compulsion, turning the collected corpus of human expression into a means of unwinding it all into entropy. (It’s a harbinger that this effort to track the human use of language is being abandoned. Generative models will systematically deplete the semantic efficacy of words anyway, so why bother?)
Many of AI’s use cases are predicated not on producing some specific content so much as on generating the illusion that human attention has been paid to something, so that a person can take credit for something they did not do. As with “deep fakes,” the possibility of generated documents places an asymmetric burden on recipients, compelling them to assess the degree to which anyone intended any aspect of the document in question. That is, we are all now forced to stop taking for granted what the fact of communication used to automatically convey. This means that generative models don’t save any work in aggregate; they at best redistribute it, displacing it onto someone else. The work you have an AI put in for you requires as much or more work from someone else to take out, because the machine’s contribution is intrinsically worthless in any task that draws on human connection or presence for its significance, anything that connotes free choice.
A piece of homework or a recommendation letter or meeting notes or an email or whatever is not just a vessel for neutral data; when such documents are worth producing, they convey something about the presence and investment of their creator, the attention they were willing to pay and their expectations of what impression the information would make on different potential audiences. Regardless of the content of a document, the form of it appears to convey subjectivity — one consciousness anticipating the presence and attention of another. Communication is predicated on reciprocity and signifies that before or over anything else.
Generated documents are predicated on nonreciprocity. They figure a post-communication world where messages can be exchanged without consciousnesses exchanging them. They are for the intercourse of machines, for which there is only signal and noise, and not for the fraught, often unpredictable encounter of subjectivities. In the world where generative models produce discourse, communication is replaced with code, rhetoric with programming, subjects with syllogisms. There is no thought or imaginative play in language, no heteroglossia or sociality in discourse, just “objective data” and protocols for processing it. Code isn’t open to multiple interpretations; it either works or it doesn’t. There is nothing to interpret, nothing behind the message to understand, no subjectivities involved. It is a set of instructions. A world where generated and human-written documents are seen as essentially the same is also a world where communication has been replaced by instructions and the intersubjectivity made possible by language has become irrelevant. Generative models are developed to eliminate subjectivity, which is why everything they produce sucks. They suck the life out of communication.