Concern about algorithmic culture often returns to the fear of a doom loop, the idea that nothing new will be possible because we will be fully embedded in a predictive simulation that has become the horizon of our reality. Feedback loops will preclude the experience of originality; everything will be a rehash of what has already existed; filter bubbles will confirm our biases; algorithmic feeds will reify our tastes in catering to them; generative models will reproduce blandly average versions of what we’ve already decided to look for.
Matthew Kirschenbaum’s warning in this recent Atlantic piece of an imminent “textpocalypse” — “where machine-written language becomes the norm and human-written prose the exception” — fits this theme as well. In the scenario he posits, language models prompt other language models to produce more and more text on which the models retrain themselves, contaminating and overwhelming all the available spaces for text with statistically plausible but intrinsically meaningless verbiage. This mass spamification event will render readers unable to differentiate between human writing and its simulation, between truth and generative AI hallucination.
In some respects, this mirrors earlier fears that the full range of human possibility would be abolished by conformist mass culture, hedonistic consumerism, or postmodern vertigo: “Real” experiences appear to be on the verge of being rationalized out of existence, supplanted by mechanically fabricated substitutes that serve the purposes of social control and sustained exploitation.
Often the doom loops work much the same way as the routinized invocation of “late capitalism,” diagnosed by Rachel Connolly in this 2020 Baffler essay as producing “a note of knowing resignation” that masks “the very uneven way” that people are exploited by it. Such critiques impose what Mark Fisher called “capitalist realism,” a hopeless sense that there is no longer any way to imagine an alternative to an ever-adaptive capitalism. Fisher’s book is a doom-loopy argument about doom loops themselves.
That is not to say that capitalism, the ultimate doom loop, is not the issue. Ted Chiang, who recently offered the clarifying “generative AI is a blurry JPEG of the internet” thesis, pointed out in 2021 that “most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.”
If capitalism dictates what technology is viable for development and imposes constraints on how it is implemented, then whenever a technology begins to affect us, we absolutely must consider immediately how to resist or subvert it and interpret its advertised benefits as efforts to distract us from that resistance. But the “fears” that technology as a capitalist weapon can prompt may take many forms; technology can be used against “us” in different ways, just as who “us” is can be configured along different lines.
In Cannibal Capitalism, Nancy Fraser identifies as one of capitalism’s core features “the ‘self’-expansionary process through which [capital] constitutes itself as the subject of history, displacing the human beings who have made it and turning them into servants.” One can interpret generative AI as one current mode of this displacement and an ideological disguise for it at the same time — “AI” is a misleading name for capital itself and the “invisible hand” that purportedly guides it. It is presented as a productive subject, capable of expanding itself (much as spam will somehow take it upon itself to flood the world) as a supposedly irresistible consequence of its own nature. AI’s apparent reference to a technology is really more of a reference to scale; generative models are based not on new techniques for “displacing human beings” so much as on massively capitalized firms now having access to enough data and computing power to execute them. This means that AI is better understood as a stage in the concentration of capital (not a technological breakthrough) and an expression of a certain level of leverage over the living labor it necessarily exploits and the populations it expropriates (to pick up another of Fraser’s analytical points).
Arvind Narayanan, who is working on a book called AI Snake Oil with Sayash Kapoor, pointed to the quote from Chiang cited above in a recent tweet thread about how generative AI fundamentally depends on expropriation (stealing and repurposing existing works) and exploitation (the free labor inputs of those beta-testing the models, the moderation labor imposed on those being harmed, and the underpaid workers tasked with labeling data, as detailed, for example, in this recent essay by Adrienne Williams, Milagros Miceli, and Timnit Gebru). Cory Doctorow, who is also fond of con-man metaphors, here describes generative AI as a “pump-and-dump” scheme to take the place of crypto and warns against being taken in by “criti-hype” that overstates its capabilities. “The important material question raised by these systems is economic, not creative,” he argues. In other words, the sanctity of human creativity in the abstract is not the stakes; rather it’s the livelihood of various human creators.
The “textpocalypse” framing seems to emphasize the technological threat to words and meanings, as if these hadn’t already been instrumentalized and devalued by the existing modes of textual circulation. As Ian Bogost points out (also at the Atlantic), the “massive deluge of text on websites and apps and social-media services over the past quarter century” suggests that “the textpocalypse has already happened.” If anything, making “human-written prose the exception” would seem to highlight its value rather than diminish it further.
What makes text into “spam” isn’t its unoriginality or the mechanical procedures that generate it; it’s the malicious intent behind it. It is unwanted and intrusive, exploiting vulnerabilities in the means of circulation in an effort to deceive or manipulate its recipients. Similarly, the problem with generative AI text is not that it is nonfactual or of poor quality; the problem lies in the social relations it will be used to perpetuate. The demand for chatbots, such as it is, would be a reflection of those relations, much as the demand for mass culture and mass-produced goods always has been. It has never been the point to denounce that stuff as qualitatively “bad,” as if everyone merely needed to improve their tastes to halt capitalism. Rather, that demand indicates something about social conditions and how they are being structured through relations of production and the means of circulation that support them.
I am tempted here to make an accelerationist argument and claim that generated text could potentially expose those circulatory vulnerabilities and require that they be addressed. It will heighten the contradictions in our current media environment, reveal the incompatibilities between for-profit media companies and the ideals of free speech and open dialogue and so forth. That seems more plausible to me than the possibility that generative text will obliterate the difference between truth and lies and will make all intentionality inscrutable.
I’m certainly not as deep and smart about this as you, but I wrote about the real consequences for culture as AI keeps us stuck in the “doom loop.”
It will keep us continually stuck in the past, unable to move on. Anything it creates, literally anything, is by design a revival of past cultural forms.
Always appreciate your deep thinking on this revolution of culture.
https://americanjitters.substack.com/p/lost-futures-of-art-and-being-courtesy
I'm hoping for the same thing, but oddly enough, so is Google Search. This seems apparent in their shift from banning AI content outright as spam to refocusing on their goal of demoting spam and spammy writing. AI writing may reach the "adequate" level of mediocre human writers driven by SEO analytics. Whoever (or whatever) writes it, this "content" may be useful but not as helpful as writing with uniquely human qualities: experience, expertise, authoritativeness, and trustworthiness. Supposedly that is what Google wants.
However clearly this policy might be stated and restated, search remains dominated by those who buy and interrupt their way to the top. It's a handful of media companies, like Hearst, that dominate "organic" search results. Maybe those companies are better at understanding (and seeking out in writers) what you referred to on Twitter as "the nuances of conveying intention," but I think I would've noticed it if they did. That suggests to me there is an intractable human insistence on using media for dehumanized and manipulative communication that really may just come down to a matter of what you seem to be dismissing as "taste."
AI writing comes out best when a knowledgeable person creates a very good prompt that establishes context, specific details, and multiple directions a decent human writer would have to understand to write something on the same subject. So even if AI ever gets past the problem of "hallucinating" in long-form writing, it still needs to operate as a tool of a human author who supplies the communicative intention. If that intention is saddled with the need for every bit of content to be directly monetized, that's where the real problem lies, I think. It's the oldest problem in publishing, but it's only intensified with time. It's not just "capitalism" that drives that overriding need to commodify everything and extract profits — it's capitalism mediated by people whose "taste" allows them to do that kind of work or press others to do it, with or without AI tools, without any resistance. Do the minimum and get out — that's all you're paid to do.
Contrary to what you say about "taste," it's hard for me to separate that from what one thinks about the boundaries of appropriate communication and relationships, including those between a brand and its audience or customers mediated by writers and other "creatives." It's taste but also wisdom, certain values, an ethic, and a business pragmatism that sees a need for creative freedom and personality (in both marketing and public-interest media) while also curbing micromanagement. The proliferation of crap content (in any era) may have more to do with editors and collaborative writing being replaced not with committees or machinery but with bean counters of some kind, search analytics, etc. And that's in poor taste. It's bad for brands that aren't already identified with crap. Of course it's death for anything in the public interest or not alignable with a corporate profit motive. AI might help scale up the crap production to the point we're forced into the kind of dilemma you've described, but AI is incidental — it's just the latest tool people will use to dehumanize each other.