There has been a push recently to christen unwanted generative content as “slop”: This New York Times piece by Benjamin Hoffman, for example, defines “slop” as “a broad term that has developed some traction in reference to shoddy or unwanted A.I. content in social media, art, books and, increasingly, in search results,” typified by the now infamous glue pizza. The term feels more or less synonymous with “spam,” but seems to suggest there is something worse about using “AI” to pollute communications, as if there were now something artisanal and dignified about human-crafted spam.
It’s not as though spam itself hasn’t partly been driven by algorithms and automation all along. And in a sense, spam gives birth to slop: The scourge of spam has long helped rationalize the implementation of “AI-driven” content moderation, which feeds into an arms-race dynamic driving the development of ever more resistant spam strains.
What distinguishes slop from spam is that tech companies themselves are pushing it onto users rather than trying to protect them from it. Slop, after all, is what is fed to pigs. This gives the invention of the term “slop” a kind of critical edge, potentially an aspect of the emerging idea that anything labeled as AI is inferior and demeaning and to be avoided, as Brian Merchant details here in a piece about the surge in marketing for “AI-free” products.
In a digital ecosystem overwhelmingly controlled by monopolistic tech companies such as Google and Meta, each of which is bent on deploying new AI products whether users want them or not, even these small declarations are ways to register a protest, signal discontent, and wave the flag for other AI skeptics to rally around.
Unlike “AI-free” — which implies a determined rejection of the technology, similar to the anti-DRM campaigns of the past — “slop” could work to delimit and demonize only a type of content rather than an overarching business practice, dissipating responsibility for it. Both the Times article and this Guardian piece quote Simon Willison, who wrote a blog post pushing slop as a term: “Having a name for this is really important, because it gives people a concise way to talk about the problem,” he told the Guardian.
Before the term ‘spam’ entered general use it wasn’t necessarily clear to everyone that unwanted marketing messages were a bad way to behave. I’m hoping ‘slop’ has the same impact – it can make it clear to people that generating and publishing unreviewed AI-generated content is bad behavior.
I’m more than a little skeptical that it “wasn’t necessarily clear” that forcing marketing messages onto others is bad behavior. Advertising has been understood as unwanted and intrusive from the moment of its invention; to accommodate it, media had to become “captivating” and entrapping to force people to consume marketing. Of course, it should be understood by anyone with any kind of moral compass that forcing unwanted anything on anybody, machine-generated or not, is coercive.
But “people” are not really the problem with slop. As Merchant noted, the companies developing the models and going to great lengths to embed them into products that people already use are the problem. Slop is perhaps better construed not so much as “unwanted marketing messages” but as unwanted tools occluding and interfering with the software we have already figured out how to use (email clients, word processors, anything on your phone, etc.), often in subversive, semi-defensive ways.
Tech companies would like everyone to use their products in the ways they expect, so that we become dependent on them and more predictable, more exploitable — more under their control. Part of mastering a piece of software is learning how to counter that control. The forced implementation of AI into applications is meant not to make them easier to use but to break our resistance; “AI” not only requires and rationalizes far more intensive surveillance of users, it takes users’ agency away on principle and tries to sell us that submission as convenience.
In general, automation works as deskilling, concentrating agency in the hands of the few to wield over the many. Generative AI is built on the premise of appropriating people’s past work, abstracting some governing principles from it, and using those abstractions as a template to stamp out subsequent work. As with previous forms of automation, it allows management to turn more human workers into machine minders whose creative input is not permitted, who are debarred from any leverage over the production process and reduced to another machinic input into it.
Fixating on slop as a form of content, as a kind of necessary evil, a shipwreck invented along with the ship, helps naturalize the proliferation of “AI” technology as inevitable, even though it doesn’t serve to improve the lives of most of the people exposed to it. It implies that only a subset of what AI wreaks is bad, and not enough to derail its eventual infestation of everyday life. “Society needs concise ways to talk about modern A.I. — both the positives and the negatives,” Willison tells the New York Times, but coming up with a specific label for one aspect of its harmfulness can function like a quarantine, as though we’ve isolated the risk and don’t need to interrogate where “modern AI” comes from or for whose benefit it is being deployed. There is no good generative content; it’s all predatory slop. The positives, if there are any, would have to be snatched away from the negatives. Email accounts are not all intrinsically spam makers, but every corporate generative model is nothing but a slop machine.
Another recent nomenclature piece in the New York Times, this one by Jessica Roy, examines the term “brainrot.” According to Roy, the term
refers primarily to low-value internet content and the effects caused by spending too much time consuming it. Example: “I’ve been watching so many TikToks, I have brainrot.”
Clearly the term owes something to the old-time colloquialism that “TV will rot your brain”; it makes more intuitive sense than something like “social media meth mouth.” The main symptom of brainrot, the article suggests, is using lots of internet-derived slang like the term “brainrot” itself, hence the article’s title: “If You Know What ‘Brainrot’ Means, You Might Already Have It.”
The pathologizing of the condition pivots on the idea that “being online” is diametrically opposed to real life, so that, according to pediatrician Michael Rich, “you have shifted your awareness over to the online space as opposed to IRL, and are filtering everything through the lens of what has been posted and what can be posted.” Brainrot is when you interact with other people in a way that is conditioned by the protocols, proclivities, fads, and limitations of social media and can’t (or won’t) switch out of it. It might be understood as a kind of etiquette failure that positions your interlocutors as your followers.
But brainrot doesn’t seem much different from borrowing catchphrases from TV shows and commercials: “Wazzzzup?” Online platforms don’t invent this phenomenon, but their mechanisms of virality might be held to exacerbate it. Users are incentivized to create and spread memes, which can be verbal tics as well as visual cliches. The way memes spread is also changed by platforms; it is largely detached from the kinds of geographical or subcultural limitations that may have once had more of a role in their distribution, becoming more a matter of Gesellschaft than Gemeinschaft, though TV-spread memes have always seemed like a monocultural imposition effacing local idiosyncrasy too. One could argue that algorithmically distributed memes on mega-scaled platforms are more redolent of anomie. Nothing roots the circulation of these specific turns of phrase to any particular set of embedded cultural values; they seem to derive directly from the platforms’ imperatives to hoard cultural practice and profit from it. (Brainrot is an advanced expression of capitalism’s melting everything solid into air, or of modernism’s evisceration of the grounds for sustaining traditions.)
Roy quotes medical experts who want to frame brainrot as a form of self-soothing, “a coping mechanism for people who may have other underlying disorders that may lead them to numb themselves with mindless scrolling or overlong gaming sessions.” Even so, brainrot seems like a pejorative invented mainly to critique individuals who seem to be trying too hard, and to replicate social hierarchies predicated on “fashion sense” or “cool.” It characterizes certain forms of mimesis drawn from media as excessive, a mark of having fallen slightly behind trend. It contrasts with other forms of conformity, fashionability, or imitation and helps establish them as pro-social, as appropriate forms of collective participation. Brainrot seems like a term to use to designate others as weak-minded fashion victims and to help disavow one’s own mimetic tendencies: I am an authentic individual with a sound brain; you are a brainrotted TikTok addict.
When I discussed the article with Nathan Jurgenson, he asked, “What’s the correct pace of imitation? What rate makes it seem desperate? What is the healthy and normal, and thus not shameful or ‘rotting,’ version of mass culture?” In other words, what prevents you from being influenced by media in the wrong way, particularly as one comes to feel oneself as living completely within it? Do we feel as though we pull ourselves out of the mediated world by figuring it as an alien invader that is rotting us from the inside? Jurgenson suggests that construing “a form of mediation as harming you from the inside out” is a kind of pharmacological prerequisite, “a core part of continuing to use it.” So the term itself is self-soothing in a sense, despite its derogatory implications. Like “slop,” it implies a kind of resigned acceptance.
(“Sludge content” is another recent coinage in the same vein.)