I’m not sure that there is much of a need for “slop” as a special term for AI-generated content. We already have “content.” Is there material we are content to call “content” that isn’t “slop” made to fill bottomless feeds? It’s not like the problem with AI content is just that it isn’t “good” and that it would be right to embrace it if it began to make content that people really like.
Earlier this week, Ryan Broderick tried his hand at defining slop not as AI content but any kind of content that sucks. He gives “slop” three defining characteristics: (1) people don’t want it, (2) it’s forced on people despite that, and (3) it seems undesirable by design. (Like capitalism.) As Broderick puts it, “It not only feels worthless and ubiquitous, it also feels optimized to be so.”
That didn’t make sense to me at first. What is the point of optimizing content to be flagrantly annoying? Broderick attributes it to “algorithmic feedback, a desperation for mass appeal, and a void of content that needs to be filled,” but I still don’t quite understand how that would work. The “mass appeal” would seem to indicate that the content was not “worthless” to large audiences and that algorithmic sorting was effectively steering content makers toward successful strategies, not forcing content on captive audiences.
Perhaps the point is that a lowest-common-denominator situation takes hold on content platforms, so that they end up efficiently providing the least amount of joy to the largest number of people while still ensuring that those people keep supplying their attention. “We’re the little piggies and it’s the gruel in the trough,” Broderick writes, but it’s not clear whether the gruel turns us into pigs, or whether we are always already pigs and tech companies are just finding cheaper ways of making our gruel.
Broderick posits a “fear of the content void” that is metastasizing through the body politic, but that “void of content” seems like an inside-out way of describing efforts to manufacture demand for media, to produce audience metrics for advertisers or investors. “Fear” and “desperation” and “slop” seem like loaded ways to describe how media companies go about their ordinary business of capturing attention, shaping it into measurable forms.
But why is some popular content “slop” and some content not? Without criteria for why certain kinds of attention paid on platforms are better than others — why some turn us into pigs, and others don’t — it reduces to a matter of personal taste: my commodified media products are health food; yours are pig feed.
It is like when the popularity of any given pop cultural phenomenon is attributed to a conspiracy — as with the theories about Spotify pushing Sabrina Carpenter and Chappell Roan on users described in this Vox piece. What is supposed to be the right number of times those artists are played? There is no fair and correct distribution for media products, all things being equal and undistorted by socioeconomic forces; those forces constitute what pop culture is. It’s not a thwarted meritocracy in which the “right” songs are getting marginalized through the machinations of evildoers who hate aesthetic quality. It’s an industry that conjures and concretizes blocks of attention into profitable forms. Which of these forms of attention should be considered illegitimate, and why? Where does pop culture stop and coercion begin? Who gets told they are pigs in slop, and who gets to think they are engaged in something more dignified?
The conspiracy theories about culture rely on a fantasy that there is a natural and organic culture, a correct distribution of attention that reflects how things would be if no one were motivated by the wrong reasons. But it’s not clear what the right reasons could be, or how we could know them, especially if popularity itself is viewed with suspicion or seen as disqualifying. A related idea is that pop culture is a kind of marketplace of ideas that should be governed by norms of fair competition, but then attention would become an incoherent kind of currency and culture would be mischaracterized as a matter of calculating individuals rather than collective practice. Part of why Spotify feels like a conspiracy is that users expect it to be targeting music at them as individuals and allowing them to escape from the culture of conformity. It is supposed to manipulate us as atomized guinea pigs, not as members of an easily led herd.
But there is a different way to make sense of why AI content would be forced on people. Sociologist Dave Beer points out in this post that “generative AI facilitates massive overproduction,” but this could be understood as creating a “void of content” at the same time. The AI companies need to make overproduction — creating more content than would ever be demanded under any circumstances, which is the only thing generative models are good for — into something that is profitable.
One way of doing that is to use generative models to create not “slop” but digital pollution that people would be forced to pay to get rid of. Generated content can be used to create an artificial scarcity of useful content; it can be used to make the existing ways information is organized, some of which have served for centuries, suddenly obsolete. Generated content can be patterned on materials of existing value, making the originals hard to find and hard to distinguish from the imitations.
Beer’s metaphor is that generative models create haystacks that hide the needles. “The current haystacks of content are managed, with various levels of effectiveness, by algorithms that locate the needles within them.” Generative models can make it so that we must pay more for access to those algorithms, as though buying the antidote to a poison fog that has already been released.
“Another problem,” Beer notes, “is that the hay will also be made to look more like needles too. Generative AI will bring the problem of volume and combine it with the problem of similarity. There will be content replicating content, replicating content, and so on.” If a company doesn’t profit from the existing “needles,” it can flood the market with fake needles and then sell its “real needle” detector. Google’s forcing of AI into search results would seem to have this quality, though it hasn’t revealed the detector yet — perhaps some “preferred user” tier where you pay to keep the AI out.
The point is that generative models should be thought of not as adding more information, more content, more organization, more knowledge, but as subtracting all of those things. The material they create is like antimatter — voluminous entropy.
Maybe there is a clue from the world of spam. I used to smirk at the poor spelling and obvious giveaways in spam and phishing emails, but someone pointed out that those mistakes are entirely deliberate. Spammers are not interested in people who notice mistakes; they are interested in the people who don’t. What if AI “slop” is similarly intentional, a way to select for a market segment that might pay for services that look impressive without actually needing to deliver?
Content.
Satisfied, even.