After Buzzfeed announced last week that it would use ChatGPT-derived tools to “enhance” and “personalize” some of its content, its stock price immediately rose from under $1 a share to nearly $4 — it has since dropped back to $2. That trajectory perhaps reflects something of the hype cycle around generative AI, which seems to provoke an overreaction that is then met with only a partial correction; there remains a stubborn sense that LLMs really will add some tangible value, and who knows? Maybe that value really will take the form of “custom Mad Libs, if your book of Mad Libs were a bit more sentient,” as Madison Malone Kircher put it in this New York Times article.
In the memo making the announcement, Buzzfeed CEO Jonah Peretti (who, lest we forget, wrote this essay about “the issue of how identities can be fostered that resist the logic of commodification,” citing, among many other theorists, Fredric Jameson, Jean Laplanche, and Gilles Deleuze) claims that “the future of digital media will be defined by two major trends: creators and AI.” You would think that these “trends” would be inversely correlated, with generative AI undermining the bargaining power of “creators,” if not replacing them altogether. But Peretti seems to take the view that AI models will produce so much mediocre material that the work of human creators will become more distinctive and valuable, its necessity sharpened for media consumers and distributors alike. People will still want to consume celebrity, will still want connection to things that other consumers demonstrably care about.
Most of Peretti’s memo proceeds as though creator content is on an entirely different track from generated content. It touts all the wonderful brand partnerships and initiatives that are part of Buzzfeed’s “creator-powered business,” and then promises more news later about the separate developments in AI that will aid in “enhancing the quiz experience, informing our brainstorming, and personalizing our content for our audience.” In a paragraph that itself reads like ChatGPT bloviation, Peretti seems to suggest that generative AI will enable the company to generate content more in sync with the demands of algorithmic recommendation:
The creative process will increasingly become AI-assisted and technology-enabled. If the past 15 years of the internet have been defined by algorithmic feeds that curate and recommend content, the next 15 years will be defined by AI and data helping create, personalize, and animate the content itself. Our industry will expand beyond AI-powered curation (feeds), to AI-powered creation (content). AI opens up a new era of creativity, where creative humans like us play a key role providing the ideas, cultural currency, inspired prompts, IP, and formats that come to life using the newest technologies.
This is a good reminder that just because an era of creativity can be described as “new,” we should not take that to mean an era of more or better creativity. This particular new era will be one in which “creative humans” appear to funnel their creativity into “inspired prompts” and “IP” that can be automatically iterated. This will allow the “industry” (presumably the culture industry) to move beyond the algorithmic feed, which is currently faced with the problem of running out of content that matches what the algorithms have calculated a particular person to want. With generative AI, feeds should be able to produce that content directly.
As users are inundated with that bespoke content, will they be programmed to demand more and more of it? Unlike creator-driven material, AI content seems like flotsam intended to soak up the excess attention of users, when they are not motivated enough to seek out something for themselves. It would, in theory, thereby be optimized to induce surrender to algorithms, or to make consumers feel as though they have an excess of attention to squander, which amounts to the same thing. Generative AI would be so capacious and anticipatory that it would abolish curiosity, an ideal that has always been implicit in the concept of a personalized feed.
From this perspective, feeds don’t merely reflect but reproduce compulsion, and generated content will be used precisely to intensify this process. In that scenario, branded content from creators can be phased out in favor of content that allows users to experience themselves as a brand of a sort — the specific set of proclivities that conceptually holds together whatever content the machines throw at them.
But such a theory seems a bit one-dimensional, as though consumers never tire of being pandered to in the most obvious of ways — ways that become even more explicit by design as they proceed. It also draws on doomsday theories of media undermining the mind’s resilience that can be traced back to the printing press, or perhaps all the way back to Plato’s argument against writing in the Phaedrus, as Joe Stadolnik explores in this “history of distraction.” Such theories propose that as media trick us into relying on them for things our minds used to have to do, we surrender certain capacities (memory, focus, agency, comprehension, and so on) without gaining any compensatory ability. “There’s a yearning here, after some lost yesterday, when the mind worked how it was meant to,” Stadolnik writes. “When was that, exactly? Seneca, Petrarch, and Zhu would all like to know.”
The yearning is very evident in this recent Atlantic rehash of Neil Postman’s Amusing Ourselves to Death as a critique of the metaverse (remember that?), in which Megan Garber warns readers that there is too much “immersive” entertainment available to people, and that this has made us all increasingly incapable of coming to terms with anything that isn’t presented as entertainment, including our own lives:
Dwell in this environment long enough, and it becomes difficult to process the facts of the world through anything except entertainment. We’ve become so accustomed to its heightened atmosphere that the plain old real version of things starts to seem dull by comparison. A weather app recently sent me a push notification offering to tell me about “interesting storms.” I didn’t know I needed my storms to be interesting …
Such examples may seem trivial, harmless—brands being brands. But each invitation to be entertained reinforces an impulse: to seek diversion whenever possible, to avoid tedium at all costs, to privilege the dramatized version of events over the actual one. To live in the metaverse is to expect that life should play out as it does on our screens.
Of course, I don’t agree with the implication here that life “on our screens” is fake, and that there is a “plain old real version of things” that excludes all forms of mediation. But that is nothing new; feel free to consult reallifemag.com (if not Critique of Pure Reason) for any number of essays that debunk that idea.
Also suspect is the proposition that “each invitation to be entertained reinforces an impulse to seek diversion whenever possible.” This presumes that there is some exalted form of attention that is “real,” and that we apparently know it by feeling bored by whatever we are attending to. At the same time, feeling entertained by something should be taken as a condition of self-delusion, proof of our own stupidity or narcissism. This argument depends on a distinction between engagement (being interested in something in the right way) and “entertainment” (an attentional disorder that progressively becomes an addictive compulsion). How does one distinguish between these, especially when we must mistrust our own inclinations and instincts? The implication seems to be that we should simply trust the experts and consume only what they approve. Then we can be sure our minds are not rotting. Then we will be free, by doing what they say.
I’m intermittently attracted to this sort of argument, particularly in its “fun is a form of tyranny” guise. I tend to be skeptical when “people like it” is offered as incontestable proof that something is benign. But I find I am less sympathetic when this argument takes the behavioristic form of “you are helpless to control yourselves or your lizard brain.” That seems to lend support to what it ostensibly critiques. Rather than audiences or consumers being inert material that can be processed into whatever form, there might be a process of reciprocal interaction. Using AI to produce “personalized” content will drive a rapid change in what being a “person” feels like, in terms of how the AI approximations fall short.
What people find interesting or boring is altered by the very process of catering to it. Optimizing for “entertainment” doesn’t solve for it once and for all or negate whatever its opposite is supposed to be. Engagement and “diversion” necessarily co-exist as the conditions for each other. The idea that media create an atmosphere that consists entirely of diversions is conceptually incoherent; in that case, what would they be diverting us from?
Curiosity is not like an appetite that can be sated with “content”; it is a process that seems to renew itself by seeking obstacles. When commercial forms of entertainment fail, it’s often because they have removed these obstacles; when they succeed, it’s because they have found a novel or satisfactory way of introducing them. When generative models produce automatic content, they offer bare variety, the sheer fact of novelty as a compensation for the absence of obstacles, which follows from the bad infinity in how they proceed. Gimmicks like “Watch Me Forever” — an endless AI-generated episode of faux Seinfeld — illustrate this well.
Of course, one can bring curiosity to AI-generated output — even to “Watch Me Forever.” You can enjoy watching the program bump into its limits and enjoy its glitches, or read profundity into its randomness. You can posit an obstacle in trying to understand how the human intention behind whatever it produced came to be mediated in that particular way; you can wonder what someone was trying to accomplish by prompting models with particular specifics, and what sort of experimentation was necessary to produce results they wanted to share. Rather than see it as good or bad or true or false, we can, to borrow a phrase from Adorno, “appreciate its truth content as one that contains its own untruth.”
Models can make an infinite series of stories that are in a sense “based on true events” (actual data processed statistically) but are also specific to the moment of their generation, a kind of reality TV from an alternate universe. Only humans can identify where the limits of a model’s capability are; that may be what “being human” will increasingly feel like: a life spent proudly identifying the extra teeth and fingers in generated images.
But many commentators seem to feel threatened by the possibility that AI will induce passivity and apathy in us against our will, that it will train us to be incurious. AI models would seem to tempt us with their immediacy, which would then deplete our capacity to be satisfied with even the interesting content they make, so that we could experience nothing but contentless diversion, a kind of analogue of the slot player’s “machine zone.” I tend to read that fear as an inverted wish: We secretly hope that AI will free us from our curiosity and liberate us into a pure passivity that we can never actually attain.
In Garber’s argument, the dangers of entertainment are being compounded by the temptation to become entertainment — to understand reality as a story and oneself as a character. Again, it’s not clear what the alternative would be to making sense of our lives through narrative and media tropes, or what form of collective sense-making couldn’t be characterized in those terms. Must we instead insist on a direct unmediated experience of the real and exist as a chaos of raw disorganized sensation at all times? In that case, I guess we could read Peretti’s Deleuze essay and learn how to deterritorialize ourselves.
It’s not hard to assimilate generative AI to such concern about the fragility of reality. Not only can AI produce deepfakes and do all sorts of lying where it ought to be truthing, but it can insert us into fictions and diversions much as Peretti promises in his Buzzfeed memo. Each time we are “immersed” in entertainment this way, we would supposedly become that much less interested in accounts of reality that aren’t warped to make us the “main character.” Our “impulse” to escapist diversion will be “reinforced.” Garber cites as an example the automated tools that turn photos on your phone into would-be highlight reels: “What better way to encourage customers to be loyal than to tell them their life should be a movie?”
I’m dubious that anyone feels more “loyal” to any corporation after having these corny videos foisted on them, even if they are momentarily amused by them. And certainly no one is mistaking them for a compelling documentary about their lives. Who are they supposed to be fooling? Not the expected readers of Garber’s essay, who clearly know better — no diversions for them! Instead there is some vulnerable “customer” out there who is going to be tricked into thinking that they are important.
So while some of us will feel important because we read articles that mock other people’s need to feel important, generative AI will go on helping people imagine that someone wants to pay attention to them. As Sophie Haigney points out about Lensa — an app that generates images of ourselves in stock costumes — AI is capable of “feeding a wholly private fascination with ourselves.” Buzzfeed’s personalized quizzes and such will likely do the same, applying familiar formulas to us and giving us an approximation of what it feels like to be less insignificant, similar to how a care robot or chatbot might provide a passing sense that you were no longer lonely. (Cue the fears of a mass Capgras delusion.)
Much as algorithmic recommendation offers us an idea of “who we are” that we can consume (and contrast with our own sense of ourselves), generative AI will invite us to play the same sorts of games. This is not so much a matter of “inhabiting entertainment” as being presented with a version of oneself as a commodity, which feels good since, after all, it is the form of value in our society. AI can turn us into good content for ourselves, which allows us to believe we might be good content in general, a content that can and will be recirculated, algorithmically recommended, sponsored, and so on. It is a source of validation, of belonging, of perceiving oneself to be normal and accepted as well as exceptional at the same time. It provides the thrill of being seen at a seemingly safe remove from social consequences.
When algorithms commodify us, it may allow us to momentarily think that we are otherwise not commodified, that the process of objectification occurs not continually in society but only as a gimmick in some fake screen world. That, unlike “main character syndrome,” does seem a dangerous delusion. Rather than being upset at “entertainment” or “AI” destroying reality, one might more coherently direct their anguish at the commodity form and the sort of actuality it already structures for us.
I can see a future where, as with AI companions, you are given an AI audience and you, knowing it’s an AI audience, still feel elated. When I was younger, I was an only child who got lost in the one-player worlds of the Nintendo Entertainment System. Don’t be surprised if more people choose to withdraw into similar, quasi-parasocial realms.
I say quasi-parasocial because the AI will listen more closely to you than your closest friend possibly could.