Shrimp Jesus
Who among us will cast the first stone at shrimp Jesus? I hesitate to talk about him because I believe that AI-generated content is categorically no better or worse than other clickbait, and the best way to reckon with clickbait is to deny it what it seeks: attention. Writing to marvel at or deride AI clickbait seems to invite more of it, which in turn will entice us to write more critiques about it, which will only further feed the downward spiral.
The bot that invented shrimp Jesus has no doubt procedurally generated thousands of other equally zany would-be memes, but it takes scholarly attention, like this from Stanford University researchers Renee DiResta and Josh A. Goldstein, and media attention like that provided here by Jason Koebler of 404 Media, to make shrimp Jesus into something culturally relevant. DiResta and Goldstein write:
The magnificent surrealism of Shrimp Jesus—or, relatedly, Crab Jesus, Watermelon Jesus, Fanta Jesus, and Spaghetti Jesus—is captivating. What is that? Why does that exist? You perhaps feel motivated to share it with your friends, so that they can share in your WTF moment. (We encourage you to share this post, of course.)
And I encourage you to share this post too. Anyone who wants to circulate content on social media has a touch of shrimp Jesus and the purity of his cynicism in their heart.
The Stanford researchers want to use shrimp Jesus to examine, in the words of their post's title, “How Spammers, Scammers and Creators Leverage AI-Generated Images on Facebook for Audience Growth.” Of course, spammers and scammers would basically leverage anything for growth on Facebook, so the stakes of this analysis are in the composition of Facebook’s recommendation algorithms: Shouldn’t Facebook shadow-ban AI images (since they are a kind of “inauthentic behavior”), especially given the company’s recent announcement that it would seek to label the generated images it hosts as “imagined by AI”?
Facebook claims that it is “working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads.”
But Facebook is apparently already able to detect AI-generated images well enough to boost them in people’s feeds when they show any interest in any other AI-generated material, as Koebler and the Stanford researchers point out. “We don't know why this is happening exactly,” Koebler writes, “but something is happening where, when you interact with one AI-generated image, you will be recommended other ones regardless of what type of content is being shown.”
That is not surprising, since that is how algorithmic recommendation is designed to work. The algorithms generate the spammers who make the shrimp Jesus images, and the spammers use AI because they are incentivized to make the most content with the least effort. They can use AI to churn out images arbitrarily and then optimize for the ones that gain traction, much as Facebook itself does with everything on its platform.
But why should it be more concerning that Facebook treats “AI-generated” as a formal category to guide algorithmic recommendations, given the innumerable other undisclosed, nonintuitive correlations it identifies to classify and condition its users? The algorithms predict what users are supposed to like, and spammers/“creators” find ways of providing fodder for fulfilling the predictions. When I use Facebook, Facebook effectively makes a fantastical and bizarre AI-generated image of me that I can’t see directly but is refracted in everything it chooses to feed to me. In other words, I don’t just look at shrimp Jesus; I am shrimp Jesus.
The AI-generated identities that platforms make for us seem more problematic than anything that might appear in a given AI-generated image. Weird images like shrimp Jesus seem to reflect the underlying weirdness of submitting to algorithmic control. It doesn’t seem useful to act as though there is some form of “authentic” image that is appropriate for algorithmic circulation or virality, or that algorithmic recommendation is justified as long as the content is “real.”
In this thread Roland Meyer speculates that the religious flavor of these images suggests that “we could be witnessing the birth of a cult that is not only largely driven by generative AI and recommendation algorithms, but which also manifests the cult-like structure of social media itself, with its ritualized interactions, its dynamics of followership and its aesthetics of persuasion.”
It may also be that scammers and spammers target religious people as being particularly gullible and susceptible to superstitions, making them more likely to “amen” some religion-flavored content just in case, so that they don’t jinx themselves. (I believe that if I check hockey scores on my phone, my team will have lost.) Maybe shrimp Jesus presents itself as a small miracle to people who want to believe in miracles without having to consider the implications of such a belief. Maybe it proves to them that the Lord works in mysterious ways to get his Word out. There are as many ways to authenticate content as there are to save souls.
“Vicarious valuation”
I don’t know when in childhood my aversion to laugh tracks fully coalesced, but I figure that was also when my narcissism became pathological: If I laughed along with the laugh track, didn’t that mean I no longer knew for myself what I found funny? Didn’t it threaten my tenuous hold on my own identity? Isn’t this why I am scared of liking shrimp Jesus? It’s as if my sense of self can’t survive the most trivial gesture of conformity.
I am similarly perplexed by “reaction videos,” which purport to capture someone’s authentic response to some interaction or piece of content. Why am I supposed to care about that? What am I missing that everyone who enjoys this kind of content apparently takes for granted? After all, as William Davies notes in this paper about reaction media, “the reactions which are sought and shared online are eminently predictable and unsurprising. The child filmed unwrapping a present gets happy and excited when they see it; the woman interrupted in the street with a bunch of flowers looks confused and slightly creeped out; the teenager watching Silence of the Lambs for the first time looks scared.”
What is compelling about reaction videos, then, must not be the singularity or novelty of the reactions themselves but the way they seem to confirm that everyone is “wired” in the same way to have the same feelings and same processes for expressing them. More generally, they confirm that everyone can be understood to be “wired” at all, that we consist of emotional circuitry rather than emotional reasoning, and so we are not ultimately responsible for our feelings and can just enjoy them instead.
Davies suggests that reaction videos are popular because “the familiarity and predictability of these responses are a grounds for empathy and feelings of common humanity, grounded in something that is deemed ‘authentic’ (as opposed to ‘fake’).” That is, they establish an easily digestible standard of authenticity: What is “authentic” is how someone reacts to something without being able to situate it culturally or historically, or to develop a more thorough or nuanced appreciation of it through deliberation. What is “inauthentic” is using consciousness to sustain a critical distance and develop a particular response with a particular aim potentially in mind.
In reaction videos, which “artificially engineer” situations “with a view to capturing the reaction as content,” Davies argues, “behavioural impulse is being deliberately provoked as a means of valuation.” The videos reinforce the idea that an immediate reaction should be taken as the realest and purest response, and that only things capable of eliciting such responses should be seen as valid. As Davies puts it, valuation is “mutating from judgement (a critical capacity) to feedback (a cybernetic capacity),” which in turn authorizes a greater amount of surveillance (recording people’s externalized impulses without their knowledge) in order to generate and track value.
(This linking of reaction with valuation is reminiscent of Anna Kornbluh’s Immediacy or, the Style of Too Late Capitalism: “Immediacy is out there everywhere: the basis of economic value, the regulative ideal for behavior, the topos of politics, the spirit of the age.” In her view, the cultural emphasis on immersion, reaction, flow, etc. all reflects capitalism’s increasing dependence on accelerating circulation.)
Because immediate reactions don’t depend on anything about the particular human subject experiencing them, one person’s immediate response can stand for everyone’s:
Reaction videos take critical reaction, and aestheticize and somatize it, such that it becomes a spectacle. Thanks particularly to platform infrastructures, that allow for dramaturgical divisions between ‘stage’ and ‘audience’ to be suspended, these videos produce what might be termed vicarious valuation practices, in which a particularly expressive or extrovert individual comes to emote, judge and react on behalf of others, who watch this performance in the manner of a more conventionally passive ‘audience’.
This passivity can be construed as the reward for evacuating the subject position, for abandoning consciousness as an end in itself. I smile when you smile, “mirror neurons” rule everything around me, and a warm sense of belonging envelops TikTok feeds everywhere.
Nonetheless, I still feel like there is something reactionary in my reaction to reactiveness. I want to believe that I am immune, or even better, that I can, in a unilateral act of the will, elevate myself over the cultural climate of immediacy and take a deeper satisfaction in the righteous practice of critique. “Where the ideals of critique and judgement assume a liberal subject, capable of critical distance,” Davies writes, “the ideal of reaction assumes an embodied cyborg (or what Deleuze terms a ‘dividual’), which communicates impulsively and non-verbally.” Must these be opposed as alternatives? Or is there some way to retain critical distance without disavowing our “cyborg” dependencies, let alone our necessary entanglement with other people?
Re: reaction videos, I think it's worth noting that they're an artificial shared experience -- you can watch something alone, or you can watch it with a goofy little guy who's REALLY excited to see it. Or you can rewatch an old favorite movie, replay an old favorite videogame, etc but with someone who's never experienced it before, and vicariously re-experience it for the first time through their corny reactions. It's a watered-down, one-way empathetic connection that any lonely kid with a cell phone can access for free.
I think you hit the nail on the head when you describe our algorithmic identities as AI generated images. The true risk from AI is individual. A fake song in some singer's style isn't going to change the world of music. It fails to interest me, except very occasionally as a brief curiosity, because I know it lacks individuality. What will change that is a song created for me, because no song to date, no matter how much I love it, was created just for me, using knowledge of all of my preferences. When AI is able to target us as individuals in this way it will change everything, become truly addictive.
Also, I cited some of your very perceptive music writing in this article a while back, hope you like it.
https://georgedhenderson.substack.com/p/if-i-had-a-million-hearts-to-give