As images of the recent aurora began flooding onto social media sites, Meta posted some fakes on its Twitter clone Threads, demonstrating that those who “missed the northern lights IRL” could create their own dramatic aurora images with generative AI. I saw screencaps of the post on Bluesky, where people were recirculating it to laugh at and moralize over it. Some commenters were ready to take Meta at face value and wondered at the company’s cluelessness: How could anyone think that people want to use AI to fake something that is entirely about really being there to witness it? It would be like posting a generated sunrise photo while you slept in.
At first I assumed Meta’s aim was simply to troll users into promoting Threads and its generative models across multiple platforms. Since the idea of tricking people (and perhaps oneself) with generated images is so inherently pathetic, it would have to be promoted in this backhanded sort of way: You’d never do this (wink-wink). Meta’s post lets people screen-cap it and advertise how they are better than that, even while participating in the same contrived, FOMO-driven, metricized attention economy.
Meta could be seen as taking a triumphalist posture here, as if the company knows that generative AI can’t be mocked out of existence and has instead already established itself as one of the inescapable nuisances of networked social life. As the post would have it, this is not because tech companies are forcing it on us; it’s because the technologies lend themselves so readily to new attention-seeking ploys. The post suggests that generative AI is downstream from irrepressible human vanity, not insatiable corporate greed. Meta doesn’t need to promise that AI will bring unimagined possibilities to life because the technology has already apparently proved its value as a cheaper way to make whatever’s already popular.
But perhaps that reading is too hasty. Even if the post is a joke or a flex, it still contains ideological notions about the nature of experience that may be gaining traction under the cover of disavowal. It feels too easy to believe that no one could be so craven as to want to use AI in the way that Meta is proposing; perhaps there is something defensive and desperate in that dismissal.
As Nathan Jurgenson pointed out when he mentioned Meta’s post to me, “there’s an underlying philosophy here that you should have and could have taken every photo you see.” Everyone should be allowed to participate in any trend, regardless of whether they are really present, because the logic of social platforms is that everyone is always present everywhere.
With generative AI, we can abolish FOMO. No one is obliged simply to be an audience but can find their own technologically abetted way to perform the trend and include themselves on their own terms, as with any other viral challenge. Every shared piece of media is an invitation for mimesis by any means necessary. The generated image is not a falsification from this perspective but a visual metaphor for a deeper truth of interactivity, that everyone can participate in anybody else’s personal experience through a kind of performative mimicry.
To put that another way, no one is entitled to a unique experience, or at least to a unique representation of it. The photo that one takes of some event doesn’t belong to the picture taker; it belongs to the “field of what is representable,” which once was the shared possession of an entire culture but is becoming the property of AI companies, who are trying to turn that field into a closed, proprietary model. Then, every image you take of “your” experience will be quickly demonstrated to be derived from an existing idea already latent in the total model of all possible experiences. The AI aurorae will be primary and the random individuals’ pictures of them meaninglessly contingent, shadows on the cave wall.
One can imagine Meta believing that “experiences are worthy to the degree others are generating them with AI,” as Nathan put it. If no one is trying to generate it, does anyone really care about it? The generated images would be canceling out the rarity that makes some events seem significant and photographable to begin with, which seems counterproductive and unsustainable. (But then again, AI in general is counterproductive and unsustainable.) By making any documentary image into something anyone can simulate, generative AI saps the will to photograph, rather than generate, anything at all. Why photograph a sunset? Instagram has seen too many sunsets already, so shouldn’t Meta just generate the last sunset for everyone to post whenever they have sunset vibes? Couldn’t that save us time and effort so that we can concentrate on photographing something more original?
The commenters mocking Meta thought the company was missing the point of taking and sharing images. Of course, people don’t take pictures of the sky because they think no one else has seen it before. Rather they want to capture a moment where they saw something specific — that is, they want an image to share of their own subjectivity in objective form: I saw this. I noticed something interesting. I captured a “decisive moment,” à la Henri Cartier-Bresson. The point of sharing it is in large part to share the singular irreplaceability of a specific person’s point of view, their unduplicable presence in the world.
The Meta post assumes that subjectivity is unimportant, boring in other people. What’s important is the content of the image: someone saw this. Here is another interesting thing for your feed, which is already full not of decisive moments but of interesting things selected for you that come from whomever. What Meta recognized in the aurora images was not a burst of individuals sharing their unique space in time but a bunch of users all posting the same sort of image, a pattern of behavior that could be easily copied. “The northern lights images were so same-y and predictable that Meta correctly identified it as NPC posting,” Nathan suggested. When Meta posted its generated images, “AI wasn’t failing at providing a bad version of witnessing but succeeding at identifying human-made slop.” Why not use a slopmaster general to add to the slop pile, if you are already going to be one of the faceless millions who posts that way anyway? If a generative model makes you efficient enough, you might steal some of the traffic that the algorithm would have directed to any of the interchangeable others, now that the intrinsically limited pool of friends and family content has been massively diluted with any content that proved capable of distracting anyone.
Meta’s post speaks to an emergent way of seeing, typified by the sense that “a model could have generated this” whether it had or not. It projects a subjectlessness onto a scene, subtracts the specific intentionality from any point of view, and sees the average, the predictable, the over-seen, exalted into a kind of glossy, mediocre sublime. No one is missing out on anything. Behind everything distinctive is an ordinary pattern if you scale up enough. And that is the scale at which tech companies want us to situate our consuming selves, where we plug into a feed not to “connect with people we know” — a local and financially inconsequential level that can only sustain so much time on device, and which is readily revealed as an inadequate substitute for better and more secure ways of keeping in touch — but to connect with the machine and to learn how to see the way it sees and enjoy its endless bounty.
It makes sense for Meta to denigrate subjectivity in favor of content, because the models it owns can create content endlessly but can never produce a single instance of subjectivity. (“Granted, the tapes themselves — you own them, okay. But the magic that is on those tapes, that fucking heart and soul that we put onto those tapes, that is ours and you don't own that.”) And it makes sense to market the primacy of content as something that levels the playing field: You don’t have to be interesting to post because Meta can give you tools that will make interesting stuff for you. Thanks to generative AI, no one will be silenced. Although, actually, no one will need to speak. The machines will be doing all the talking, so much talking, and all the human voices will be drowned.
There’s also the fact that the photos themselves, for the most part, aren’t what people are seeing with the naked eye but are made possible through long exposures and other camera tech. If you can only see the aurora through your phone screen, using gen AI is a logical next step…