Art critic Julian Stallabrass has an essay in the most recent New Left Review about machine-generated images and their relation to conventional photography. He begins by pointing out that the distinction is somewhat arbitrary, given how much software processing takes place in the background of the most immediate-seeming phone images:
‘Photographs’ taken by phone cameras are already extensively governed by ai processes, of course. The user’s choice of when to press the shutter marks only a mid-point in a burst of images, taken before and after, that are melded to make the resulting ‘photograph’, using HDR effects to increase tonal range and resolution, and to decrease ‘noise’, or lower entropy.
These automatic processes constitute a background ideology of image-making that we consent to and absorb when we unreflectively use phone cameras — they impose a certain idea of what a “photo” should be. The “photo” in digital imaging is not simply a recording of light on photosensitive film. “The raw images produced by the tiny sensors and (mostly) plastic lenses of phone cameras are processed by algorithms that recognize the generic subject — person, landscape, food — and tailor the images accordingly, adding sharpness, emulating differential focus, smoothing surfaces and increasing color saturation,” Stallabrass writes. The software imposes certain ideas about what reality is supposed to look like, what consumers want it to look like. It generates an idealized version of what the camera’s sensors have captured. (This, of course, is not so different from what generative image makers do, only they may draw on lexical descriptions rather than an array of data about local light sources.)
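The burst-merge-and-enhance pipeline Stallabrass describes can be sketched in a few lines. This is a toy illustration under simplified assumptions, not any phone maker's actual algorithm: a synthetic scene stands in for the sensor's view, averaging a burst of frames stands in for multi-frame noise reduction, and a gamma curve stands in for the tonal-range manipulation of HDR processing.

```python
# Illustrative sketch only (not a real camera pipeline): averaging a burst
# of noisy frames reduces noise, and a tone curve reshapes the tonal range.
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "scene": a smooth luminance gradient in [0, 1].
scene = np.tile(np.linspace(0.05, 0.95, 64), (64, 1))

# A burst of 8 frames, each corrupted by simulated sensor noise.
burst = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(8)]

# Merging: the mean of N frames cuts noise by roughly sqrt(N).
merged = np.mean(burst, axis=0)

single_frame_error = np.abs(burst[0] - scene).mean()
merged_error = np.abs(merged - scene).mean()

# A crude stand-in for HDR tone mapping: gamma compression lifts shadows,
# imposing an "idea of what a photo should be" on the raw values.
tone_mapped = np.clip(merged, 0.0, 1.0) ** (1 / 2.2)

print(merged_error < single_frame_error)  # the merged frame is less noisy
```

The point of the sketch is only that the output "photo" is already a statistical construction from many captures plus an aesthetic transform, before any user editing takes place.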
Once you detach the word “photo” from the limits imposed by the processes of film photography, it becomes open to a wider range of uses and implications. This recent piece by the Verge’s Nilay Patel outlines the different ways phone manufacturers envision “photos.” Samsung vice president Patrick Chomet argues that because of how the camera software works, “There is no real picture, full stop.” This is my view also: that all pictures, all photos, all forms of mediation are not copies of reality but additions to it. (The representations can’t reproduce reality without also supplementing it.) This is not “pure nihilism,” as Patel claims, but points toward an acknowledgment that reality is ultimately formed through consensus and not merely imposed by technology. Not that Samsung cares about that consensus; Chomet also claimed that “making” an image and “capturing the moment” are different intentions, and Samsung products can do both.
Google, in Patel’s roundup, encourages users to see photos as memories, with the company giving them tools to make visual representations that match their feelings: “create the moment that is the way you remember it, that’s authentic to your memory.” As I detailed here, I think it’s incoherent to treat memory proleptically and try to constrain it in advance; documents from the past at best evoke memories, but they don’t contain them and can also inhibit them. Memory is the effort of remembering, not some content, some restored quantum of information from the past. But this at least acknowledges that photos are constructs and promulgates the idea that digital images should be understood as rhetorical.
Apple, the third example in Patel’s piece, pushes the opposite ideology, calling a digital photo “a personal celebration of something that really, actually happened.” They may as well have added a few more reallys and actuallys for emphasis. Recognizing that the ease of “making an image” means that “found images” become more valuable, Apple’s sales pitch here seems to be that its devices are truth-makers, producing photos that are somehow more real than those from other devices — maybe we are supposed to think their cameras have higher resolution or something. But as Patel points out, Apple also includes image editing tools that allow users to alter “reality,” so it is hard to put much credence in this. Nonetheless, Apple seems to commit to a fairly restrictive ontology: It will provide tools, platforms, systems that can appear to guarantee the fidelity of a document to what it represents. In other words, it expects its users to entrust the company to decide what reality really, actually is in order for them to “personally celebrate” it. If you don’t have an iPhone, nothing will have ever happened to you.
Each company defines image-making in ways that suit their marketing strategies, so it’s probably simplest to understand the definition of “photo” as whatever a company believes will help sell more phones. But there is probably no credibility left in absolutist definitions that link photography to documentary reality or indexicality. In the context of these various definitions of a photo, Nathan Jurgenson forwarded me this Wikipedia link about a 20th century art collective called Group f/64, who were rigorously against “pictorialism” (manipulating photographic captures with some aesthetic intention in mind) and whose proto-Dogme manifesto defined “pure photography” as “possessing no qualities of technique, composition or idea, derivative of any other art form.” That sounds less like an autonomous fine art and more like surveillance footage.
This way of conceiving “purity” is not only dubious as an ideal (“depth of focus is immoral!”), but it seems entirely untenable, a fantasy of achieving some kind of godlike pure perspective beyond subjectivity (which is not unlike the non-viewpoint that generative models could be conceived as offering). There are no pure documents. Yet this collapse of representation into things in themselves, of subjectivity into objecthood, is implicit in every concerned article fretting about the fate of photographic evidence in the age of AI, like this one by Benj Edwards about “deep doubt.”
Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.
That is, lying when you ought to be truthing remains a social problem, and the existence of AI tools provides new ways to lie. But “real media” has always been something of an oxymoron, and it shouldn’t be held up as an epistemic model that we should work to restore. That effort leads only to infeasible Ministry of Truth–style projects like C2PA (described here), which seeks to replace the irreducibly necessary work of constantly re-establishing the terms of consensus reality with industry-imposed unilateral solutions like watermarks, “nutritional labels,” and “provenance technologies” that would supposedly remove the politics from the idea of validation. These truth initiatives treat validation as something that doesn’t need to be negotiated but can be imposed by a conglomerate of device makers, as though they could be entrusted to place society’s best interests ahead of selling more phones.
Edwards links “deep doubt” to “dead internet theory”:
Deep doubt could erode social trust on a massive, Internet-wide scale. This erosion is already manifesting in online communities through phenomena like the growing conspiracy theory called "dead Internet theory," which posits that the Internet now mostly consists of algorithmically generated content and bots that pretend to interact with it. The ease and scale with which AI models can now generate convincing fake content is reshaping our entire digital landscape, affecting billions of users and countless online interactions.
It seems like a leap to go from “convincing fake content affects users and interactions” to “the internet is mostly fake.” Every kind of media affects people and “reshapes the landscape.” The tenor here almost seems to imply that when people are affected by something, it is fake, whereas “real media” would let people obey the official truth-meting authorities without any troubling self-reflection. (It also seems questionable to label as a conspiracy theory the idea that the “internet” is largely driven by algorithmic feeds.)
Both “dead internet theory” and “deep doubt” seem better understood less as epistemological claims and more like metaphors for how the means of circulation have changed faster than the norms governing them. There is no stable relation between the degree of distribution of information and its veracity; no relation between the ubiquity of an idea and its vetting through some comprehensible social process. On Bluesky, Jacob Bacharach noted that “It's fascinating how the dismantling of a genuine mass media and the recreation of premodern purely person-to-person information networks via social media has transformed Americans back into medieval villagers in the span of about a decade.” The “dead internet theory” is a funhouse-mirror version of the death of mass media, referring in a veiled way to the depopulation of 20th century media businesses, not the internet itself.
Max Read also brings up “dead internet theory” in this piece about “AI slop,” which recaps and extends some of the reporting on how platforms incentivize its production.
Beneath the strange and alienating flood of machine-generated content slop, behind the nonhuman fable of dead-internet theory, is something resolutely, distinctly human: a thriving, global gray-market economy of spammers and entrepreneurs, searching out and selling get-rich-quick schemes and arbitrage opportunities, supercharged by generative AI.
This is a helpful reminder that humans make AI slop; it does not spontaneously generate itself. Its production is driven by ordinary economic incentives and not some manifest destiny of machines by which they will inevitably grasp the totality and index all possible realities within their vast numeric matrices. If the internet is dead, it’s because capital is a vampire, not because machines have taken over.
“Dead internet theory” sometimes stands in for cultural stultification, a uniform conformity, a universe of discourse from which all human imagination and capacity for novelty has been subtracted. In his essay on generative images, Stallabrass describes Vilém Flusser’s idea of photography as a quasi-autonomous apparatus intent on “filling in the latent blanks” of its “extensive field” and suggests that “AI imagery might seem to be a realization of Flusser’s view.”
Flusser also believed that society needed to be saved from photography’s looming threat of imposing “eternal, endless boredom” on the world by a group of “envisioners,” described by Stallabrass as “a quasi-Nietzschean elite” who would rescue society from the “panoply of standardized photography.” Stallabrass writes:
In Flusser’s emerging future, as seen from the 1980s, if the ‘envisioners’ are allowed their way, people across the world will be hooked into a rich, unifying, dialogic cultural feed, so absorbing that they will allow their bodies to become etiolated as they experience a gigantic and continuous mental orgasm.
Where are today’s envisioners, who would save us from bot monotony and provide us with the infinite jest? Obviously they are not the “international sloppers” whose practices Read details. It’s probably not artists, humanities professors, and cultural critics, who by and large no longer have any professional purchase on the world and will soon become figments of nostalgia.
Tech companies apparently assume that generative AI is not a force for universal darkness but is, in fact, the ultimate envisioner. How else to explain Meta’s intention to cut out the slopper mediators and push AI-generated content directly into user feeds? As Jürgen Geuter points out here, this abolishes the idea of “social media” — that content feeds should connect you with friends and reveal something about what they are doing or thinking:
No longer are posts about a thing someone wants to say (even if just to sell you something) but just there to keep you occupied. It’s not about giving you opportunities to engage with other people and their experiences and thinking (“bring the world closer together”) it’s about making you a passive consumer of slop that ads can be put next to.
Isn’t that the “gigantic and continuous mental orgasm” that media have always promised? Not some worldwide rave in which everyone can participate but a hermetic and perfectly passive solipsism in which the desire of the other no longer seems to exist. Generative AI can perfect the soul-disclosing effects of recommendation algorithms by dispensing with correlative content and directly producing a world “imagined for you” without your having to have any imagination. Meta even promises to insert your likeness into the content it makes for you.
Stallabrass argues that the tech “monopolists are selling us a terminal sense of déjà vu,” and that to fight them we must stop ourselves from “fantasizing into existence a patrician class of cultural saviors.” But the monopolists already see themselves as the cultural saviors, and they may have already accrued enough power to make that lie seem true. It’s only a matter of time before the C2PA will verify it.