In the wake of generative AI, the camera maker Polaroid is hoping to re-establish itself as a producer of realness. According to this Fast Company promo piece, the company is pivoting from “a brand for creatives, where the Polaroid camera was to be seen as a tool for self-expression and creativity,” to being “the go-to camera for people who want to take meaningful photographs.”
Both of those statements seem like they could indicate the same thing or nothing — “we are shifting from being creative to being meaningful.” But I suppose the point is to cash in on the changing perception of what makes a photo “meaningful.” Before, a photo was meaningful because it captured a particular kind of content: It was a picture of someone, or some event, or some place that meant something. But supposedly since AI can make a more or less convincing image of anything (and since digital photography has long since lowered the bar for what is worth photographing), a photo is now meaningful because it contains something that hasn’t been digitally synthesized. The content has become subordinate to the method. Any Polaroid is mostly about the film stock.
“The value of a Polaroid camera is authenticity,” the company’s chief marketing officer claims, and it goes without saying that this authenticity refers less to what is in an image (people posing, typically) than to how the image was made. Taking a “real” (nondigital) photo of something authenticates it not so much because it proves something happened at a specific moment in time, like a Polaroid of a hostage holding today’s newspaper, but because it gives a specific image an aura of singularity. Many images of a moment may have been captured, but only one person can own the Polaroid of it, making what it specifically captured distinct and “meaningful” as property, as an alienated thing.
The gimmick of an instantly printed photo used to cater to convenience and impatience — you didn’t have to wait for the Fotomat to develop the film to have a memento; you could start consuming the documents of an experience as it was happening and engage with the novelty of how that changed things. The whole world becomes your photo booth. Of course, digital photography made that mode of experience entirely commonplace. Cameraphones made images into a form of conversation rather than strictly a means of commemoration. As a result, certain kinds of events came to be sanctified by disallowing photography, and others were totally reshaped by the expectations that all participants would constantly be taking pictures. In that context, film cameras of any sort started to evoke material scarcity and the value that is supposed to confer. Anyone using a film camera can now feel like an artisan, making something relatively handcrafted and vulnerable, invested with the lost arts of a fading tradition. The ritual elements of the practice are foregrounded to the point that they seem to be for their own sake — taking pictures with a film camera to experience the discipline of a less forgiving technique.
Contrasting photography with generative AI (rather than digital photography) brings forward different concerns. The stakes are not what it is like to still work with the limitations of film, but what it is like to work within the constraints of reality itself as a medium. With Polaroid, the idea of “reality” itself is reframed as a gimmick, just as it was for BeReal. You may live an entirely phony and artificial life most of the time, but thanks to this technological intervention, you can reach out and make contact with reality for just a moment, in an unadorned image — “an opportunity to interact with something real in a world where we are increasingly losing grip on reality,” as the Fast Company piece puts it (because “we” all had such a great handle on reality before). If we fall for Polaroid’s marketing, film will now evoke reality’s passing into the museum of outdated media. Reality will be a niche taste, an artisanal craft practice.
But that is not anything new — that is just the upshot of all authenticity-based marketing: Access to reality is exclusive, and these products or these experiences can provide it. Generative AI, in what may be its most far-reaching commercial use, will make that sales pitch clearer by casting doubt on all forms of media and revaluing different means of signifying presence, signifying “reality.” Generative AI will be unevenly imposed on society, and the burden of doubt won’t be equally shared. Class divides will be sharpened, articulated as usual in terms of whose experience is accepted as real and whose isn’t. Some people will have their experiences cast into suspicion, potentially invalidated; other people will have their experiences feel more real.
Polaroid wants to sell us the illusion of access to an unmediated reality in a tangible piece of media that resists circulation. The Polaroid is really there. It’s no longer primarily an image of something else, but a piece of reality itself. But this underscores the degree to which reality is ordinarily understood to be located in images, how images are not of reality but are reality. To that habit of mind, reality is not unfolding experience, lived bodily, in a consciousness; instead something becomes real only when it’s documented. The “authentic” is not typically the ineffable stuff that resists documentation — the aura of a Polaroid itself rather than the data it contains — but the stuff that can be pinned down as objective data. The “real images” vs. “fake images” framing invites us to forget the intrinsic “fakeness” of all images or, what amounts to the same thing, to believe that we used to have some privileged access to truth and reality before we went image crazy as a society, and that a special few of us could have it again.
Polaroid wants to offer a more authentic authenticity, whatever could be conceived as the opposite of a deepfake — a deepreal that draws on the classic tropes of authenticity, only further inflected by the threat of generative AI. Hence the familiar attempt to ground authenticity in spontaneity, as in a recent Polaroid ad campaign described in the article:
To coincide with the launch of its most high-tech, most expensive camera to date, Polaroid launched “The Imperfectionists.” For this campaign, they chose three photographers who celebrate randomness in their work. One of them, Coco Capitán, shot scenes from a sailboat, including a knotted rope, and wrote, “I am not interested in perfection. Chaos. Spontaneity. Randomness. That’s where reality exists.”
Reality exists in images of randomness. If Polaroids used to give you immediate access to a deliberately staged memorialization, now they give you material access to a reified immediacy.
This kind of rhetoric about authenticity as spontaneity, as something inherently unplanned, has long served as a reaction to the pressures of neoliberal subjectivity (if not capitalist social relations in general): Rational calculation, conscious planning, strategizing, self-promoting are all “unreal,” even though that characterizes much of our conscious experience and is what economic survival requires of everyone. Reality itself is made into a gimmick — that is, a commodity — that we consume as a product or as a service mediated to us through a technological process.
At the same time, you are at your most real when you are simply responding to stimuli, when your behavior is a reflex and not something willed, when you are “in the moment.” You can’t access your own authenticity directly. But any technology that inhibits or prevents self-reflection will make us more authentic. It would allow us to relax into the posture of consuming, through which we come to understand who we really are, not what we are forced to try to be.
The problem with generative AI, from this perspective, is that it can be understood as inviting self-reflection: It asks you what you want, so it forces you into inauthenticity, makes you calculating and unspontaneous. It intensifies the pressure to want the right things, to present oneself better, to produce more competitive media with the model’s assistance. By putting more tools in people’s hands to deliberately make strategic images, AI models sharpen Polaroid’s opportunity to sell its commodified version of “reality” as spontaneous and unreflective.
As more and more AI content is circulated — a process which can itself be increasingly automated — it becomes more and more plausible to represent “reality” as scarce, as rare — as even more remote from ordinary experience and even more a thing you need to purchase special technology to access. The Polaroid camera becomes a virtual-reality helmet in reverse.
***
To make reality seem more scarce and more valuable, generative AI can be put to work on multiple fronts. It can be used in different ways to make people “inauthentic.” It can tempt us to prompt for “unreality,” luring us into a fantasy of autonomy that impedes our spontaneity and traps us in a sort of mirror. Or it can be used to overwhelm our screens, our current access points to reality, with content whose “reality” has been rendered unstable.
In this recent post, Ryan Broderick worries that “AI, by design, is fundamentally not something we experience in collectively public [ways],” because it can intervene at an infrastructural level and reconfigure what appears on our screens. When generative AI is not something we prompt but something programmed to shape what we see without our being able to control it, AI can begin to make our lives and memories unfold like an algorithmic feed, populated with material generated specifically for you, automating away the pseudo-autonomy of prompting without restoring some sort of compensatory access to reality. If the screen could ever be conceived as a shared public space, AI makes that idea harder to believe in. The space for the “reality” of collective experience is narrowed; more technological forces are working against it and pushing people into bubbles, and so on.
Broderick seems especially worked up that AI can edit personal photos into events that never occurred.
In the demo, a user presses a button to make their unsmiling family look happier in a photo. It’s horrifying in its mundanity. An AI for altering the records of our lives. Something you can quickly do and forget about. Which makes you wonder what happens when we can’t remember if we used the AI on our photos or not? What if a future upgrade tweaks our photos and we don’t even know?
It’s not clear to me why this is supposed to be “horrifying.” Why should using AI on photos be considered more distortive than using images to represent reality in the first place? Why should we be required to document every experience in exactly the same way, or to remember it in a format that can be externally authenticated? (Google can’t make your Polaroid smile, so presumably the Polaroid would then stand as an irrefutable record — as if taking an image has ever captured reality just as it is.) Why should we treat images as “records of our lives” rather than potentially something else, as whims or jokes or wish-fulfillments? Why would anyone assume that looking at an image is enough to make us remember precisely what it captured? Is AI “horrifying” because it reminds us yet again that what we already take to be real is a brittle construct, that human memory is not computer memory, that the social relations necessary to sustain meanings across time can’t be automated?
According to Broderick, AI is “not making new experiences, just codifying past ones.” So far, so good. Generative AI depends on experiences being turned into data; the models don’t have or make experiences, they synthesize media that people can then experience and re-experience and remember and forget.
He continues, “And as it gets better, it will become harder to know who’s using it and for what. But it’s still out there, still evolving, still optimizing itself.” Here, AI is no longer seen as a tool in the hands of capricious users to make inauthentic choices; it is correctly assessed as also a weapon that will be used against us. The rhetoric of “evolving,” however, verges on attributing a sneakiness to the algorithms themselves rather than the institutions that deploy them. Automation doesn’t need to “get better” to be foisted on us in a hidden or deceptive way; it’s certainly not as though most forms of automated decision making are currently explicit and well-labeled. And hegemony doesn’t necessarily require special technological capacities or flow directly from them. Whether AI can be deployed against our will doesn’t depend only on how unobtrusive it is; it chiefly depends on what sorts of power the states and companies implementing it have over us. It’s mainly political, not technical.
Broderick then concludes, “If we don’t keep paying attention, [AI] will worm its way into everything and we will no longer be able to know what we’ve made and what it made, including our own memories.” It will be as if some insidious force or “apparatus” would be standing between us and the real, imposing a representation of an imaginary relationship to the real conditions of our existence. It will call out to us, and we will feel hailed by it. It will constitute us as subjects even as we feel our autonomy to be self-evidently true. Could it be that AI is ideological?
Again, the implication is that AI itself “worms” its way into things of its own accord, for its own purposes, as if it has intentions, and its effects are presented as novel, as if until now we have had full control over what we make (no alienated labor here!) and how we remember (what unconscious?). Since the power relations that automation is being deployed to articulate are screened out of his take, it just sounds like more ungrounded paranoia about how “we” all may be “losing our grip on reality.”
It seems more worrisome to me that AI will induce the opposite condition: It may give us too much of an apparent grip on reality, to the point where we feel comfortable trying to unilaterally shape it for ourselves. That is, we will accept the idea that reality is a matter of images rather than experiences, and we will seize on AI tools as a means of making the reality we want for ourselves, without requiring the consent or participation of other people. The way that technology reproduces asymmetric power relations and entrenches inequality will be left undisturbed, set aside in an individualist fantasy of agency and AI-powered creative fecundity. “How many hours would fans spend talking to a digital version of Taylor Swift this year, if they could?” Casey Newton asks in a piece about “the synthetic social network.” “How much would they pay for the privilege?”
I’m not sure that many people want that fantasy. It seems necessarily isolating, a depressing last resort for those who, for whatever reason, can’t have shared experiences but want a simulation of them. My impulse to pity makes me suspect that I project that solipsistic desire onto others to feel better about my own solipsism, my own delusions of potency. I imagine others being AI’s dupes so I can feel like my own life is a bit more real and grounded.
But then again, the fact that you could comfortably replace “AI” with “bourgeois modernity” in many of these sorts of critiques gives me pause too. In the 1936 essay “The Storyteller,” Walter Benjamin is already claiming that “the art of storytelling is coming to an end” and that humanity is losing “the ability to exchange experiences.” This, he claims, is because “experience has fallen in value” — they didn’t have Polaroids then to reinvigorate it — and “every glance at a newspaper demonstrates that it has reached a new low.” This is because print media obviate “stories” (which convey the wisdom of a people and its adequacy to the world they live in) and instead disseminate “information” (a bevy of facts that undermines our sense that we understand how the world works).
Where once there was social solidarity and wholeness and epic storytelling that flowed from it and reinforced it, in modernity there is the fractured surfeit of print culture and its circulation of dissociated facts. “Boredom is the dream bird that hatches the egg of experience,” Benjamin claims, but modern life is too full of distractions and “shock” to allow anyone to become properly bored enough to the point where they can tolerate the storyteller.
In his book on Adorno, Martin Jay has a useful summary of this argument and the distinction underlying it, between two German words for experience:
In a now celebrated distinction, Benjamin had divided experience into Erfahrung, the integration of events into the memory of collective and personal traditions, and Erlebnis, the isolation of events from any such meaningful context, communal or individual. Exemplified by the erosion of the storyteller’s ability to weave a coherent tale because of the replacement of narrative by disconnected information in our daily lives, Erfahrung had been steadily supplanted by the meaningless incoherence of Erlebnis in the culturally impoverished world of late capitalism.
My first impulse is to think of generative AI models as Erlebnis producers: They depend on data that have had all their “meaningful context” stripped from them to produce an endless supply of “meaningless incoherence” that is all the more corrosive for its surface ability to read as cogent. Yet Erlebnis, as Jay notes, was used by the early 20th century Lebensphilosophie movement to denote the sort of experience that supposedly came across as somehow immediate, spontaneous — “authentic” because it was not reduced to rational calculation. In an essay about Baudelaire, Benjamin describes Lebensphilosophie “as a series of attempts to lay hold of the ‘true’ experience as opposed to the kind that manifests itself in the standardized, denatured life of the civilized masses,” and points out that these efforts eventually culminated in some of them making “common cause with Fascism.”
It’s just as easy to conclude that Erlebnis is what Polaroid wants to help us access, whereas generative AI’s constructions would be more like Erfahrung, involving our deliberate, schematizing engagement. Similarly, one could see LLMs not as the elimination of collective and personal traditions but a technological totalization of them, which would mean that the models were perfected storytellers. Rather than disconnected information, they present a kind of statistically sanctioned wisdom.
But this sits poorly with Benjamin’s characterization of storytelling as an “artisan form of communication” that “does not aim to convey the pure essence of the thing, like information or a report.” Instead it filters the thing through the life of the storyteller so that the storyteller’s “traces … cling to the story the way the handprints of the potter cling to the clay vessel.” An LLM’s unlimited capability to “integrate” and forcibly harmonize into commensurate data the irreconcilable experiences of different people, different traditions seems less a realization of stories than their negation. If a machine can make a “story” out of anything, it can’t make a meaningful story at all.
It seems that AI can’t be clearly assigned to one side of these binaries, that the framings of AI vs. reality, or AI vs. authenticity, or AI vs. real memories invite us to fall into the dubious positions of Lebensphilosophie, its dehistoricized evocation of pure kinds of experience that superior people can opt for, removing themselves from the “masses.” The specter of fake realities may be used to try to compel people to be “real.” In practice this will mean circumscribing their range of potential experience, overwriting their memories with official interpretations, whether these are automatically generated by machines or not.
***
A few weeks ago, a friend texted an image of an old picture, apparently taken at a party we were at a long time ago. I had no memory of the picture, had no idea it existed, and I was surprised by how much it unsettled me. I couldn’t remember being at this party, yet there I was, right in front, wearing a baseball cap for a team I don’t root for, holding some unknown drink in my hand, standing at the fringe of a group of people I mostly couldn’t identify. I didn’t think it was possible to forget so much.
On the text thread with a few other friends, we were able to come to a consensus about some of the details. It was probably taken in the winter of 1989, at what was possibly a going-away party. Some names were attached to some faces. Some theories were explored for why certain people might have been there. The facts were beginning to coalesce, and the shock of the photo’s existence as a rogue document faded a bit. A story began to be crafted, about this party marking the end of an era, or serving as a transitional moment from the high school context that was being left behind, something we couldn’t have recognized at the time and probably would have denied.
After a bunch of texts went back and forth, I wondered, with a bit of reactionary complacency, if this experience would make any sense to younger people, who may never know what it is like to see photos from their lives in which every face is not already recognized by machines. Our reconstruction of the context of the photo probably has little to do with “what really happened,” but then again “what really happened” probably was different for everyone in the photo, not to mention whoever took it. The photo became a pretense for us to build a shared speculative version today of what happened then; it restored some personal history to me but also changed it, in a sense falsified it. My memories of being that age had been vague and nonspecific, but usually I remembered myself as being miserable and alone, trapped in myself. Yet there were parties. Who knows what I felt then? Forgetting is not as dramatic as using editing technology to paste smiles over people’s faces, but it is more insidious.
“According to Proust,” Benjamin writes in the Baudelaire essay, “it is a matter of chance whether an individual forms an image of himself, whether he can take hold of his experience.” Seeing the old photo my friend texted, I wondered whether I’ve ever had such an image, whether I had a hold on any experience at all, be it Erlebnis or Erfahrung. I remembered being impressed as a teenager with the first line from Dickens’s David Copperfield: “Whether I shall turn out to be the hero of my own life, or whether that station will be held by anybody else, these pages must show.” If I wrote everything down, if I tried to remember everything, would I push myself to the side in my own story?
For Benjamin, the difficulty of understanding one’s own life as something a storyteller could narrate, something that resonates with communal meaning, where “contents of the individual past combine with the material of the collective past” was such that Proust thought it could happen only if one were lucky enough to find the madeleines in one’s life. But if everything in the individual past is datafied and archived, we could set an algorithm to the task, like the memory features embedded in photo apps and social media platforms. I expect that generative models will be able to simulate memories far better than any future experiences, since all the data will already be there. A model’s statistical processes of codifying and unifying and synthesizing all the information about our lives are no more or less mysterious than how our organic memory works, and it could even filter it through other people’s data, simulating a social process, even though it would always keep me at the center of my life, never fail to give me an image of myself. A model will be able to invent answers to fill in any gaps, until we might wish we could remember what we used to forget.
I suggest reading The Years (2008), by the Nobel Prize-winning author Annie Ernaux, as an attempt to understand one’s own life as something a storyteller could narrate, something that resonates with communal meaning.