This week Google announced some new image-editing features for its Pixel phones. One is called “Add Me,” which allows the user to interpolate people or objects into a photo. The name struck me as making this kind of semi-automated Photoshop gimmick sound more desperate than necessary, as if you are begging your phone to make you seem popular. (Why not call it Add Anything?) “Add Me” makes me think of Nathan Fielder’s laughing tweet, or Steve Martin and Charles Grodin partying with cardboard cutouts in The Lonely Guy, or Rupert Pupkin’s talk show. The name invites users to think of making phony memories and almost seems intended to try to naturalize them in the face of general indifference. This Mashable PR piece wonders, “Have you ever been designated the group picture taker, but felt a tinge of disappointment that you, too, couldn't be a part of the group shot?” as though there were no such thing as photo timers. Apparently, according to this Wired report, it is designed to relieve users of the threat of having to trust a stranger with their phone.
“Add Me” seems to categorically deny the principle that you are already there, adding your subjective point of view and framing, when you take a picture. It posits that only what is literally in the frame counts, even as it provides the means to freely alter what’s there. This implies that it’s still worth falsifying what’s in an image, that someone, maybe yourself at some later time, will still be taken in by it. It pretends that there are higher stakes in editing images than there should be at this point, when anyone can do it without much thought or trouble and when image-capture software is already making all kinds of adjustments and alterations without our necessarily knowing or wanting it.
The proliferation of cameras and editing suites means that “images” and “memories” have less and less to do with each other: Memories should be understood not as a form of images but precisely as what can’t be imaged, what takes subjective shape in a consciousness and not in some media format. Memories are not documents; they can’t really be externalized. In Traveling, Ann Powers quotes Joni Mitchell, who explained her creative process circa Hejira as an effort to make something “very alert and very sensual and very unwritten.” Powers glosses this as an attempt to express inexpressibility: “To try to make unwritten songs is to avoid the crafted narratives of autobiography, to reach for what can’t be said, though it can still be shared.” Memory in general can be understood that way, as the “unwritten” that words (or images) can evoke but not themselves capture.
Google is trying to present the opposite message: All memories are only media images, and subjective participation is not necessary to them, even as quasi-objective presence can be introduced artificially. That is, you don’t need to use your mind for memories because memories are just media recordings that you can and should doctor to make them what you imagine you will want them to be in the future (as if memory and desire weren’t fundamentally at odds). In the Wired piece, one of Google’s product managers enthuses about the Pixel phone’s new capabilities because, he claims, “I see the memories people can’t capture because of technical limitations.” He is probably so habituated to company product lingo that he automatically uses “memories” to mean “images,” but this still sounds bizarre, as though he believes that the brain doesn’t construct memories but just records them, poorly, like some outmoded kind of videotape. To this kind of thinking, all memories must be written, and anything “unwritten” can’t actually be remembered. Memories are only media.
If one truly subscribes to the idea that memories are just data sets that can be compiled in a database — perhaps a frail, organic one inside the brain; perhaps a more robust external hard drive — then it probably makes sense to set those “memories” down in the most idealized way, since one has no faith in one’s own mind to magnify or sentimentalize anything later. But in all likelihood, the documents we produce now will block off ranges of experience over which future remembering would have operated; the more images we have, the less we will be able to remember, particularly if the images have been edited and refined to say something more succinctly.
I know I’ve made these kinds of claims about memory and images before, and I don’t mean to bore you, but I continue to find it perplexing on a personal level. I don’t like being photographed, I don’t take many photographs, and I have an instinctive tendency toward iconoclasm — static images of dynamic experience seem intrinsically false, like psychic traps. Perhaps because I am broken in some fundamental way, I have a hard time understanding the desire to pre-empt the future capacity to remember, to try to force a future self to remember things in some specific way — that is, I don’t know why people try to remember anything, because we have no choice but to remember; it’s an aspect of the mind’s metabolism. We are going to remember all sorts of things whether we want to or not, so why bother trying to steer or limit that process? Why constrict the way we will eventually and inevitably digest experience to some circumscribed set of pre-chosen images?
When tattoos became popular in the 1990s, I thought it was because they seemed like the antithesis of rising digitality — they were material and practically immutable, so they could come across as indelible documentation of some time and place, something that would compel one to remember an actual moment, an actual decision, in a way that couldn’t be softened or redrawn. People would say they got tattoos to commemorate something about the person they were, investing in them so that they couldn’t forget. I would always wonder why they didn’t trust themselves to continue to remember what was important to them. I couldn’t understand why people wanted to inscribe a kind of personal history on their skin, as though that made it a more reliable declaration, but I assumed it was because it felt like something more substantial than an image.
Digital editing would seem to make images even less like tattoos, but maybe that intuition is wrong. The underlying similarities may be more significant: Memorial images and tattoos both reflect a belief that objective, external images are more important than ephemeral, fluid memories. Editing an image is like designing a tattoo, committing to a certain idea at a certain moment and hoping that the representation of that idea will be so indelible that it will forestall any future reimaginings.
Another of the Google phone’s new features, interestingly enough, is called “Reimagine.” It invites users to interpolate machine-generated material into their captured images. This, according to the product manager, is so that you can make the image look more like you will want to remember it:
When you define a memory as that, there is a fallibility to it: You could have a true and perfect representation of a moment that felt completely fake and completely wrong. What some of these edits do is help you create the moment that is the way you remember it, that's authentic to your memory and to the greater context, but maybe isn't authentic to a particular millisecond.
Reimagine your experiences now, so that you can’t in the future.
Memory is here conceived as “fallible” rather than inevitable — as though there is a way to remember wrongly. But what we will remember will be what is important and necessary to us then — we will remember what we remember how we remember it because it will suit us then. It makes no sense to try to “fix” it now, as if there is some necessary fidelity to maintain. No memories are “real” or “authentic,” or rather every memory is self-authenticating, as an experience you are actually having, whether or not it replays experiences in some exact, high-definition way “to a particular millisecond.” (Memories are inherently incomplete, like any other representation, but they are also subject to psychic mechanisms of compression, metonymy, montage, compositing, and every other form of dreamwork.)
It’s galling enough that Google expects you to want to put AI slop in your “memories”; that’s like injecting the mind with styrofoam, like having ChatGPT generate your diary. But the product manager’s posture here, that this permits users to achieve some higher truth, invites them to betray their future selves. The sound principle of not being too hung up on documentary fidelity is warped into the misguided proposition that you can fully anticipate now what you’ll need to remember later.
The Reimagine feature allows a user to overwrite what is documented by a lens while still holding onto the pretense that “reality” was “captured” — the camera is there strictly as an alibi to make the generated “reimaginings” plausible, capable of seeming like images we will accept as “memories” in the future. The pseudo-objectivity of the lens-capture process launders the quasi-subjectivity of an image generated in response to your commands. But to what end? Who are we supposed to be trying to fool if not our future selves?
We don’t get to choose what we will remember or how we will remember it, or else we are no longer dealing with memories at all. The product manager must present his company’s false memory maker as solving a routine hassle, as making “remembering” more convenient, as if we don’t want to be bothered to remember to remember. “I do think we are going to make a lot of things a lot easier for folks, and that’s going to be fantastic,” he claims. “Then they can spend [time] doing other things, making more memories, instead of fighting to create the memories they thought they already had or didn’t match up to how they remembered it and now they’re frustrated and sad.”
It seems sadder to me to be convinced by a tech company that your memory doesn’t work right and your lived experience is of course inadequate until augmented by content machine-tooled to be average, normative, and probable; or that with the help of synthetic images you will set yourself up to remember whatever you want regardless of what happened to you. (It doesn’t seem surprising that there would be a point-and-shoot camera revival as digital images become easier to edit and are manipulated by our phones without our knowing.)
We can’t program or imprint ourselves to remember experiences as specific affective data on command; if we could, we would be no better than a machine, available for others more powerful than us to program. It also seems like the wrong way to think about who we will become, a person who can’t be trusted to have their own feelings and priorities. If we try to control what we’ll remember too much, all we’ll be left with is regrets.
“After all, an illusion, no matter how convincing, remained nothing more than an illusion. At least objectively. But subjectively – quite the opposite entirely.”
Philip K. Dick, “We Can Remember It For You Wholesale”
"Google expects you to want to put AI slop in your 'memories'; that’s like injecting mind with styrofoam, having Chat-GPT generate your diary."
Spot on.