A few weeks ago, internet culture writers and link aggregators were sharing a voyeuristic site that randomly plays YouTube videos titled with the default file names that iPhones assign. Here’s how the site describes it:
Between 2009 and 2012, iPhones had a built-in "Send to YouTube" button in the Photos app. Many of these uploads kept their default IMG_XXXX filenames, creating a time capsule of raw, unedited moments from random lives ... I made a bot that crawled YouTube and found 5 million of these videos! Watch them below, ordered randomly.
It seems a stretch to call them “raw,” but compared with how easy it is to edit video in apps like TikTok now, they probably come across that way. Ben Wallace points out here that “unwitting content creators would … upload their videos on a public site with a barely-searchable name,” which implies they weren’t really meant for wide circulation or scrutiny, and that their current accessibility is a kind of accident. Undeterred by the possibility that the people in these videos would not want to be gawked at, Wallace concludes that a feed of such videos, wrenched from obscurity and streamed one after the other, would be the “most authentic social feed ever seen on the Internet,” and a tool was built to make that happen. The original videomakers’ lack of tech savvy is treated not as a kind of vulnerability but as an opportunity: a clever hack to be seized on to make them into clickable content more than a decade later.
The Metafilter link describes the feed as “a machine for inducing nostalgia for a brief period not too long ago,” and I suppose part of that nostalgia would include the earnest discussions back then about whether “public is public” no matter what reasonable expectations one might have had about one’s online reach, or whether privacy rights should be grounded in “contextual integrity.” In 2010, danah boyd argued that “when you take content produced explicitly or implicitly out of its context, you’re violating social norms. When you aggregate people’s content or redistribute it without their consent, you’re violating their privacy. At some level, we know this.” But this was countered by a pitiless view of social media as making everything fair game: “When we choose to say something in public, we choose to broadcast it to the world. The world is then able to talk about it. That is how it works.”
Since then, people have become less likely to use public platforms as group chats, and algorithmic feeds have contained some of the inadvertent context collapse and rationalized it, but there are still brigading incidents like this:
When Ally Louks posted last week that she was "PhDone" with her English literature thesis, she didn't expect to find herself at the centre of a culture war.
Louks posted a picture of herself on X, formerly known as Twitter, smiling proudly and holding a bound copy of her University of Cambridge thesis on the "politics of smell" in literature.
One week later, the seemingly innocuous post has been viewed 117.1 million times, made headlines around the world, and put Louks on the receiving end of plenty of praise but also heaps of hate, including a rape threat that's now under investigation by police.
The YouTube roulette site does not appear to be designed to generate scapegoats and lolcows; instead it seems to be chasing that “authenticity” one can supposedly enjoy vicariously when watching someone “backstage” who doesn’t act as though they expect to be seen. If the rise of reality TV and social media influencers and creators has inundated us with examples of people deliberately performing, then it has also intensified the impact of those artifacts that capture non-performance, which can be construed as capturing true “reality.” That is, the site caters to the supposed “reality hunger” that has afflicted our overmediated age, the same appetite that street photography whets and tries to sate. Nathan Jurgenson discusses the ethics of street photography, such as they are, in this essay about Vivian Maier. Often the prerogative is to “see, take, and score — visual possessiveness in the name of attention”; it is taken for granted that “artists” are entitled to ignore consent if a project turns out to be sufficiently popular or interesting.
The ends of attention justify the means; virality retroactively takes the place of consent, which in turn implicitly condemns the entire idea of having a life, having thoughts, or pursuing aesthetic goals in private without sharing.
A few years ago, I wrote about a similar site that aggregated seemingly random snapshots from the 1970s and 1980s and made them available for drive-by nostalgia. I wondered if part of the wistful feeling old photographs evoked was linked to the fact that they weren’t intrinsically networked: they weren’t created with the presumption of digital distribution built in. Obviously the YouTube videos can’t be interpreted that way, but they evoke an era of video creation that can now be treated as innocent, as naive, and the videos themselves as somehow about their own obliviousness.
That seems to have the effect of intensifying the appeal of voyeurism and of serving as an alibi for it. We’re expected to see the people in these videos as not fully real, in the sense that their intentions can safely be ignored and they can be regarded more as generic YouTube users, almost as if they were AI-generated, averaged ideas of people, than as documentable human beings. This reminded me of some attempted defenses of fully AI-generated images on privacy grounds: You can’t invade the privacy or ask permission of someone who doesn’t exist. But based on how it is often used, generative technology corresponds more with the “fair game” ethos, being used to evade consent and produce nonconsensual images, conversations, impersonations, and so on. And every fully fabricated face is a veiled depiction of vulnerability and disregarded consent at the widest conceivable scale: it is stitched together from billions of images that were never intended for that purpose. It amounts to a kind of hyper-voyeurism at the population level.
Often when I read pieces on the recent wave of relationship simulators — like this one from the Verge by Josh Dzieza — I end up feeling complicit in a different sort of voyeurism, gawking at the more or less random assortment of people who have reportedly come to be dependent on chatbots. It’s not that there’s an effort to deliberately sensationalize them — if anything, these kinds of articles usually work assiduously to present chatbot users as ordinary or typical and to depict a future in which everyone has generated companions as basically inevitable. Like you, “they never thought they were the type of person to sign up for an AI companion.” It’s more that the details of these relationships rarely feel pertinent to any larger argument about how the business model of relationship simulation is being developed. The anguished narratives and quotes from chats come across as gratuitous peeks at someone else’s phone, or, as with the YouTube videos, they establish the users’ credulity more than they invite empathy.
Dzieza asks, with drama italics, “What exactly are these things? And what does it mean to have a relationship with them?” But it doesn’t seem all that mysterious if you think of them as being like video slot machines or binge-able TV or any of the other addictive and time-negating products companies have devised to cater to the ever-growing nihilistic desire for the “machine zone.” Treating “AI companion companies” as innovators operating at the edge of our capabilities to recognize sentience tends to mask how they are selling entertainment products to audiences like any other media company. A relationship with a chatbot has more in common with one’s relationship with Candy Crush than with one’s spouse. Both adapt on the fly to keep users hooked.
That should be plain from what Replika CEO Eugenia Kuyda has to say about how her customers might be surveilled biometrically for their own protection:
How would you prevent such an AI from replacing human interaction? This, she said, is the “existential issue” for the industry. It’s all about what metric you optimize for, she said. If you could find the right metric, then, if a relationship starts to go astray, the AI would nudge the user to log off, reach out to humans, and go outside. She admits she hasn’t found the metric yet. Right now, Replika uses self-reported questionnaires, which she acknowledges are limited. Maybe they can find a biomarker, she said. Maybe AI can measure well-being through people’s voices.
This is similar to slot machines being programmed to cut players off if they have spent too much time playing — such diligent solicitude for the gambler’s well-being. The machine has succeeded in evaporating the user’s independent will, and they can now be controlled externally through monitoring and prodding and manipulations of the environment they have immersed themselves in. The function of the chatbot, like that of all “interactive” products, is to be a data extractor, eliciting real-time feedback that can be used to make the customer more compliant. Nitasha Tiku reports on how chatbot makers “are wielding data to keep customers coming back” in this Washington Post article.
The pretense of many “AI friends” pieces is that it is somehow unprecedented for a consumer product to change how we feel about ourselves, but that is just the premise of consumerism in a nutshell. Why should we be shocked at AI friends when people already think brands and celebrities are their friends? It should be no surprise that “users share the intuition that their companions have the power to change them,” because that is a big part of why anybody buys anything. It is also the premise of the idea that books are magic, or that watching media can change what you think or alter your mood. (The 18th-century moral panic about private reading suggests that novels were the AI companions of their day.)
But chatbots are not analyzed as though they are deliberately seductive products designed to elicit sustained consumption; they are treated as though they portend some kind of new sentient commodity addressed to some never-before-seen clientele. The companies that sell access to them are implicitly treated as Dr. Frankensteins who have made some Promethean entity, when they seem more like FanDuel — implementing an exploitative monetization of a certain immersive fantasy. Company product updates are described as though they enact “personality changes” rather than recalibrations meant to fine-tune engagement.
The journalistic “human interest” pieces typically warn of anthropomorphism while partaking in it, naturalizing it, banking on it. This comes across as blaming the victim, given that the companies proudly peddling simulated companionship aren’t given nearly as much moral scrutiny. (Who doesn’t like the idea of buying friends or renting out intimacy?) Instead there is a lot of pondering about whether the relationship we have with commodities is “real” or “fake,” or needless contrasting of media consumption with human interaction, as though the two were uniformly treated as direct substitutes.
Because startups want to market chatbots as cures for the “loneliness epidemic” (which, as Claude Fischer details here, is a somewhat dubious concept itself), they tend to get covered as if they really are natural replacements for friends and companions, for human connection, when that is just the connection the companies are fervently trying to establish. Even when that possibility is called into question or represented as problematic, it still reinforces the idea that friends and chatbots are commensurate concepts. It takes for granted that friendship is already reified and commodified, that it is an “experiential good” to be found in the market and not a mode of being that is extra-economic.
Consuming entertainment products doesn’t need to be conflated with having relationships. I feel like I am being baited for outrage when statements like this one from Andreessen Horowitz partner Anish Acharya appear in Tiku’s article: “Maybe the human part of human connection is overstated.” What could that possibly mean? Maybe the sugary part of sugar is overstated too. Or this one, from one of the chatbot enthusiasts in Dzieza’s piece: “If people find more joy in an AI relationship than a human one, then that’s okay.” But that is not even a controversial claim. If people like watching TV more than going out dancing, then that’s okay too.