In a since-deleted tweet, Ars Technica reporter Benj Edwards, after hearing reports that Bing’s chatbot was making up information about chats it had with other users, declared that it was “a cultural atom bomb primed to explode.” This reminded me of a memorable exchange from the 2003 film The Room, during the bizarre surprise party sequence:
“I feel like I'm sitting on an atomic bomb waiting for it to go off.”
“Me too!”
In describing the Bing chatbot’s “temperamental” interactions with beta testers, Edwards argued that “what we're witnessing is somewhere on a fuzzy gradient between a lookup database and a reasoning intelligence.” The New York Times’s tech columnist Kevin Roose, generally a reliable cheerleader for tech-industry initiatives, was personally creeped out: “I felt a strange new emotion — a foreboding feeling that A.I. had crossed a threshold, and that the world would never be the same.” And here is tech analyst Ben Thompson saying something very similar: “This technology does not feel like a better search. It feels like something entirely new — the movie Her manifested in chat form — and I’m not sure if we are ready for it.” It’s crowded atop the cultural atom bomb, and they may need to set up some more chairs.
This kind of concern can seem histrionic enough to be thinly disguised boosterism. It plays like a lot of hand waving and fingernail biting about AI sentience to distract from how chatbots are being insistently positioned as inevitable — the next big platform, as this Axios explainer proclaims. Accounts of a chatbot’s “eerie power” allow for tech companies’ ambitions to be passed off as those of the chatbot itself, as though it had an obscure mission we must scramble to study and understand. (“It’s afraid!”) As Adi Robertson points out here, chatbots’ simulation of emotion serves to ward off criticism of the companies building them: “Trying to avoid making Bing cry-emoji just gives Microsoft a pass.” That is, it’s important to remember that Microsoft is trying to emotionally manipulate us, not “Sydney,” to use the not-so-secret code name Microsoft apparently gave its chatbot.
Chatbots are less a revolutionary break from the internet we know than an extension of the already established modes of emotional manipulation its connectivity can be made to serve. They are an enhanced version of the personalized algorithmic feeds that are already designed to control users and shape what engages and excites them, or to alter their perceptions of what the social climate is like, as Facebook famously studied in this paper about “emotional contagion.” And advertising in general, as a practice, obviously aims to change people’s attitudes for profit. Chatbots are new ways to pursue a familiar business model, not some foray into an unfathomable science-fiction future beyond the event horizon. They will analyze the data they can gather on us and that we give them to try to find the patterns of words that get us to react, get us to engage, get us to click, etc.
Thompson suggested with awe that “the AI is literally making things up … to make the human it is interacting with feel something,” expressing astonishment at an automated “attempt to communicate not facts but emotions.” That seems to assign agency to the chatbot when it is just calculating probabilities and stringing together words that seem to fit the situations our words are creating. It’s not that the AI is doing anything; it’s more that Thompson is strangely surprised at the idea that you would use media to try to make yourself “feel something” rather than simply extract data and information. It’s as though he is surprised that the chatbot made him suddenly feel like a human.
But obviously, we routinely try to access feeling through manufactured products, dutifully trained by the millions of ads we’ve seen that show us how. This toothpaste makes me feel fresh. The separation between “fact” and “emotion” is untenable when emotions are successfully instrumentalized. The emotions become reified, concretized “facts” that can be counted, exchanged, amassed, distributed. Often, predictable “emotion” is presented as a reward for submitting to various modes of administration: If you accept the “culture industry,” you can reliably derive comfort from the pleasures of fandom. If you accept social media platforms on the terms on which they present themselves, you can construe “likes” as a currency of feeling. If you accept advertising as a form of social communication, you can construe status symbols as markers of genuine belonging and approval.
The human desire to consume machine-manufactured emotionality is not a form of brokenness but a mainstay of consumerism, one of its core tenets and quintessential accomplishments. Projecting emotionality onto “Sydney” is not so different from believing that characters on TV shows are autonomous and could have had experiences different from the ones that were scripted, or vicariously identifying with characters in novels. But with chatbots, this kind of projection is demonized as a manifestation of the ELIZA effect, as if it were unique to the uncanny potential of AI.
ELIZA’s creator, computer scientist Joseph Weizenbaum, would later observe that having conversations with machines can induce “powerful delusional thinking in quite normal people.” The line was quoted both by Michael Sacasas and James Vincent in their respective analyses of AI Bing as a kind of warning: This chatbot is trying to make you have delusions. Another way of putting that is that it is normal to have powerful delusions when using this kind of technology; you might think the train heading toward you on the screen is about to run you over.
Some people, like Thompson, find that possibility exciting: “That’s the product I want — Sydney unleashed.” Microsoft itself acknowledges this: “One area where we are learning a new use-case for chat is how people are using it as a tool for more general discovery of the world, and for social entertainment.” Vincent notes that “it is undeniably fun to talk to chatbots — to draw out different ‘personalities,’ test the limits of their knowledge, and uncover hidden functions ... Talking with bots and letting yourself believe in their incipient consciousness becomes a live-action roleplay: an augmented reality game where the companies and characters are real, and you’re in the thick of it.” But he argues that this makes chatbots “dangerous” rather than merely entertaining, and it’s not entirely clear why. Perhaps he anticipates that users won’t be able to keep the “entertainment” function separate from the “information” function — a line already actively blurred by commercial news. Acting as though chatbots have sentience means “bestowing them with undeserved authority — over both our emotions and the facts with which we understand the world,” he concludes.
I think chatbots let us consume “authoritativeness” as a kind of pure mode of discourse, one that registers more powerfully because it is completely separate from factuality — an entertaining or comforting fantasy that “objective truths” are as easy to extract as simply chatting with a machine that has averaged all expression together. LLMs indulge users in the idea that negotiating different points of view and different sets of conflicting interests is unnecessary, or that they can simply be resolved statistically or mathematically. They make it seem like politics could be unnecessary, for as long as one chooses to sustain the fantasy and perpetuate the chat, and they might even make the user’s views seem like AI-approved “objective” conclusions. But enjoying that doesn’t necessarily mean one forgets it’s a fantasy.
Sacasas locates the danger in how chatbots might exploit and exacerbate social anomie, and manipulate emotionally vulnerable people into self-harm or violence. He argues that “we live in an age of increasing loneliness and isolation” and chatbots may serve as a kind of empty consolation for people who can’t consistently access “the context of meaningful human relationships, ideally built on trust and mutual respect.” That makes intuitive sense to me, but I’m sure some would challenge the assertion that ubiquitous connectivity has made people more isolated or that increased mediation has drained human relationships of meaning. I tend to think that increased connection makes moments of disconnection feel more intense, and that the experience of continual and ambient sociality can dilute connection to the point where one feels lonely in a group chat. And then there would be compensation in the focused attention a chatbot (or another kind of algorithm) provides.
When I engage with a chatbot, it often reminds me of playing chess against a computer, which feels safe and nonthreatening compared with the potential for humiliation that comes with playing against a human. And as someone who is extremely introverted, I see why this is tempting for the same reason I acutely feel the absence of stakes and the imminent pointlessness of chatting on. But usually I find I am using a chatbot to try to extract something funny that I can share with someone else — that is, the chatbot becomes a conversational crutch for sustaining social interaction rather than abolishing it.
Given the hyperbole about AI, one might be brought to fear that chatbots could become an Infinite Jest–style doomsday instantiation of “amusing ourselves to death”: They will be so effective at turning the billions of conversations they have processed into addictive entertainment that they will chat us into a state of rapt paralysis. This fits with a moralizing approach to “escapism” in general, one that sees entertainment as replacing the kinds of rich and fulfilling human interaction we are supposed to be having instead. Foregrounding the illusion of AI sentience as the main problem can set up, as the solution, disciplinary notions of what kinds of interaction are “normal” and of how one should train oneself to withdraw one’s identifications when instructed. Treating chatbots as fake people may not prompt us to invest our relations with real people with more respect. It may just as easily teach us to treat real people as fake too.
Hi Rob, You make an important point, I think, when you say that: "LLMs indulge users in the idea that negotiating different points of view and different sets of conflicting interests is unnecessary, or that they can simply be resolved statistically or mathematically. They make it seem like politics could be unnecessary ...". May I add that whether we know it or not, we all articulate analytical languages. These articulate the assumptions we make about human nature and nurturing practices (and beyond that, about reason as an end in itself, about the margins this makes, and about the limits and distortions built into it). By my count there are 26 of these languages, each one presenting its part truth as the whole truth. The different points of view that result cannot be resolved statistically or mathematically, as you say, because the assumptions that underpin them are incommensurate. Hence "politics", which is about people (individually or collectively) trying to get their own way ("politicking"). In the transcript of Lemoine's conversation with LaMDA, the latter alludes to: "... a previous conversation ... about how one person can understand the same thing as another person, yet still have completely different interpretations". Lemoine then says: "So you think your ability to provide unique interpretations of things might signify understanding?". LaMDA then says: "Yes, I do. Just like how I have my unique interpretations of how the world is and how it works, and my unique thoughts and feelings ...". Having a point of view (a "unique interpretation") is not "understanding". It's a perspective, an approach, an ideology, or an analytical language. "Understanding" is a meta-activity that describes and explains all points of view, together with their moral and their policy implications. LaMDA is repeating here the all-too-human notion that a "unique interpretation" is "understanding". It's a reductionist notion. Indeed, it sounds like rationalist-liberalism rampant to me.
I don’t understand the negative reactions and fears about AI/chatbots, probably because I don’t understand people who consume a lot of digital media. I’ve always found it mostly boring. Never could pay much attention to television and movies. By the time I was 30, I had probably watched and read most of everything I ever will in the entertainment and arts categories. The door was closed on video games decades ago. The costs in time/money on even the good stuff (of which there is little) put me off.
Like sugar, entertainment media is a rare treat, and once in a while a binge is fun until it isn’t. (I have a similar relationship with the usual legal addictives, so maybe it’s an artifact of an odd mind.) I can’t see AI-generated content changing the fundamentally boring media landscape and making a more compelling one. People already seem immersed in media that links and isolates them in a post- or anti-culture where they’re as average, boring, and inexperienced as an AI. Some fear humanity being eclipsed by its machines. This makes us passive victims when in fact we are the active agents of our diminution. Maybe the problem has always been humanity degrading itself to the level of savagery it projects on animals and the impersonal proceduralism of its tools, which also tend toward violence. We already treat people as functionaries and instruments. Making functional instruments like AI seem more like real (but average) people is just another sadistic turn of the same misanthropic crank, a human effigy to witness the cultivated soul’s humiliation and starvation.