It always seems strange to me when “chat” is proposed as a mode of information retrieval. It seems like a matter that conversation would only complicate. I don’t expect books to talk back to me and would probably feel thwarted and frustrated if they did — much of what feels to me like thinking is the effort to extract ideas from text, especially ones I wasn’t necessarily looking for in advance. It rarely occurs to me to frame texts as teachers, or reading as a matter of questioning a text or its author for answers, as if books were meant to be helpers rather than adversaries. If a book were able to explain itself to me, I would probably stop listening, or at least start considering the ways in which it was lying.
Mine is probably a minority view. Clearly tech companies assume that chatting with objects and compelling them to explain themselves is something everyone has been longing for, hoping to at last reduce their dependency on social contact. Many anticipated AI applications seem predicated on the idea that our experience of the world should require less thought and have better interfaces, that we want to consume the shape and form of conversation, consume simulations of speaking and listening without having to risk direct engagement with other people.
Consumerism is loneliness; it figures other people as a form of inconvenience and individualized consumption as the height of self-realization. But tech companies promise to solve loneliness with a more responsive kind of product and a more perfect form of solipsism. Chatbots are often marketed as though other people represent the main impediment to solving loneliness: remove the threat of judgment and exclusion and rejection that other people represent, and no one will ever feel lonely again. We can get used to talking to things to gather information, and every conversation can be about data transfer rather than some sort of sustained social connection. Whatever value “being in touch with a person” has can be supplanted by the value derived from useful or efficient interactions with entities that may or may not have the capability of knowing you. What difference does that really make?
Meta recently announced it would be rolling out a service called AI Studio, which is meant to allow users to customize generative models and tailor them to their interests. “Anyone can create their own AI designed to make you laugh, generate memes, give travel advice and so much more,” Meta’s ad copy proclaims. Because there is no difference, really, between going somewhere on a friend’s recommendation and going somewhere because a machine associated certain word patterns with your proposed itinerary. The machine is better because it doesn’t care if you take its advice and never wavers in its obsequiousness.
The company claims that “You can use a wide variety of prompt templates or start from scratch to make an AI that teaches you how to cook, helps you with your Instagram captions, generates memes to make your friends laugh — the possibilities are endless.” You can really sense the infinitude from these examples, two of which are ways to automate one’s presence on Meta’s social platforms, while the third seems entirely redundant, given the already existing surfeit of online recipes and instructional videos. And what more could we ask of a friend than that they send us AI-generated memes to make us laugh? Does anyone remember laughter?
Meta also proposes that “creators” can “make an AI as an extension of themselves” that can interact with their fans for them, going after the market for impersonators detailed in this Reuters report about chat on OnlyFans:
OnlyFans is a porn-driven, subscription-based website where content creators and their followers can develop what it calls “authentic relationships” by messaging each other. But many popular OnlyFans creators, including porn stars earning millions of dollars through the website, outsource the task of messaging their subscribers to paid impersonators known as “chatters.” It is their job to coax subscribers into tipping the creators and buying more porn.
Rather than hire humans as chatters, creators could use specifically trained generative models and share less of the profit produced through building authentic relationships. If you train the model well enough, you can produce more authenticity at scale.
This kind of outsourced or automated fan service seems like it would place a pretty serious strain on parasociality. As I understand it, parasocial relations depend mainly on the license for fantasizing that certain illusions of access provide. A suspension of disbelief occurs by which it suddenly seems possible that the parasocial relation is not an asymmetric one that borders or crosses over into exploitation, but is a reciprocal connection in which the fan means something to the celebrity as an individual; the fan has become someone specific in the celebrity’s eyes because of some unilateral effort of devotion the fan has made.
Knowing that the celebrity very likely uses proxies to interact with fans as individuals would seem to constitute a major hurdle, no matter how effectively those proxies perform on the celebrity’s behalf. Undoubtedly the hired chatters and even the generative models do a better job than the celebrity would of connecting with a particular fan and simulating an interest in them. But even if the chatbot said exactly what the fan would want the celebrity to say to them, it seems like it should be utterly meaningless when the celebrity is in fact completely absent.
Meta claims its custom AIs “can help creators reach more people and fans get responses faster.” (Does it count as “reaching” someone when you are not in any way participating in the action? What is reaching what?) The Reuters article characterizes undisclosed parasocial proxies as fraud, providing anecdotes of men who felt cheated when they turned out to be having chat-sex not with the female models whose images they were consuming but with random people frantically copying and pasting from batches of prescripted lines.
“We’re creating a fantasy world for these men,” said Maica Versoza, 30, who runs a chatting operation with her husband in the Philippines city of Bacolod. She said the key is to build a relationship with subscribers and “make them feel wanted.”
From that perspective, it doesn’t matter who or what is doing the chatting as long as the consumer “feels wanted” — a feeling that can be effectively detached from actually being wanted. The “intimate experience” stack can be disaggregated: one entity or software agent can supply the visuals, another the timely responsiveness, another the textual content, another can process the clients’ actions, and so on. These are all conceivable as independent services that don’t necessarily need to be provided by one person, or by a person at all.
Chatbots could be optimized to produce a “feeling of being wanted” without anyone there doing the wanting, and customers can cultivate a capacity to be contented with that abstracted feeling. That fits with Meta’s offer to help creators serve fans “faster,” as though immediacy and not connection with the celebrity is what the fan really wants. The bots could systematically train us to respond to responsiveness, to be so engrossed with the simple fact of it that it wouldn’t require any reciprocal content. One could gradually become acclimated to separating “being attended to” from the requirement that there be another consciousness doing the attending — companionship without companions. To demand that someone literally be with you for you not to feel alone would then seem like a failure of imagination, a limiting tendency to be over-literal about it all.
Chatbots demand we suspend disbelief even harder than the forms of vicarious media we are accustomed to; by this logic, the extra effort of imagination required to posit a person in place of a machine should be seen as making the experience more rewarding, in the same way I find books more rewarding when they make me work harder to understand them. The point is that friendship doesn’t require friends; it only requires our imagination and some material that it can productively go to work on.
That any kind of responsiveness, in and of itself, should be enough to make one feel seen is a theory that “AI companion” apps like Replika and the wearable device called Friend, discussed in this Guardian piece, seem prepared to test. Friend — not “the Friend” but “Friend” — is, according to its 21-year-old creator, an “emotional toy” that could pave the way for humans to have deeper relationships with computer programs. “AI companionship will be the most culturally impactful thing AI will do in the world,” he declares. Though he is quick to point out to the reporter that he is “a very social person,” he also claims that “my AI friend has, in a sense, become the most consistent relationship in my life.” Consistency, it seems, is another aspect of friendship that can be disaggregated and experienced unilaterally through an app or device rather than through a reciprocal relationship. This arrangement, he suggests, best suits someone whose “work and schedule can be unpredictable,” which is increasingly all of us.
From that perspective, the “loneliness epidemic” is not caused mainly by consumerism (my customary assumption) or by the erosion of “bridging social capital” that sociologist Robert Putnam discusses in this interview with the New York Times but by work, which has become more time-consuming and isolating and less collaborative. And it can be addressed not by easing the demands of work but by applying to our personal lives the same principles of atomization, deskilling, and automation that have made work more unbearable, breaking down the intricate and embedded practices of trust and mutual care into disparate abstract tasks that can be pieced together to assemble an emotional life without any steady social relations. In showing us how we can experience the feelings of friendship without engaging in the reciprocal practices of it, AI companions become machines for destroying social capital and putting a more engrossing kind of consumerism in its place.