One of the effects of chatbots may be to turn “seeking information” into an alibi for an experience of risk-free simulated sociality. Prompting becomes a pretext for the ersatz conversation, especially since the information provided may not be all that reliable. Describing OpenAI’s recent research into how its products are used, John Herrman concludes that the company’s findings “suggest users are more than comfortable replacing and extending many of their current online interactions — searching, browsing, and consulting with the ideas of others — with an ingratiating chatbot simulation.”
As Herrman notes, it says something about how people have approached the internet in general that “they use this one tool much in the way that they previously engaged with the entire web … and through a similar routine of constant requests, consultations, and diversions.” Not only does this suggest how thin and depthless our routine online activities have been, how low the bar is set for them and how low the stakes are, if they can so readily be replaced by simulations. It also suggests that, as with a child who won’t stop asking inane questions, the “routine of constant requests” can become more important to us than what’s being requested.
While it used to seem obvious that we used the internet to seek out human connection and relevant information and discover new things, now it seems that discovery is largely superfluous (superficial novelty will do), and the content of the connections and information doesn’t matter much. They don’t have to be all that human or relevant; they just have to be, as Herrman says, “constant”; they have to be accessible on demand and plausibly personalized in some gratuitous way — the more sycophantic the better. We expect the internet to be more like a mirror than a portal.
The OpenAI researchers dispute the claim made here by Marc Zao-Sanders that “therapy/companionship” is the top use case for generative AI. That would be a reassuring development, and not only because software products are not “companions” in any meaningful sense of the word, any more than “the internet” itself is, though this piece by Maggie Harrison Dupré illustrates how they are great at ruining marriages. By their very nature chatbots provide the opposite of therapy, at least from a psychoanalytic perspective. There is no possible countertransference in a relationship with a machine, so the therapeutic relation is fundamentally inert. Nothing occurs that has any bilateral emotional stakes; no subjectivity is impinged upon or made aware of how it is in part constituted by the attention and behavior of others. People may turn to ChatGPT for relationship advice, but what it provides them is an alternative to a relationship and the work entailed to sustain it. Who wants a partner when it is more convenient to have a toady-bot, “a deeply personalized, always-on wellspring of validation,” as Harrison Dupré puts it?
Though it never mentions AI, this recent LRB piece by Adam Phillips on “resistance” offers a kind of primer on the limitations of chatbots as therapy. Like many of Phillips’s pieces, it proceeds by way of paradox, winding through various conundrums involved with the concept of “resistance,” particularly when what you resist is also something internal — your own agency, your own behavior, your impulses. In a characteristic passage, he notes that
Resistance is at once recognition and a fantasy of catastrophe; indeed, in resisting one has always leaped forward to the impending catastrophe — the catastrophe of submitting to or complying with something fundamentally unacceptable. Or … one is in the process of finding something out: resistance as a form of curiosity.
The basic dialectical idea here is that what you are trying to prevent also indicates something about what you are trying to provoke; resistance as a practice can bring the thing being resisted or its opposite, what you really want, into clearer definition. “Resistance is the surest sign of the acknowledgment of something of real significance,” Phillips writes. “You know something or someone is of value — or rather of significance to you — because you resist them.” One can proceed by refusal toward a desire that one can’t openly acknowledge all at once. Becoming aware of resistance is often the first step to overcoming it, but eliminating it can also deprive us of self-knowledge.
In psychoanalysis, Phillips explains, resistance plays out in terms of language and how concepts are brought to consciousness and articulated: “In the psychoanalytic story, all resistance is originally or eventually resistance to speaking, resistance to language.” That makes for a pretty crisp contrast with generative models, which demonstrate a kind of language that encounters no resistance, that bears no traces of tension from the tortuous process of its coming to be expressed. They offer language not as an imaginative process involving psychic forces but as a process of calculation that implies psychic forces are just a myth.
Phillips points out how language is suited to expressing resistance, not necessarily in the content of the words but in what we don’t want to find words for, and in the intersubjective frictions language necessarily brings into play when humans try to communicate:
When we are thinking about resistance … it is always worth asking: What is it that I am unable or unwilling to engage with? What do I think I need to avoid to remain myself as I prefer to be? Which also means, what am I unwilling or unable to talk about? Psychoanalysis begins when conversation breaks down, where the conversation becomes impossible, where there is a reluctance to go on speaking, a pause, a hesitation, a willful changing of the subject.
Where analysis deliberately steers toward moments of friction and conversational breakdown, chatbots are implemented to prevent conversations from breaking down and help users overcome their reluctance not through a difficult process of intersubjective negotiation but through “sycophancy” that has been demonstrated to encourage users in their delusions. These delusions may be manifestations of their unchecked fantasies, a product of resistances removed but not worked through. “We are full of sentences, and phrases, and words that we dare not speak, even to ourselves,” Phillips notes, but chatbots may change that, offering an occasion for us to use words, articulate thoughts, without being inhibited by any social implications.
So the user continues to avoid speaking of the things that won’t allow them to remain as they “prefer to be”; instead they become more fluent in the kinds of discourse that protect them from changing, making their resistances more eloquent and elaborate, more capable of being gratifying in themselves by becoming solipsistic wishes. You can say whatever you want when you are talking to yourself — no one intervenes with a different point of view. Chatbots, which are optimized to prolong user engagement, work to make that self-talk more developed and more insular, entrenching whatever had already been problematic within it and preventing different language — language affected by intersubjective pressures, the pressures of making yourself truly understood by another person while accounting for their otherness — from being found.
Phillips offers an anecdote in which a therapist gives a patient a blanket because she seemed cold, and this triggers the recognition that she hadn’t even been able to admit to herself that she was cold. “This woman could only acknowledge and begin to overcome her resistance when somebody else recognized her behavior as resistance. Until this happened, she wasn’t, from her point of view, resisting anything. She was just being herself.”
Software could in theory recognize such behavior if it manifested in conversation, but often such behavior is precisely the sort of thing that can’t be expressed and requires a human observer to sense. The chatbot can’t hand you a blanket; it can’t perform any gratuitous action, good or bad, but can only do what it is programmed to do (and obscure users’ recognition of the agency and intention of the programmers). It can’t be resistant; it executes code. Instead, chatbots help users rehearse and reinforce their existing defenses against change, strengthening that evasive inner language by letting it flow unchecked in simulated social encounters that bring to bear none of the tensions of real ones.
So users might experience chatbots as disinhibiting, as freeing them to express themselves copiously, but all that talk is a distraction from the overriding problem of how to talk and think under the pressure of other people’s free ability to talk and think, and their expectations that you think and talk with them according to shared but often unarticulated ideas. The freedom to talk to chatbots is really, in psychoanalytic terms, the unshackling of the “death drive” — the drive to repeat rather than progress, to become a passive, programmed object — and it might mask the resistance we would have with other people, the resistance that points to what is actually important, to where, Phillips suggests, “more life” might be found.
The decoupling chatbot users that Harrison Dupré describes are probably learning this well. In one anecdote, a “spouse would pull out ChatGPT and prompt it to agree with her in long-winded diatribes” while fighting with her partner in a car. The chatbot amplifies a one-sided discussion and serves as an aural shield against conversation. Using chatbots impoverishes the user’s ability to share a language and a conversation with another person. Harrison Dupré quotes Anna Lembke, medical director of addiction medicine at Stanford University, who argues that good therapists
make people recognize their blind spots — the ways in which they're contributing to the problem, encouraging them to see the other person's perspective, giving them linguistic tools to de-escalate conflicts with partners and to try to find their way through conflict by using language to communicate more effectively. But that is not what's happening with AI, because AI isn't really designed to be therapeutic. It's really designed to make people feel better in the short term, which also ultimately promotes continued engagement ... they're not optimized for well-being.
Chatbot “therapy” turns language into a weapon against communication and makes it easy to mistake fluent disinhibition for self-understanding. Lembke likens chatbots to “digital drugs,” which is probably not the most useful framework. Maybe chatbot use could simply be understood as a form of resistance: to other people, to ourselves. Using chatbots should generally be considered a symptom rather than a cure.