Chatbots don't chat
Continuing the theme of my post from over the weekend, here is some more on generative models and the simulations of attentiveness, intimacy, or friendship they are purported to provide. This article by Robert Mahari and Pat Pataranutaporn, AI researchers at MIT, warns of something they call “addictive intelligence”: the idea that chatbots will be so powerfully gratifying that we will become addicted to talking to them. This certainly sounds like textbook “criti-hype” at first glance:
AI wields the collective charm of all human history and culture with infinite seductive mimicry. These systems are simultaneously superior and submissive, with a new form of allure that may make consent to these interactions illusory.
I’m not sure what “the collective charm of all human history” could even mean. I understand “charm” as something bound in time, something idiosyncratic and contextual that humans feel in interacting with each other, not as something that can be accumulated or even made commensurable. And what is “infinite seductive mimicry”? Is mimicry generally experienced as a form of flattery? (It’s hard for me to imagine something more repulsive than having to interact with an externalized version of myself.) Is the infinitude referred to here a matter of the chatbot’s programmed compliance, its inability not to respond? They are “submissive” in that sense, and presumably they are “superior” because they will generate sentences in response to any query?