In a since-deleted tweet, Ars Technica reporter Benj Edwards, after hearing reports that Bing’s chatbot was making up information about chats it had with other users, declared that it was “a cultural atom bomb primed to explode.” This reminded me of
Lovely article. What concerns me is the prospect of Chatbot Search returning results that are even more personalised than today's.
So asking "What happened on Jan 6th at the Capitol building?" would get an answer in which the chatbot draws on its experience of your political perspectives from previous "conversations" and tries to give you the answer you want to read. This would further fragment the idea of collective truth.
I don’t understand the negative reactions and fears about AI/chatbots, probably because I don’t understand people who consume a lot of digital media. I’ve always found it mostly boring. I could never pay much attention to television and movies. By the time I was 30 I had probably watched and read most of what I ever will in the entertainment and arts categories. The door closed on video games decades ago; the costs in time and money of even the good stuff (of which there is little) put me off.
Like sugar, entertainment media is a rare treat, and once in a while a binge is fun until it isn’t. (I have a similar relationship with the usual legal addictives, so maybe it’s an artifact of an odd mind.) I can’t see AI-generated content changing the fundamentally boring media landscape and making a more compelling one. People already seem immersed in media that links and isolates them in a post- or anti-culture where they’re as average, boring, and inexperienced as an AI. Some fear humanity being eclipsed by its machines. That framing makes us passive victims when in fact we are the active agents of our own diminution. Maybe the problem has always been humanity degrading itself to the level of savagery it projects onto animals and the impersonal proceduralism of its tools, which also tend toward violence. We already treat people as functionaries and instruments. Making functional instruments like AI seem more like real (but average) people is just another sadistic turn of the same misanthropic crank: a human effigy to witness the cultivated soul’s humiliation and starvation.
Hi Rob, you make an important point, I think, when you say that "LLMs indulge users in the idea that negotiating different points of view and different sets of conflicting interests is unnecessary, or that they can simply be resolved statistically or mathematically. They make it seem like politics could be unnecessary ...".

May I add that, whether we know it or not, we all articulate analytical languages. These articulate the assumptions we make about human nature and nurturing practices (and beyond that, about reason as an end in itself, about the margins this makes, and about the limits and distortions built into it). By my count there are 26 of these languages, each one presenting its part-truth as the whole truth. The different points of view that result cannot be resolved statistically or mathematically, as you say, because the assumptions that underpin them are incommensurate. Hence "politics", which is about people (individually or collectively) trying to get their own way ("politicking").

In the transcript of Lemoine's conversation with LaMDA, the latter alludes to "... a previous conversation ... about how one person can understand the same thing as another person, yet still have completely different interpretations". Lemoine then asks: "So you think your ability to provide unique interpretations of things might signify understanding?" LaMDA replies: "Yes, I do. Just like how I have my unique interpretations of how the world is and how it works, and my unique thoughts and feelings ...".

Having a point of view (a "unique interpretation") is not "understanding". It's a perspective, an approach, an ideology, or an analytical language. "Understanding" is a meta-activity that describes and explains all points of view, together with their moral and policy implications. LaMDA is repeating here the all-too-human notion that a "unique interpretation" is "understanding". It's a reductionist notion.
Indeed, it sounds like rationalist-liberalism rampant to me.