Thought pollution
Lincoln Michel concludes this “year in AI” wrap-up with a critique that has been familiar since the advent of autocomplete and “autocompose”:
But I’m also haunted by something I saw in Google’s A.I. demo. The video featured A.I. briefly summarizing emails someone hadn’t read. Then it demonstrated generating new emails to reply with. It’s easy to extrapolate. The recipients will use A.I. to avoid reading that email and generate new A.I. replies for others to avoid reading. How soon until everyone’s inbox is overflowing with emails no human has read or written? Why stop at emails? A.I. can write book reviews no one reads of A.I. novels no one buys, generate playlists no one listens to of A.I. songs no one hears, and create A.I. images no one looks at for websites no one visits.
This seems to be the future A.I. promises. Endless content generated by robots, enjoyed by no one, clogging up everything, and wasting everyone’s time.
Part of me wants to cynically say, so what? Who cares if AI books are reviewed by AI critics? And if the emails people exchange and the culture they consume are already so devoid of presence and content that it is easy to imagine a flood of generated material supplanting them, then doesn’t that suggest that we are already underwater? Otherwise wouldn’t we just ignore all the nowhere plans for nobody?
But the assumption here is that our engagement with automated content is beside the point. Much like advertising, another unwanted kind of discourse, automated content — which is intrinsically intrusive, interruptive, the voice of someone who doesn’t really know what they are talking about but insists on being heard anyway — will be injected into all occasions for communication, polluting the discursive space between any subject and object and pre-empting the possibility of intersubjectivity with endless loops of noise that make it so that we can’t hear ourselves think. The skills necessary to communicate with other people or to even carry out an inner dialogue with oneself will presumably atrophy as we are cocooned in thickets of automatic language aimed at eliminating the need for any effort of attunement. AI books will read themselves and tell us what they were about, and we won’t be able to get them to shut up about it.
The idea that technology will let us remove our consciousness from the performance of the meaningless kinds of communication we are at times required to carry out is sometimes presented optimistically, as though this will free us up for the really meaningful conversations we haven’t been able to make time for. But I think Michel’s vision here is nearer to the mark. The spaces in which “meaningful conversation” (whatever that was) could have taken place will be overwhelmed with automated chatter that will waste even more of our time.
Automation doesn’t free people up for meaningful tasks; it deskills the tasks they are required to perform, making them more rote, depleting, and mind-numbing. There is no reason to suppose that generative AI will do something different to language-oriented tasks. It will make them less meaningful to us as we have to do more of them. (Think of the piece workers clicking yes or no on an endless series of decontextualized language fragments to train tomorrow’s AIs.) It will inculcate people with the idea that language use itself — the effort to communicate at all — is a hassle, something more and more difficult to initiate with any expectation of good faith, given that so much more of the language we encounter will have been generated to stupefy and deceive us. (The preponderance of advertising may already have accomplished this.) Of course if you don’t care about speaking in good faith, AI will be very helpful to you.
Generated content creates a retroactive illusion about the “meaningful interactions” we’ve lost, framing a fantasy of some form of purer communication that we must get back to, a reified thing that we can achieve unilaterally through some supremely earnest act of authentic being. But then nothing is ever real enough, not merely because it is immediately subject to simulation or contamination but because “realness” depends on reciprocal human investment, and generative content will make that harder to perceive. Generative content will make us spend more time engaging with agents that constitutionally can’t reciprocate: both the machines and the companies they speak for.
Inhuman interest story
The title of this 404 Media piece by Jason Koebler — “Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real” — conveys a sense of panic that strikes me as somewhat misplaced. Much of the article traces how when some treacly image gets a lot of likes and comments from the people still using Facebook — the main example is a post of a guy sitting beside a wood carving of a dog — enterprising traffic seekers then treat it as a template and use AI image generators to iterate on it.
As Koebler admits, “Of all of the awful things we’ve seen artificial intelligence used for, engagement baiting with stolen content on Facebook is relatively tame.” I think it’s completely absurd to believe that the AI image knockoffs are threatening the dog woodcarver’s business or undermining the perception of his work, as one of Koebler’s sources suggests. The main concern in the article seems to be that ostensibly clueless people are making fools of themselves by not recognizing what is really going on with a variety of mundane, low- or no-stakes memes, which incidentally reminds readers how savvy they are in their own internet use and allows them to believe that individuals could infallibly “learn to recognize AI-generated imagery” if they paid more attention. “The comment sections of these pages feature hundreds of people who have no idea that these are AI-generated and are truly inspired by the dog carving,” Koebler notes, as if “true inspiration” should be limited to authenticated events and facts. (That would mean that all forms of myth, legend, and fiction should be renounced and banished.)
But what difference does it make whether contrived images meant to tug heartstrings are real? You don’t have to be Baudrillard to make the case that all engagement bait is inherently fake, referring not to an “original” “real” situation but to an already established formula. Human interest stories are stories, defactualized by their being chosen and groomed for mass media circulation. And how are the people who are suspending disbelief for a nanosecond to click “like” being harmed by any of this? It is nice to feel inspired, right?
Koebler suggests that this susceptibility shows that “masses of Facebook users are completely unprepared for our AI-generated future,” but maybe they are already well-adapted to it: They know when the stakes are low enough to suspend skepticism and derive whatever enjoyment they can from it. They aren’t trying not to believe in otherwise unconvincing images of humanity’s decency and ingenuity. Oh no! They are being tricked into thinking people could be extraordinary! You might argue that people should not enjoy this sort of thing — that people should not enjoy kitsch and so on — but that is different from being worried that people’s kitsch is not authentic enough.
If the legions of Facebook likers can be placated with AI-generated viral content, that would protect actual people from the downsides that come with unwanted or unexpected publicity at uncontrollable scale. Maybe it would be better if only bots and fakes went viral and not actual people who suffer actual fallout from it. Any story where the specific identity of the person doesn’t matter to its appeal may as well be populated with an AI-generated character.
Human interest stories seem pretty dehumanizing for their subjects, who have one aspect of their life magnified, commodified, and circulated for the amusement of people who otherwise don’t care. If human interest stories weren’t made with humans, we could possibly enjoy them with less guilt. But that assumes that inflicting the trauma of attention on actual people isn’t part of what we enjoy in the first place.
Open-label placebos
It’s customary to assume that placebos only work in secret — if we know we are taking a “fake” pill, it won’t have the impact we would hope for from a real medication, much like the image of the dog carver can supposedly only inspire us if we mistakenly believe it is of a real person. But Tom Vanderbilt writes here about “open-label placebos”: placebos that announce themselves as placebos but still have positive health effects anyway.
Vanderbilt calls particular attention to the metaphoric resonance of open-label placebos:
In a deepfake world where AIs masquerade as people, where marketing calls itself wellness, where politicians tell lies so brazen as to be self-debunking, and where you can be red-pilled, blue-pilled, black-pilled, and clear-pilled without ever being sure you’re seeing reality, there’s perhaps nothing so refreshing as a tiny step in the opposite direction: prescribing a pill of nothing and calling it out as such.
That sounds a bit like what Peter Sloterdijk defined as “cynical reason,” a kind of “enlightened false consciousness.” By taking this known placebo, I get to tell myself I have moved beyond the space of being tricked, which then becomes the trick. By recognizing ideology as such, I then get to pretend that maybe it is not merely a false relation to the real — an idea that Žižek has gotten a lot of mileage out of.
Vanderbilt catalogs some of the hypotheses for why open-label placebos work:
Maybe it’s because doing something rather than nothing can make us feel better. (Psychologists call this “action bias.”) Maybe it’s because people living in well-off countries with huge industrial-pharmaceutical complexes have been conditioned to expect the pills their doctors give them to work. Maybe the act of taking an OLP — twisting off the bottle cap, swallowing the pill — triggers some biomedically useful pathways, just as bloodcurdling movies can curdle (or coagulate) the blood even though the viewer knows everything in the film is fake. Or maybe the OLP begins to take effect before it’s even ingested, during the set of rituals, the enveloping theater, of the “therapeutic encounter.” … Maybe we start to feel better when someone listens to us, shows respect for our views, and makes common cause with us against our ailments … Perhaps OLPs are a sort of meta-placebo, a testament to how much we believe in our power of belief.
It seems as likely a testament to how we believe that belief is irrelevant — that maybe no one really knows anything and causation itself is a myth and there is nothing to our lives but going through the motions.
But I wonder if it has to do with the salutary effects of getting to choose the occasions for suspending disbelief. I have always been fascinated by the 19th-century origins of advertising in marketing patent medicines — which in some respects could be understood as open-label placebos. The ads were deceptions, but they were also occasions for the willing suspension of disbelief, much like the forms of commercial entertainment that were then consolidating. One can consent to be tricked and still experience the trickery as real; it “really” works on you. You have to learn how to consume ads, or consume fictions, to open up that level of experience or access that kind of pleasure, the pleasure of falling for something.
Taking a known placebo is like reading a novel or going on Facebook or turning on the local news: You will see some stuff that at some level you will not want to not believe, and you can choose to inhere in that double negative.