AI is the conspiracy
Science published a behavioral study today called “Durably reducing conspiracy beliefs through dialogues with AI,” which set up a contrived and deeply preposterous experimental scenario to try to demonstrate that chatbots can forcibly change the minds of people who have been incentivized to persist in engaging with them. This, in the authors’ opinion, shows that “deploying AI may mitigate conflicts and serve society.”
Of course, if chatbots can convince people of the “right” things, they can equally convince them of the “wrong” things; they can not only generate language that seems to mirror and support whatever ideas are thrown at them, but they also simulate a listener who seems to care what you think, which is likely more significant to users than the content of whatever words they string together. But believing that a bot cares about you is always the wrong thing to be thinking; it’s like thinking a math problem could love your dog.
The researchers note that conspiracy theories fulfill “important psychological needs” to which reasoning and evidence are irrelevant — they ground one’s sense of identity and agency and can foster a sense of belonging when one feels marginalized from more mainstream sets of beliefs or values. But the opportunity presented by chatbots prompts the researchers to discard all that in favor of a model that assumes that if conspiracy believers are repeatedly injected with certain ideas, they will come to accept them. (Behaviorists love generative models and their implication that “we are all stochastic parrots.” AI’s cultural prominence allows them to push their ideas of human programming back into the limelight.)
Accordingly, the paper’s authors assume that conspiracy believers don’t have psychological needs that they are trying to meet through subjectivist epistemologies and shared alternative belief structures and communities; instead they are broken machines with bad data in their memory cells, which need only be overwritten with whatever counterprogramming a chatbot proprietor happens to prefer. (One can imagine the ceaseless cacophony of advertising chatbots this study would then authorize, as well as the unsilenceable chatbots compelling allegiance to the Great Leader.)
The authors’ rationale for dismissing all the previous psychological research is that in earlier studies, “fact-based interventions may appear to fall short because of a lack of depth and personalization of the corrective information.” After reporting their findings, they conclude by suggesting that chatbots somehow provide a “genuinely strong argument” beyond the capacity of ordinary human conversation that can really reach the flat-earthers and change their minds.
That seems absurd on its face (generative models don’t make facts), so one must assume the claim needs to be put into a real-world context. What chatbots can do that regular humans can’t is continue to interact with a person with inane, annoying, or disturbing beliefs without giving up or expressing any investment in ideas of their own. Human beings ordinarily don’t care enough, or can’t be paid enough, to show the kind of sustained, focused concern that might bring the conspiracy-addled person back into the fold of a more broadly shared reality. But chatbots can be deployed as an automated stand-in for that reality, anchored in whatever probabilistic programming procedure is meant to keep them within the comfortable cliches of widely received and frequently articulated opinion. At the same time, they present this generated, unsubstantiatable simulation of reality as though it were designed for the individual alone, a simulation that in fact wouldn’t exist without the individual’s persistent interaction with the bot. The anti-conspiracy chatbot thus replaces the apparently arbitrary details of any given conspiracy theory with a more powerful kind of false belief: that the world really can be made to revolve around you personally.
The purpose of AI summaries
A study conducted at the behest of the Australian government found, according to this report, that “artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people.” Those are very broad claims: Claiming that something is worse “in every way” than something else would require infinite knowledge and a capacity to inhabit every possible subject position from which “better” or “worse” could be calibrated to any given purpose. It seems sufficient to say that any off-the-shelf automatic summarizing tool will be inherently inadequate, because different situations call for different kinds of abridgment. That is, summaries by definition must be related to a human intention, and so whatever generative models are doing to texts should simply be called something else. Maybe “AI summary” is enough differentiation.
I would never willingly seek out an AI summary, but lately I have been encountering them on review sites or floating above comment sections. It seems they are deployed to save us the trouble of having to hear people’s individual voices and to reduce their various modes of personal expression to generalities and abstractions that function as verbose star ratings. They subtract the subjectivity and leave husks of decontextualized data, as if that were all people were good for. (The last thing you want is some sense that other people exist and have their own viewpoints, their own idiosyncratic conditions.) An ersatz consensus is imposed by averaging everyone together, as if positions and opinions were just quantities. This makes for a depiction of collective subjectivity as something hollow and fundamentally generic. A union of voices can then be seen not as something achieved through disputation, negotiation, and coalition building, a process that concentrates the people’s voices, but as something rote and diminished — a computational amalgamation of individual identities that can’t be combined to become more than themselves. Instead, the combination of people becomes something even less effective than a mob. The “collective subject” becomes an uninhabitable subject position, akin to trying to speak with the chatbot’s empty patter.
It’s not surprising that the Australian government would be looking into using AI to process inquiries from the populace; it would allow them to treat “the people” as all one generic voice, providing a rationale (and a convenient excuse) for catering only to the average and not the margins. The summaries could work like spam filters for “abnormal” or “improbable” messages and messengers, for anyone making things inconvenient. Like most other AI use cases, AI summaries are another way to have machines make decisions so that you don’t have to take responsibility for them. AI summaries cater to a fantasy that there can be information without accountability — that it can somehow come from nowhere with no point of view and get to the truth of things. But in effect, they make it far harder to know what rationale has been used to include and exclude.
AI for depersonalization
In more tech news from Australia, another government inquiry led to a representative from Meta confirming Australian senator David Shoebridge’s statement that “Meta has just decided that it will scrape all of the photos and all of the texts from every public post on Instagram or Facebook since 2007, unless there was a conscious decision to set them on private.” That in itself is no surprise, and it is also no surprise that without a regulatory framework and legal penalties in place, the company will feel compelled to do this for competitive reasons — the more training data you can access, the more powerful and lucrative your models will seem to investors. Would anyone be shocked if “private” data were also being used? Check the company’s track record.
Privacy abuses aside, it’s striking how tech companies are accustomed to operating at a scale that renders their ostensible customers into manipulable items. In their quest to build an AI summary of a human being, they treat individual humans not with respect or dignity but indiscriminately, as if they were all already commensurate and fungible. The companies seem to believe that users only achieve “personalization” through being processed by their systems.
The platforms’ indifference to dignity seems to have transformative effects on users, who come to emulate the platforms’ priorities, addressing themselves to algorithms and consuming metrics rather than engaging in communication. They are trained in formatting data for a system that rewards them with a score, which becomes the primary way they can be differentiated from other users — who has higher numbers?
The advent of AI makes it even clearer how indifferent platforms are to users’ intentions: Those aims and goals and hopes and dreams are of merely local and transient interest; they are precisely what the companies find worthless and strip out of the information they extract. The effective reason anyone posts material to platforms is to train AI models; everything else is noise.
“We’re not ready”
It can be difficult to think of photography as a collective practice. It seems to glorify the subjective vision of the individual and give it a means to seize reality for itself while retaining an air of objectivity that makes reality worth seizing. Photography seemed like a way to make others see reality as you see it without their being able to tell you that it’s just your point of view.
But the power of that illusion is fading, and photographs no longer seem objective in and of themselves — they need to be seen as coming from a context to be accepted as relevant, let alone interpreted. Photographs can no longer be passed off as passive, spontaneous recordings of the actual — as “declarations of the seamless integrity of the real,” as Rosalind Krauss once put it — yet they still apparently have the power to invoke the audience that supposedly takes them that way and that we must be worried for.
The Verge has produced some content lately on the theme of AI image editing, and how “we’re not ready” for what it portends, because the capabilities purportedly far exceed our capacity to imagine them or recognize when they have been used. AI editing is held to be more dangerous than Photoshop because it deskills that sort of manipulation and “removes the time and skill barrier” to producing falsified images.
But AI editing doesn’t merely change things on the production side; it also changes the consumer’s understanding of images, such that individual images that are ambiguously or uncertainly sourced aren’t taken for granted as “true” depictions of anything. The ubiquity and ease of photo-editing demand that “we” come together in a more explicit way to negotiate what is more than subjective, what isn’t just an individual’s dream projection.
Much as Covid demands a collaborative public health response to keep as many people protected as possible, so does the imposition on our society of more powerful image-editing tools: There is no way to deal with them that doesn’t involve the need for committed cooperation, for people working together to create a shared environment.
The Verge articles take it for granted that this “we” won’t come together and will never be ready — they treat tech companies as having already achieved their basic goal of atomizing and isolating everyone into discrete but fungible units that can’t form collectivities on their own terms, that are only ever “populations” bracketed and processed into different sets by some higher controlling force. The articles assume, perhaps correctly, that the possibility of conceiving collective responsibility, of having any faith in other people to do anything other than what strikes them as most convenient in any given moment (as tech companies have instructed them, promising endless amounts of pleasure if they do), has been extinguished.
But perhaps if we are fortunate we will be able to afford access to a chatbot that can convince us that whatever we want to believe is real and that everyone else’s confirmation has already been neatly summarized in its generated responses.