Achieving consensus can be hard, so wouldn’t it be nice if a machine could produce it as a kind of consumption good? Instead of having to work through differences and emotions collectively and collaboratively, each individual could articulate their position and their critiques alone, and a model could then sort out what the underlying average position of all the participants in a “discussion” should be understood to be. Everyone could treat the machine as an unbiased witness and agree with it without seeming to agree with each other. No one’s ego would be bruised by any other ego; no wills would clash. Everyone could automate their civic duty without having to undergo the hardship of confronting other citizens or empathizing with them; instead their beliefs would become disembodied abstractions, statistical constructs rather than lived experiences. Political deliberation could become a matter of everyone preaching their views into a phone and waiting for an opinion calculator to decide the state of the public sphere. And the phone, in turn, could barrage users with statements optimized to moderate and normalize their views so that dissensus is smoothed away at scale.
Researchers from Google recently issued a paper describing what they call a “Habermas Machine,” an LLM meant to help “small groups find common ground while discussing divisive political issues” by iterating “group statements that were based on the personal opinions and critiques from individual users, with the goal of maximizing group approval ratings.” Participants in their study “consistently preferred” the machine-generated statements to those produced by humans in the group, and those statements helped reduce the diversity of opinion within the group, which the researchers interpret as “AI … finding common ground among discussants with diverse views.” So much for the “lifeworld” and “intersubjective recognition.” People, it seems, are more likely to agree with a position that no one really holds than with a position articulated by another person. It’s sort of like Žižek’s idea of the Big Other in reverse.
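For concreteness, here is roughly what such a machine reduces deliberation to, sketched as a loop. This is a speculative reconstruction based only on the paper’s own description of iterating group statements to maximize approval; the function names, the stub implementations, and the two-round structure are all my own stand-ins, not the researchers’ code or API.

```python
import random

# A hypothetical sketch of the mediation loop the paper describes: draft
# candidate group statements from individual opinions, keep the draft a
# reward model predicts the group will approve of most, then revise against
# participants' critiques. All three helpers are invented stubs standing in
# for the generative and reward models; none of this is Google's actual code.

def draft_statement(opinions: list[str], critiques: list[str]) -> str:
    """Stub for the generative LLM that writes a candidate group statement."""
    return f"Synthesis #{random.randint(0, 999)} of {len(opinions)} opinions ({len(critiques)} critiques addressed)"

def predicted_approval(statement: str, opinions: list[str]) -> float:
    """Stub for the reward model that scores predicted group approval."""
    return random.random()

def write_critique(opinion: str, statement: str) -> str:
    """Stub for an individual participant's critique of the current draft."""
    return f"Objection to {statement!r} from the holder of {opinion!r}"

def mediate(opinions: list[str], num_candidates: int = 4, rounds: int = 2) -> str:
    """Produce a 'group statement' without anyone talking to anyone else."""
    critiques: list[str] = []
    statement = ""
    for _ in range(rounds):
        # Draft several candidates, then keep the one with the highest
        # predicted approval -- "maximizing group approval ratings."
        candidates = [draft_statement(opinions, critiques) for _ in range(num_candidates)]
        statement = max(candidates, key=lambda c: predicted_approval(c, opinions))
        # Each participant critiques the winner in isolation; the critiques,
        # not any discussion among participants, feed the next round.
        critiques = [write_critique(op, statement) for op in opinions]
    return statement

print(mediate(["raise taxes", "cut taxes", "abolish taxes"]))
```

Note what never happens anywhere in the loop: no participant ever addresses another participant.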
The researchers suggest that their experiment reveals that LLMs can “facilitate collective deliberation that is time-efficient, fair, and scalable,” even as they concede that their machine lacks “the mediation-relevant capacities of fact-checking, staying on topic, or moderating the discourse.” If the deliberation happens quickly and at scale, does it really matter whether it is factual or pertinent? Speed and scale are their own justification, and what counts as factual and pertinent can be freely adapted to those requirements, so that groups end up deliberating on whatever questions are most expedient for an LLM to handle.
Surprisingly enough, a recent 404 Media report entitled “AI-Powered Social Media Manipulation App Promises to ‘Shape Reality’” is not about the Habermas Machine but about an app called Impact, which tries to “organize and activate supporters on social media in order to promote certain political messages … it can send push notifications to groups of supporters directing them at a specific social media post and provide them with AI-generated text they can copy and paste in order to flood the replies with counter arguments.” Undoubtedly this sort of semi-automated brigading would lead to so much common ground being found and so many group approval ratings being fully maximized.
Efforts to automate the public sphere remind me of Hiroki Azuma’s 2011 book General Will 2.0, which argues for using broad surveillance to calculate the general will of a populace mathematically and dispense with the need for deliberative politics. “These are times when everyone is constantly bothered by ‘others with whom a point of compromise can’t be sounded out,’” he argues, so we should abandon Habermas’s and Arendt’s presumption that politics requires consensus building through discussion. Instead, politics should be automated by aggregating data on everyone’s behavior and transforming it into political positions and decisions. As Azuma puts it:
Since the ideals of an Arendtian, Habermasian public sphere are impossible to establish, doesn’t it make more sense to take our current social situation and technological conditions as our premise and discuss designs — the architecture — for bringing about a space that is “something like a public sphere”?
That sounds like a description of the Habermas Machine, something that resembles a public sphere but is actually software architecture, where interpersonal communication is replaced with information processing disguised as natural language.
Azuma draws on a reading of Rousseau’s The Social Contract to conclude that Rousseau “denies the necessity of citizens discussing or coordinating their views” and that the “general will” emerges more precisely when they don’t, since “long debates, dissensions, disturbances, signal the ascendancy of particular interests and the decline of the State.” If we follow Rousseau, according to Azuma, “communication should be banished from politics to allow the general will to be formed.” Maybe the Habermas Machine should have been called the Rousseau Machine.
The ideal that Azuma attributes to Rousseau — “freeing man from the order of men (communication) in order to allow them to live on the basis of the order of things (general will) alone” — fits well with the ideology lurking behind many AI projects, which sees communication as an inconvenience and interpersonal encounters as so much unpleasantness to be avoided. The fantasy is to eliminate the order of humans and replace it with an order of things, governed by a pseudo-physics that can completely explain everything that happens within that order.
Azuma describes this in terms of total surveillance and a hypothetical process that can convert all the resulting data into governance:
the coming society of ubiquitous documentation is one where the records of the desires of its constituent members are accumulated and converted into a utilizable format in an institutional manner regardless of a person’s conscious and actively expressed intentions. There, people’s wills are converted into a thing (data).
Since deliberation and communication are irrelevant, ultimately consciousness is irrelevant as well — intentions are not to be “expressed” but extracted from empirical measurements. And LLMs, which have no intentionality, can serve as the institutional language or format that operates without falling into intersubjectivity or projections of consciousness.
The Google researchers assert that “finding common ground is a precursor to collective action, in which people work together for the common good” and believe their LLM can help expedite the production of that common ground. But it may be that the process of producing the common ground (and not simply the agreed-upon tenets) is what makes the subsequent working together possible. The deliberative process, protracted as it may be, models what “working together” consists of, and the kinds of trust, compromise, and conflict resolution that will continue to be required to make progress. Or, as Azuma characterizes this position (en route to rejecting it), “what is important for democracy is not that everyone’s will is gathered but that each will is transformed through the process of consolidating those wills.”
But what is important for tech companies is simply that everyone’s will is gathered — that a kind of unceasing surveillance can be rationalized and operationalized. (“The general will is the database,” Azuma argues. “The environment retains a record.”) The process of consolidating those wills can be alienated from the people whose wills are being consolidated, imposed mechanically from outside, not as a matter of conscious thought and changing people’s minds but as a matter of regarding consciousness as epiphenomenal, downstream from whatever has been imposed on it, of using “AI” to “shape reality” and produce the consciousness of those subject to it. In other words, tech companies can posit a world where all political discourse occurs between isolated individuals and LLMs, and the resulting data can be used to facilitate social control while everyone gets to feel heard. The automated production and summarization of political opinion doesn’t help people engage in collective action; it produces an illusion of collective action for people increasingly isolated by media technology.
Contrast all this with sortition, citizens’ assemblies, and similar institutions, where it is only real people deliberating with each other, with no black box and no autosuggested text in between. Perhaps that is where a future fight lies.