contrast with sortition, civic assemblies, etc., where it's only real people deliberating with each other - no black box or autosuggest - is this a future fight?
and thanks for illuminating!
I'm struck by the parallel with the aggregation of individualized guesses when it comes to, for example, guessing the number of jelly beans in a jar, as described by James Surowiecki in his _The Wisdom of Crowds_, and by others as well. The reliability of such guesses appears greatest when the guessers are not influenced by others, and less reliable when the guessers are prompted by the guesses of others in their in-group. This suggests that discussion, at least when it comes to jelly beans, is more subject to confirmation, availability, and my-side biases, and that unmediated guesses are likely to aggregate to a more reliable median.
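The independent-vs-influenced contrast this comment describes can be sketched with a toy simulation. Everything here is illustrative and assumed (the true count, the lognormal noise model, and the 50/50 herding weight are not from Surowiecki): independent guessers err in both directions around the truth, while socially influenced guessers anchor on the running average of earlier guesses, so the group's systematic bias propagates instead of washing out.

```python
import random
import statistics

random.seed(42)

TRUE_COUNT = 1000  # actual number of jelly beans in the jar (assumed)

# Independent guessers: each guess is the true count distorted by
# unbiased multiplicative noise - some overshoot, some undershoot.
independent = [TRUE_COUNT * random.lognormvariate(0, 0.4) for _ in range(200)]

# Socially influenced guessers: each blends their own noisy guess with
# the running mean of earlier guesses, herding toward the group.
influenced = []
for _ in range(200):
    own = TRUE_COUNT * random.lognormvariate(0, 0.4)
    if influenced:
        anchor = statistics.fmean(influenced)
        own = 0.5 * own + 0.5 * anchor  # assumed herding weight
    influenced.append(own)

print("median of independent guesses:", round(statistics.median(independent)))
print("median of influenced guesses:", round(statistics.median(influenced)))
```

Because the lognormal noise is unbiased in the median, the median of the independent guesses tends to land near the true count, while the herding loop lets early deviations pull later guesses with them - a crude stand-in for the in-group prompting effect described above.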
It is obvious that humans are not very capable of resolving their conflicts - even conflicts of minor cognitive complexity. Using machines to develop non-obvious solutions to a given problem - solutions that would otherwise remain obscure to humans - is my favourite utopia.
But using machines to implement a "perfectly democratic society" is almost the opposite of that, and reminds me of the theories of Zhao Tingyang and the benevolent administration of the CCP, which focuses on stability. Habermas's theories of a deliberative society required, in their real-world implementation, a sizeable set of taboos, and ended as a sort of soft totalitarianism.
Habermas Machines fit neatly into Churchman’s hierarchy of inquiring systems (Leibnizian, Lockean, Hegelian, Singerian). I wrote about it recently in the context of LLMs: https://realizable.substack.com/p/c-west-churchmans-systems-epistemology.
This is just like that recent "AI changes minds of conspiracy believers" study, and I'm just like: yeah, but this only means that synthetic language simulators are now persuasive enough to, you know, change minds - and somehow I'm the only one who's creeped out by that.
The big Other in reverse? This is exemplary, no?
Agreeing more with others when their position is rearticulated by a machine is deference to the 'LLM supposed to know': a disembodied voice with whom I have no ego rivalry, and whom I entrust with the Authority to say, "What they _really_ meant was…".
I was just directed here because I created another Habermas machine on the cheap, inspired by the DeepMind machine, to discourage violence starting next week. I love your analysis, am likewise inspired by Arendt and Habermas, but disagree with some of your foundational assertions. FYI, I have found most of the current LLM foundation models to be natural Habermasian mediators, perhaps due to their guardrails. Here is more on what I built and what it said to my straw MAGA man: https://www.linkedin.com/pulse/truth-reconciliation-maga-test-driving-habermas-machine-jon-neiditz-7dcse/