It took a while, but I finally finished rereading Georg Lukács’s “Reification and the Consciousness of the Proletariat.” The last time I read it must have been in the “Web 2.0” era, because I had made some marginal notes about the “general intellect” and the commodification of the self in social media as a kind of proletarianization. This time through, as I noted in the previous post, I was thinking about LLMs and their relation to the “concrete historical totality” that Lukács argued a properly oriented proletariat could articulate.
According to Lukács’s analysis, “in the reified thought of the bourgeoisie the ‘facts’ have to play the part of its highest fetish in both theory and practice. This petrified factuality — in which everything is frozen into a ‘fixed magnitude’ in which the reality that just happens to exist persists in a totally senseless, unchanging way — precludes any theory that could throw light on even this immediate reality.” By contrast, because of its alienating experience within capitalist processes, “for the proletariat the way is opened to a complete penetration of the forms of reification.”
Lukács argues that the worker's experience of being commodified within capitalism opens the way to a shared class consciousness of how capitalism works as a process, while capitalists see it merely as the "way things really are," a set of immediate facts, quantities (rather than quantifications), and laws (rather than applications of social force). But that privileged insight doesn't automatically achieve anything. One way to understand LLMs and "AI" is as a project to ensure that insight is maligned, stunted, eradicated, abandoned.
LLMs, by exemplifying “petrified factuality,” “the forms in which contemporary bourgeois society is objectified,” illustrate the urgent, critical need for the “penetration of the forms of reification,” and they offer a clear way to assess the acuity of one’s critical thinking: the extent to which it diverges from LLMs’ statistically average slop. Yet at the same time, LLMs serve to suppress that need, gratifying the proletariat (i.e. those compelled to use AI tools) with immediate “answers,” enticing them to forget their unique potential to see through reification.
Recent research (reported on by The Wall Street Journal here) even suggests that “lower artificial intelligence literacy predicts greater AI receptivity” — i.e. the more one understands about what AI is, the less likely one is to use it. But an array of forces is aligned against allowing it to be understood, much as capitalism has always been enshrouded in various mystifications and ideological misrecognitions. The Wall Street Journal article quotes a business professor who advocates “calibrated literacy” toward “AI”: people should be taught just enough about it to find it magical and “delightful” but not so much that they see what it actually is: algorithmic pattern matching. In other words, to get people to use “AI,” they must be taught to love their own ignorance as a kind of enabling magic. (Isn’t it better and more “delightful” to believe that the sun is carried across the sky by gods driving a celestial chariot than to develop the science of astronomy?)
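What “algorithmic pattern matching” means can be made concrete with a deliberately tiny sketch: a bigram model that predicts the next word purely from counted co-occurrences in past text. The corpus and names here are illustrative, and actual LLMs are incomparably larger, but the principle is the same statistical continuation of what has already been said, with no account of why the words go together:

```python
from collections import Counter, defaultdict

# Count which word has historically followed which: pure frequency, no meaning.
corpus = "the facts speak for themselves and the facts are fixed".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most common continuation seen so far, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "facts": the most frequent continuation in the data
```

The model can only reproduce the "fixed magnitudes" of its training data; ask it about a word it has never seen followed, and it has nothing to say.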
If you buy Lukács’s argument, there is a sense in which AI should make only the bourgeoisie stupid. It offers immediacy — a static map of facts — to those who can’t or won’t see the processes and social relations, as well as the opportunities for undetermined action, that constitute history. This passage, which Lukács quotes from Marx’s The Holy Family, helps illustrate the stakes and clarify what differentiates those who are at home using AI — the ultimate reifying machine — from those who regard it as a personal and historical threat:
The property-owning class and the class of the proletariat represent the same human self-alienation. But the former feels at home in this self-alienation and feels itself confirmed by it; it recognizes alienation as its own instrument and in it it possesses the semblance of a human existence. The latter feels itself destroyed by this alienation and sees in it its own impotence and the reality of an inhuman existence.
One can be tempted to want to identify with capital, to take its immediate understanding of the world as truth and treat its reification machine as a gateway to occupying the empowered subject position within capitalism, where you put alienation to use for your apparent benefit. But this requires surrendering the ability to see and shape reality from the only standpoint (according to Lukács) that can grasp it and act on it.
Lukács writes:
when confronted by the overwhelming resources of knowledge, culture and routine which the bourgeoisie undoubtedly possesses and will continue to possess as long as it remains the ruling class, the only effective superiority of the proletariat, its only decisive weapon is its ability to see the social totality as a concrete historical totality; to see the reified forms as processes between men; to see the immanent meaning of history that only appears negatively in the contradictions of abstract forms, to raise its positive side to consciousness and to put it into practice.
One can see LLMs as a mobilization of “the overwhelming resources of knowledge, culture, and routine” against the free, historically novel insights made possible by a standpoint that looks to understand processes and dynamic relations rather than predict them as though they were simply determined by natural laws. The “delight” they provide is part of those overwhelming resources, and it is secured by refusing to see the contradictions inherent in “AI.”
Though generative models can seem to offer an interactive interface to the “objective forms from which our environment and inner world are constructed,” they are nondialectical pseudo-totalities: they present what has already existed as a set of given truths, and they encourage everyone to abandon understanding why those truths are interrelated in the ways the models capture but don’t explain. But the whole point is to understand historical processes as they unfold, not to seemingly calculate their results in advance. Things change by being thought through, collectively; nothing happens when answers are simply given as a calculation.
Lukács argues that
As long as man adopts a stance of intuition and contemplation he can only relate to his own thought and to the objects of the empirical world in an immediate way. He accepts both as ready-made—produced by historical reality. As he wishes only to know the world and not to change it he is forced to accept both the empirical, material rigidity of existence and the logical rigidity of concepts as unchangeable. His mythological analyses are not concerned with the concrete origins of this rigidity nor with the real factors inherent in them that could lead to its elimination. They are concerned solely to discover how the unchanged nature of these data could be conjoined while leaving them unchanged and how to explain them as such.
LLMs emerge from this approach to the world: The models show how data are “conjoined” — how words and concepts have historically fit together — while leaving them unchanged and unexplained, attempting to convince users that they are all ultimately unchangeable. But as we bear the costs of AI and the damage it wreaks — as we resist rather than resign ourselves to the deskilling and desocializing it works to impose — our consciousness of what must be done, of what forms our resistance can practically and efficaciously take, comes into sharper focus. Thinking rather than prompting; collaborating with other people and socializing rather than withdrawing into nonreciprocal machine chat — these become clarified as sources of strength and means of de-reification. Of course, this means they will continue to be under constant ideological attack. Capitalism has to produce ignorance and apathy to perpetuate itself; “AI” is merely the latest means of production.
I'll mention another observation from personal experience: As other people in my circle withdraw from dialogue and two-way speech under the impervious aegis of busyness and self-care, it becomes more tempting to me to turn to the AI for things I really would rather have a human counterpart for.
"One can be tempted to want to identify with capital, to take its immediate understanding of the world as truth and treat its reification machine as a gateway to occupying the empowered subject position within capitalism, where you put alienation to use for your apparent benefit."
If I may indulge in a personal example, I have been tempted by this in my efforts to find a good job. It pays well to be a Data Analyst, and I found the technical skills easy to acquire. Yet a barrier was holding me back: I couldn't come up with the language and the way of thinking about data that seemed to be demanded of me. You needed to be smart enough to 'turn data into insights,' as they never tire of saying. Yet you needed to strictly limit your inquiry on both ends: never dig too deep into the data and question the categories and assumptions constituting them, and never think through what the consequences of applying some 'business insight' might be beyond profitability.
The language of the endless free and paid courses that promise to teach you to be a data analyst exhibits the inevitable quality of LLM text. I even noticed that on LinkedIn, many people have begun using generative AI to write their self-marketing posts. Presumably that is the type of language you need to demonstrate in a job interview as well.