In the latest New Left Review, Hito Steyerl writes:
Large language models enclose miscellaneous data and privatize them for further development, following the precedent set by Web2 and social media. In the process, they also enclose a sort of negative version of what Marx called the “general intellect” — that is, the general social knowledge in its form as immediate productive power. LLMs trained on expropriated data present a degraded version of this, based on the leftovers of the internet: heartbroken teenagers’ Stack Overflow recommendations, child pornography, meme debris. Operationalizing the fantasy of a total enclosure of knowledge, as well as the privatization of language, LLMs impersonate fake totalities, based on the averaged mass of trawled data.
As this passage suggests, the origins of AI as we currently know it lie in the large data sets that have been assembled since storage became cheaper and surveillance (and outright appropriation of information) became more prevalent. “AI” was implied in every terms-of-service contract for every online activity over the past 20 years or so. Companies claimed the rights to collect data and use it for basically whatever purposes they wanted, and they started collecting whatever they could without necessarily having an immediate use for it. The growing data hoards, as they approached the size and scope at which they could pass as “fake totalities,” incentivized the development of processes that could purport to make profitable use of them.
A real totality would presumably be a whole that can’t be reduced to the sum of its parts; AI is a “fake totality” because it posits a whole that consists only of its parts, with a statistical analysis applied to suggest a completeness. If productivity and value emerge from our collaborative social practices — the “general intellect” Steyerl mentions, a real totality — then shouldn’t a large enough simulation of society, a fake totality, be able to produce similar value without the trouble of managing living labor or the resulting confusions of who that value belongs to?
Steyerl attributes “recent backlashes against AI-generated art” to the fact that “statistical renderings are by definition co-productions between programmers, prompt writers and the producers and annotators of training data, and thus lend themselves structurally to cooperative forms of ownership … but rights to such productions are not in any commons.”
But the point is not that individuals’ property rights have been violated; the point is that “AI” is a process designed to do what capitalism has always done: reorganize forms of social cooperation into structures designed to allow capitalists to better exploit them. When social cooperation is unorganized and underexploited, its value is frittered away on local outbreaks of human thriving. But when it is turned into data and structured into calculable forms, it can be managed and made more efficient, and be put in service of private profits.
At first this structuration takes the form of simulation and capture of existing social relations (a.k.a. “formal subsumption”), but eventually it takes the form of reordering social relations into the forms that a more profitable capitalist mode of production dictates (a.k.a. “real subsumption”). So at first the data reflects existing sociality, but soon datafication begins to change social behavior to correspond with how it is being quantified. Becoming data comes to seem like the primary means of social participation, if not the point of it (as though the point of sociality were to “get likes,” etc.).
I used to blather about formal and real subsumption with respect to social media, which first captured a set of existing social relations (sharing with your friends and family on a privately owned platform, etc.) and then remade them (parasocial relations with influencers in a space saturated with personalized ads and predictive content meant to shape your level of pliability).
The shorthand for this passage from formal to real subsumption is “the algorithm,” which appears as a change agent that compels new ways to be social, new ways to understand oneself and one’s place in the world, and new ways to cope with it or try to change it. The algorithm (like “the market”) is supposed to objectively reveal to us what our contribution to society means or is worth; it is where we must submit our sociality to have it be made “real,” to have it authenticated. (Hence the obsession with authenticity in this milieu.) The degree to which “the algorithm” enters our thought and practice is the degree to which our social being has been reworked along the lines required by industrially organized social media.
All the ways, for example, that Instagram changed aesthetics (how people decorate rooms, how people smile, how people choose to eat photogenic food, etc.) are ramifications of this enclosure: Those aesthetic practices are part of the intrinsic value of social relations (the “general intellect” at work to make life collectively meaningful), and social media platforms worked to change those relations in order to codify them and make them more systematically calculable and exploitable (as earlier forms of media did before). Social meaning then appears to depend on the platforms (“TikTok shows me what to do and who I am”) rather than the people who are connecting and collaborating through them, albeit in alienable and often alienating ways.
The trajectory for AI is similar. It absorbs our behavior and social practices as data to be processed into their ultimate meaning after being combined with everyone else’s data, collected at the capitalists’ behest and aggregately transformed into the essential means of production. Put another way, AI is when “the algorithm” metastasizes to be capable of seemingly predicting and dictating everything, and the purveyors of AI have amassed enough power to compel us to accept it as such. Right now, AI models are replicating and mimicking the existing social relations to extract value from them, but as AI becomes more ubiquitous, it will reshape those relations in its own image. It will organize how we communicate with each other and work together, modulating those practices for maximum extraction. All the autocorrecting and autocompleting and integration of bots into human conversations will come to seem not like an aberration of sociality but the fundamental essence of it.
A good way to understand AI, as Matteo Pasquinelli’s recent book The Eye of the Master: A Social History of Artificial Intelligence argues well, is not as “the imitation of biological intelligence” but “the intelligence of labour and social relations.” It is not modeled after the capacity of any individual thinker or brain, but is instead a means for simplifying and capturing “the knowledge expressed through individual and collective behaviors” in order to “encode it into algorithmic models to automate the most diverse tasks.”
The division of labor, Pasquinelli argues, derives from the innovations of human workers developing collaborative practices; the “algorithm” and “AI” are generalized attempts to co-opt them — as he puts it, “social intelligence shapes the very design of AI algorithms from within.” He supports that thesis with a detailed assessment of various controversies in the histories of both management and machine learning. In particular, the oft-noted scientific dispute between “symbolic” and “connectionist” approaches to AI — between attempting to model deductive logical reasoning and attempting to simulate inductive reasoning through experience and the emergent organization of information — must be understood as a social question:
it comes as no surprise that the most successful AI technique, namely artificial neural networks, is the one that can best mirror, and therefore best capture, social cooperation. The paradigm of connectionist AI did not win out over symbolic AI because the former is ‘smarter’ or better able to mimic brain structures, but rather because inductive and statistical algorithms are more efficient at capturing the logic of social cooperation than deductive ones.
Neural networks “work” because they build on the data-driven management techniques already developed to expropriate and control workers — their output is sufficient for accomplishing that aim (and not because it produces “truth”).
Pasquinelli combines such historical excavation with a theoretical perspective derived from the Italian operaismo movement that emphasizes the agency of workers in their efforts to resist capitalist exploitation.
In the late 1960s, political philosopher Mario Tronti proposed to reverse a thesis which was then mainstream also in Marxism: capitalist development was always considered to shape workers’ organisation and their politics. To the contrary, Tronti claimed that capitalist development, including technological innovation, was always triggered by workers’ struggles. Interestingly, for a European intellectual such as Tronti, ‘the working-class struggle reached its highest level of development between 1933 and 1947, and specifically in the United States’, which are coincidentally the years that witness the rise of cybernetics and digital computation.
From that perspective, it is insufficient to view automation simply as a deskilling process imposed from above, in which capitalist technicians devise new means of subordinating and humiliating human workers. Rather, “automation” demarcates a more complex terrain of struggle, and technological developments follow from labor’s innovations: “The epistemic imperialism of science institutions has obfuscated the role that labour, craftsmanship, experiments, and spontaneous forms of knowledge have played in technological change: it is still largely believed that only the application of science to industry can invent new technologies and prompt economic growth, while this is in fact rarely the case,” Pasquinelli writes. He refers to Marx’s account from Capital of the origin of the steam engine:
After challenging the belief that science, rather than labour, is the origin of the machine, Marx reverses the perception of the steam engine as the prime catalyst of the Industrial Revolution. Instead, he contends that it is the growth of the division of labour, its tools and ‘tooling machines’, that ‘requires a mightier moving power than that of man’, a source of energy that will be found in steam.
Machines serve to contain the knowledge generated by labor practices and route it toward capitalist ends. “Being itself an embodiment of the division of labour, the machine then becomes the apparatus to discipline labour and regulate the extraction of relative surplus value,” Pasquinelli writes.
The development of AI can be conceived similarly, as a response to emerging modes of collaboration, cooperation, and organization spurred by more elaborate means of communication. It is an aspect of what James Beniger describes as The Control Revolution.
Central to this argument is the idea that labor and knowledge are collective and social by definition: “Intelligence emerges from the abstract assemblage of workers’ simple gestures and microdecisions, even and especially those which are unconscious,” Pasquinelli argues, which means that machines are necessary to manifest this intelligence that can’t otherwise be located or fully harnessed. Intelligence and creativity are not part of individual subjectivity but of social assemblages, and the individual’s exact contribution is held to be nonconscious if not unconscious. Individuals are denied the ability to make sense of their own practice as intelligent or creative without that broader social orientation — the social world that makes practice meaningful.
The machine, in some respects, is presented as a substitute for that sphere of living social practice that allows individuals to grasp their contribution without having to be a sustaining participant in that practice, in that social world. (You can, for example, believe you are a “creative artist” without being part of an art world.) AI, algorithms, social media networks, etc., provide that orientation in a way that may seem to privilege the individual subject.
Collective creativity is revealed through statistical analysis of accumulated data, understood as the raw material of collective intelligence at work in production. Through the mediation of AI, an individual is part of a collective subjectivity but also independent of it, seeming to unilaterally leverage it for personal purposes. Any agency over the social orientation of their own practices that this mediation allows them to experience comes at the expense of being reduced to a passively receptive pose, a consumerist disposition in which one consumes the meaning of one’s own behavior rather than living and directing it.
Social creativity (relationality and mutuality in producing meaning) is presented back to the isolated individual as an alienated thing they can purchase rather than something they collaboratively make. Our contribution to the social appears as the kinds of consumerist decisions we make and the kind of data we generate, intentionally or not. We play with the fake totality and lose sight of our relation to the real one.
Great piece (and great meeting you last weekend), I'll return for a closer read later, but the thing that immediately strikes me is how quantifiable measures of "intelligence" such as "g factor" (https://en.wikipedia.org/wiki/G_factor_(psychometrics)) operate as almost the negative image of this false totality. If AI is the fake totality of a sort of social intelligence, then "g factor" is the fake totality of individual intelligence, similarly a product of statistical methodologies intended to produce an articulation of some "fake" whole, that can then be leveraged for the ends of capital (IQ testing for jobs, "meritocratic" endeavors, etc).
It strikes me that the entire "gifted kid" discourse ultimately comes down to the privileging of the individual through statistical measures of this sort, via the orientation you describe toward the individual subject, contrasted with the actuality of creative production as embedded in social relations. So the AI is, in a sense, a promise from the capitalists to the gifted kid cum adult, that "now you can finally reach your potential", but at the cost that you describe, of passivity. It's the sustaining of this fantasy (+ its confrontation with the negative fantasy of those who are already embedded within those pre-existing social relations in a more active way) that seems to create all of this emotion and discoursing.
this was really really interesting