The theft of alien labor time
When social media first emerged, there was a fair amount of theorizing about whether it was a manifestation of what autonomist Marxists called “the social factory,” whereby, according to Mario Tronti, “the social relation is transformed into a moment of the relation of production.” Similar theorizing addressed whether the effort that went into user-generated content was a form of “immaterial labor,” work that, in Maurizio Lazzarato’s definition, went toward “defining and fixing cultural and artistic standards, fashions, tastes, consumer norms, and, more strategically, public opinion.” Could the internet be understood as an expression of society’s “general intellect”? That concept stemmed from a section in Marx’s Grundrisse known as the “Fragment on Machines” and refers to collective knowledge and how machines expropriate it. As Matteo Pasquinelli explains in his overview of the concept, “Intelligence, here, resides in the ramifications of human cooperation rather than in individual mental labour. Machine intelligence mirrors, embodies and amplifies the analytical intelligence of collective labour.”
At the time, a lot of that thinking was aimed at explaining the persistence of neoliberalism and its dependence on the tech sector, which developed its tools and disseminated its requisite ideology: the network society supplanted the welfare state; precarity, indebtedness, “human capital,” and ceaseless hustling characterized the individual’s economic predicament; social media and the gig economy served as structures that helped rationalize those conditions and the expectation that workers make everything of themselves available for exploitation. (This 2013 essay by Rosalind Gill and Andy Pratt covers some of that ground.)
But all those operaismo concepts also seem ready for reapplication to generative AI (which may also turn out to be a support system for neoliberalism). As the novelty of what LLMs etc. can do fades, the source of their abilities — how the models are trained on data collectively produced and scraped from the internet — is getting more attention. This Washington Post report goes into some of the specifics, but as important as the details is the overall interpretive framework: If the internet is a cooperative project building up a knowledge base across different facets of society, tech companies haven’t so much facilitated that as stolen it. This Axios summary by Ina Fried captures the emerging commonsense view:
AI's hunger for training data casts the entire 30-year history of the popular internet in a new light. Today's AI breakthroughs couldn't happen without the availability of the digital stockpiles and landfills of info, ideas and feelings that the internet prompted people to produce. But we produced all that stuff for one another, not for AI. From this vantage, the existence of these vast "corpuses" of data was a profoundly important unintended consequence of the rise of the web itself.
Whether this was “an unintended consequence” can certainly be questioned: You only have to look at the enthusiasm for “Big Data” over the past decade or so, and the commercial development of surveillance technologies and archival capacities, to recognize how intentional this kind of data collection has long been for tech companies. But the recognition that LLMs are just an aggregation of the work we have performed and not in and of themselves some autonomous performance of intellectual work nonetheless clarifies what the companies making AI actually do: hoard data by any means necessary and throw lots of processing power at it. Their main innovation is scale. As this recent AI Now report on the tech landscape emphasizes, AI “is foundationally reliant on resources that are owned and controlled by only a handful of Big Tech firms.”
If AI is understood as broad-based intellectual-property theft, copyright enforcement appears as an obvious solution, an approach discussed in this interview at the Markup with former judge Katherine Forrest, and in this piece by Timothy Lee. Embracing IP laws, however, is not an especially enticing way forward for anyone who doesn’t want to further entrench capitalist property relations. An alternative can be glimpsed in the side of post-autonomism that emphasizes the opportunities in having the “general intellect” made explicit and material — the sorts of ideas that crop up in work on the “multitude.” From this perspective, AI tools may have been made by voracious capitalist entities, but the machinery can’t help but reflect worker power, living labor as the source of all value, the power of cooperation, the triumph of collectivity over individualism, and so on. “The power to act is constituted by labor, intelligence, passion, and affect in one common place,” Hardt and Negri write in Empire. What if the massive generative models were conceived as that one common place? “The multitude not only uses machines to produce, but also becomes increasingly machinic itself, as the means of production are increasingly integrated into the minds and bodies of the multitude,” they write. What if that manifests as the multitude hybridizing with AI models to collectivize in common the know-how of the general intellect, as a constitutive power for new kinds of community? “When human power appears immediately as an autonomous cooperating collective force, capitalist prehistory comes to an end.”
Much of this kind of theorizing overlaps with or bleeds into accelerationist thought. If we use AI for everything, we’ll heighten all the contradictions and dissolve all the structures and oppositions capitalism requires to function! I’m sure someone is out there writing essays that proclaim that ChatGPT gives a practical voice to the multitude and can organize its demands as a political subject. Actually, you could probably have ChatGPT write that essay.
Ads make consumers
Byrne Hobart offers an interesting theory for the predominance of the advertising business model, particularly with respect to U.S. media markets. It has several moving parts, but this is what interested me:
The world's demand for dollars means there's global demand for Americans to consume. And this environment happens to be very conducive to tech companies that monetize through ads: there's abundant consumer demand in the U.S. for products that the U.S. has a comparative advantage in consuming but not producing, i.e. anything that can create a dollar-denominated revenue source in a country that can't print dollars … persistent global demand for dollars means … that there's demand for the U.S. to consume more than it produces.
As I read that, the idea is that the demand for U.S. currency abroad means that there is an international interest in making Americans behave more like consumers. This orients American companies toward producing the ads, the ad space, and the attention to make that happen. Tech companies, from this point of view, are in the business of making people into consumers, a model subsidized by the manufacturing gains being realized in other economic regions.
Hobart then applies this to the push for generative AI, which can be reinterpreted as a new means for placing ads in front of Americans and reinforcing their propensity to consume.
When there's a new product that simultaneously captures lots of attention and ingests lots of data, it's solved the two key problems ad platforms need to solve: how to get attention, and how to direct it to the right commercial message. There are, of course, plenty of other monetizable features of large language models; they're great for searching and summarizing, two things businesses will actually pay for as complements to their existing work. But given how lucrative the ads business has been for the companies that go all-in on it, the gravitational pull is strong here, too.
This is not merely a matter of matching specific customers to specific ads, though that will undoubtedly happen: One would think that all the keyword auctioneering that goes on in search engines will happen in chat interfaces as well, with the added bonuses of extra obfuscation between what is “ad” and what is “content” and a fluent salesperson in the form of an LLM whose tone automatically syncs with whomever it interacts with. (People’s data profiles will likely be mined for information about what sorts of discourse appear to hold their attention and are more likely to manipulate them. Chatbots could respond in the tone of the sorts of websites they frequent or writers they like, and so on.)
Beyond that, chatbot deployment will serve the end of producing consumerism as a general disposition, as the way of life that makes sense of why you want information in the first place: all quests for knowledge appear as wanting to buy something at some level, as a mode of commercial exchange. You are offered packets of information without having to contribute any work of your own to produce the resulting “knowledge.” Rather than being regarded as authoritative or credible (the premise of this piece by Amanda Mull is that chatbots produce “authority” abstracted from verifiable information), chatbot discourse should be seen as spontaneous advertising, even and especially when it is not clear what it is advertising: That’s possibly when it serves most definitively as an advertisement for advertising itself.