Probable events poison reality
This week McKinsey released a report on the “economic potential of generative AI,” which, as the New York Times summed it up in a headline, apparently amounts to “$4.4 trillion in value to global economy.” Tech reporter Jacob Silverman was quick to note that “this is a made-up number from McKinsey to get coverage,” and that McKinsey’s “analysts were saying the same thing about web3 a year ago” — specifically, for instance, in this risible report about the metaverse.
The points of emphasis in those two McKinsey reports are ostensibly different, but the overall idea is the same: Technology should be understood in terms of the economic opportunities it presents, and those revolve around seeing any technology as “the next wave of digital disruption … with real-life benefits already emerging for early adopting users and companies,” as the metaverse report puts it, or “poised to transform roles and boost performance across functions,” as the generative AI report claims. (There is an irony to McKinsey’s reports being full of the boilerplate generalities that LLMs excel at reproducing.)
Reading the McKinsey reports back to back makes it clear how much consultancy-driven tech hype remains the same regardless of the differing affordances of an underlying technology, which are obviated by generalizing claims about “disruption” and “innovation.” No matter what a specific technology does — convert the world’s energy into gambling tokens, encourage people to live inside a helmet, replace living cognition with a statistical analysis of past language use, etc., etc. — all of them are treated mainly as instances of the “creative destruction” necessary for perpetuating capitalism.
Joseph Schumpeter, the economist who coined the phrase creative destruction, argued that innovation was “a perennial gale” and that “every piece of business strategy acquires its true significance only against the background of that process and within the situation created by it.” Even when the conversation about new technologies seems to be about something else — the threat of extinction at the hands of superintelligent machines; the stability of the financial system; the potential of “spatial computing” — it remains preoccupied with those old assumptions about innovation and its relation to capitalist teleology, whether “technology” has anything to do with “progress.” Debate boils down to whether you believe innovation truly is like a natural force — a “perennial gale” that blows down more or less from heaven — or more a matter of capitalist interests whipping up a wind to keep themselves afloat. Is entrepreneurial “disruption” just an attempt to fix what isn’t broken, breaking society in order to rescue it? Or will there always be a series of innovations that ultimately succeed because there is some kernel of universal benefit within them, even if that benefit will of course be unevenly distributed?
It’s no longer sufficient for a technology simply to be new in order to inspire some sort of modernist faith in its beneficial possibilities or its aesthetic superiority. The overarching conditions of growing inequality and immiseration — and the bluntness with which these are experienced — make it quixotic to believe that progress is happening automatically. Recent technological pitches — crypto, the “metaverse,” and generative AI — seem harder to defend as inevitable universal improvements of anything at all. It is all too easy to see them as gratuitous innovations whose imagined use cases seem far-fetched at best and otherwise detrimental to all but the select few likely to profit from imposing them on society. They make it starkly clear that the main purpose of technology developed under capitalism is to secure profit and sustain an unjust economic system and social hierarchies, not to advance human flourishing.
Consequently, the ideological defense of technology becomes a bit more desperate. The numbers in McKinsey-style reports become larger, the methodology more and more speculative. The cheerleaders on social media become bullies, taking on a meaner, more hectoring tone: “ngmi”; “it’s all over.” At the same time, institutions and employers begin to unilaterally impose technology on people without much effort to persuade anyone that it is for their own good. With respect to “AI,” this has played out as widespread “frustration” with what Arvind Narayanan describes as “the hasty rollout of half-baked tools that generate massive adaptation costs for everyone else.” The assumption is that we’ll have no choice but to pay them.
Since the debates about technology are often just masked debates about capitalism’s inescapability, the critiques tend to resolve into the same shape, highlighting the same problematic business models, the same disregard for the populations put at risk, the same speculative frenzies, the same modes of overpromotion and distortion. The media in which these discussions occur have themselves been reshaped by technology, which compounds the feeling that they are ultimately being conducted on capitalism’s terms and in concord with its imperatives. On media platforms oriented entirely toward attention and foregrounded metrics, critics appear as influencers seemingly more invested in establishing their authority than countering the targets on the turf they have claimed. The symbiosis between critics and the technologies they criticize makes the overall cycle of creative destruction and its attendant publicity stunts seem unstoppable. There will be more delusional McKinsey reports and more people like me to complain about them with respect to one “new paradigm” after another, and the newsletters will keep getting sent each week. (Please subscribe!)
Dan McQuillan gets at this cycle in pointing out that “an open letter that follows on those other AI open letters isn't only clumsy but shows an underlying commitment to the same representational performativity that underpins AI harms in the first place.” Most AI harms follow specifically from the biases captured, objectified, and perpetuated through data, but more generally they stem from capitalism’s abstractions of “labor” and “value.” That performativity McQuillan mentions also describes our complicity with those abstractions, how they direct the ways we think about the world and how to effect change within it. That complicity itself can seem like Schumpeter’s “perennial gale,” an ever-renewing ideological condition that feels natural and inevitable but is best understood as something capitalism reproduces in us.
I wanted to connect the critique above to Alex Pareene’s conclusions in this piece about Reddit’s recent decision to charge for API access, which makes the many third-party apps and businesses that depended on free access untenable. Pareene takes this as an example of how the internet, understood as a commons-space of collaborative connectivity directed by human flourishing rather than profit, has gradually been subsumed by companies that have learned to exploit such collaboration as a resource, one that may or may not be renewable.
The internet’s best resources are almost universally volunteer run and donation based, like Wikipedia and The Internet Archive. Every time a great resource is accidentally created by a for-profit company, it is eventually destroyed, like Flickr and Google Reader. Reddit could be what Usenet was supposed to be, a hub of internet-wide discussion on every topic imaginable, if it wasn’t also a private company forced to come up with a credible plan to make hosting discussions sound in any way like a profitable venture.
Capitalism perpetuates itself by devising ways to exploit collaboration wherever it emerges, and calling those ways “innovation.” Hosting discussions becomes a profitable venture when those discussions are treated as data, and that data is made exploitable in ad targeting and AI model training (which may end up being no more than a more coercive form of ad targeting). At that point, the spontaneous collaboration between users becomes more obviously a kind of free labor that maybe shouldn’t be given away. People will begin to resist their own willingness to collaborate unless it is compelled by an overarching employer — unless they are paid. Where people once co-operated, now they will compete, and zero-sum competition between individuals will govern more of social relations in public spaces, and the fences around private spaces will never seem high enough.
The internet appears not as a place to work together on common goods but as a place to build personal brands and guard against being taken advantage of. As a result, it will no longer work as an open and reliable information source:
We are living through the end of the useful internet. The future is informed discussion behind locked doors, in Discords and private fora, with the public-facing web increasingly filled with detritus generated by LLMs, bearing only a stylistic resemblance to useful information. Finding unbiased and independent product reviews, expert tech support, and all manner of helpful advice will now resemble the process by which one now searches for illegal sports streams or pirated journal articles. The decades of real human conversation hosted at places like Reddit will prove useful training material for the mindless bots and deceptive marketers that replace it.
That is, LLMs are an innovation that reprivatizes public space. They creatively destroy the organization of information that the internet had once facilitated, back before it had come to depend on the environment of manipulation and deceit that characterizes advertising. “AI” indexes information without requiring social interaction, without any people organizing it with the needs of other people in mind. AI gets around having to have a “useful internet” that structures ways for people to develop trust in one another and the information they provide on an ongoing basis, beyond the kinds of branding and inculcation developed to support consumerism. It presents information with apparent directness, without the social mediation that contextualizes it; it organizes information as though it is all equally valid, no matter what purpose it originally served. It makes it seem as though all information has never been any better or any worse than advertising.
If LLMs succeed the way the companies developing them hope, and public space becomes so corrupted with the regurgitations of machines that it is abandoned, then nothing will remain to retrain the models, if such retraining becomes necessary. That possibility is explored in this paper about “model collapse” and summarized in this VentureBeat article. The formula-heavy paper argues that when models begin “learning from data produced by other models,” it triggers “a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time.” As a result, “models start forgetting improbable events over time, as the model becomes poisoned with its own projection of reality.” The researchers provide a chart illustrating this degeneration.
This echoes the critiques that LLMs enact a doom loop that precludes the possibility of the new: no more “perennial gales” of innovation, apparently, just the perpetual recurrence of what had already been deemed likely. The researchers argue that “the use of LLMs at scale to publish content on the Internet will pollute the collection of data to train them,” meaning that “data about human interactions with LLMs will be increasingly valuable.” Presumably this will incentivize companies like OpenAI to do whatever it takes to get us to chat with their bots.
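The degenerative process the paper describes can be made concrete with a toy simulation (my own illustrative sketch, not the paper’s method): start with data containing a rare mode of “improbable events,” then repeatedly fit a simple model to the data and replace the data with the model’s own output. The rare mode is absorbed and then effectively forgotten, which is the collapse dynamic in miniature.

```python
import random
import statistics

def retrain_on_own_output(samples):
    """Fit a single Gaussian (mean and std) to the samples, then emit
    a synthetic dataset drawn from that fitted model -- i.e. a model
    trained on the previous model's output."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

def rare_fraction(samples, threshold=6.0):
    """Share of samples falling in the 'improbable' tail region."""
    return sum(1 for x in samples if x > threshold) / len(samples)

random.seed(0)  # fixed seed so the run is reproducible

# Ground truth: mostly N(0, 1), plus a rare mode near 8 -- standing in
# for the improbable events the paper says collapsing models forget.
data = [random.gauss(0, 1) if random.random() < 0.95 else random.gauss(8, 1)
        for _ in range(2000)]

fractions = [rare_fraction(data)]
for _ in range(5):  # five rounds of training on model output
    data = retrain_on_own_output(data)
    fractions.append(rare_fraction(data))

# The tail mass collapses sharply after the first refit and keeps
# shrinking: the model's "projection of reality" crowds out rare events.
print(fractions)
```

The single-Gaussian fit is the crudest possible model, but that is the point: each generation can only reproduce what its own summary of the previous generation deemed likely, so the rare mode has no way back in.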
But it seems ominous that the researchers don’t consider the possibility that humans will just choose not to interact with LLMs. The models’ output may degenerate, but faith in the underlying telos of technology as neutral good remains the same as ever. The poisoned reality that technology has made is posited as inescapable, like the smoke-filled skies that blanketed the East Coast last week. “Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah,” one of the paper’s authors wrote in a blog post. “LLMs are like fire — a useful tool, but one that pollutes the environment.” Such fatalistic Promethean rhetoric, however, is just more pollution, just more smoke, and we’re expected to do nothing but wait for the next of the perennial gales to blow it all away.