In a recent New Left Review article, Cédric Durand offers a bleak vision of one potential economic future:
Rentier and monopoly interests will continue to preside over an increasingly unequal, authoritarian and stagnant society, whose political structures will slowly mutate into some institutionalized oligarchic form. Over-accumulated fictitious capital [i.e. current financial claims to as-yet-unproduced wealth] will remain congealed and uninvested. Commodification will no longer be the vector that allows profits to grow out of abstract labour. Instead, a small stratum of super-rich individuals will harness new technologies to secure their rents and reproduce their lavish lifestyles in an ever more degraded and militarized world.
I have assumed that “generative AI” would be one of those “new technologies” of despotic oligarchy: It is a mass-scale appropriation scheme that attempts to turn the cultural commons of the past into the privatized and monopolized cultural production of the future, a truly “stagnant society,” while purportedly deskilling all forms of labor and thought across the board.
But perhaps Brian Merchant is right that we are at the “beginning of the end of the generative AI boom.” He cites the recent volatility in the stock market, some analyst reports (like this one from Goldman Sachs, which cites economist Daron Acemoglu, who wrote his own skeptical take here) that questioned AI’s short-term profitability, and the general public’s revulsion toward Google’s AI ads, among other things, to suggest that the “jig is up”: The conventional wisdom appears to be shifting away from the idea that “AI will transform everything” to the view that generative AI is mostly hype from tech companies and venture capitalists desperate for another “game-changing innovation,” and that, like the previous two tries (crypto and VR), it is not going to change everything or much of anything. It no longer seems that trillions of dollars of investment in machine learning will inevitably lead to some irresistible AI application that will allow companies to extract new rents or further deskill and exploit workers. Generative AI is not the investment vehicle that will make all that “over-accumulated fictitious capital” productive again.
As Merchant notes, “Enterprise AI was supposed to be the big moneymaker for generative AI firms, and now it’s increasingly clear that it isn’t adding much efficiency, at best, and is outright counterproductive at worst.” It appears to produce low-quality work faster, burdening other employees who must clean up after it and resist the temptation to throw more AI at the problems it has created, compounding the issue. Because generative AI seeks to remove thought from production, it is hard to conceive how generative tools can be brought into workflows without alienating the employees who care (those who try to think things through) and empowering the employees who don’t (those who want to provide the least effort and are utterly indifferent to particular production goals or the burden borne by their colleagues).
My exposure to the professional use of generative tools has been limited, but I’ve had some freelance editing work that has involved reading over generated PR material, which all has the same empty, formulaic quality to it — moronic five-paragraph essays assembled from the same dozen cliches, stuffed into the same sentence structures. It not only is this; it’s that, where “this” and “that” are either the same idea in different words, or two abstractions with no inferable meaning at all. It’s lorem ipsum filler disguised as real copy. I end up feeling relieved when I get to read something incoherent and incompetent, even if it takes a lot more effort to get it into readable shape, because at least it is alive, and I am engaging with another person’s thinking, their presence and investment, such as it was. I can try to think about what they were trying to say and help clarify it. With the generated material, there is no further clarity possible because nothing is intended. It feels like editing static. (Organizing language by probability rather than intention produces a kind of pure entropy, the heat death of consciousness.)
Encountering ChatGPT in the wild is a bit like encountering a sociopath, or having the shadow of soulless evil fall over you for a chilling moment. So it is no wonder that OpenAI was, according to this Wall Street Journal piece, reluctant to watermark ChatGPT’s output or release the detection tool it created that worked with 99.9% certainty. This suggests that OpenAI understands that the point of ChatGPT for many of its clients is deception. In a company survey of users, “nearly 30% said they would use ChatGPT less if it deployed watermarks and a rival didn’t.” OpenAI claims in this post that it wouldn’t want to “stigmatize use of AI as a useful writing tool for non-native English speakers,” while also giving some insider advice on how to thwart watermarking: try “using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character” and you too will find it “trivial” to defeat detection efforts.
Incidentally, the OpenAI post contains many sentences that sound like generative content. For example: “People around the world are embracing generative AI to create and edit images, videos, and audio in ways that turbocharge creativity, productivity, and learning.” This sentence has the telltale flabby genericness, the list of three empty abstractions, the indifference to defensible claims, and the weird predilection for the present progressive tense that seems to mark ChatGPT’s promotional “voice.” It exemplifies precisely why generative models do the opposite of “turbocharging creativity, productivity, and learning.” Generative models absorb the opportunities for creativity and eliminate them, turning them into displays of anti-creativity, a matter of solving routine math problems. They produce material that only superficially accomplishes a communicative goal, creating a kind of opacity around any given situation, a fog of bafflegab, making them anti-productivity. And they produce simulations of meaning that mock the education process, inviting people to dissipate their curiosity in efficient acts of prompting that return expedient blocks of text behind which no one stands and for which no one claims responsibility. No one cares if it’s right or even arguable; it’s anti-learning.
If watermarking made it hard to deceive others with generative text, people could presumably still deceive themselves. This Washington Post report, for what it’s worth (it consults two AI enthusiasts and no critics for analysis of its findings), examined a corpus of chats and determined that the most common uses were “sex and homework.” The article alleges that “AI chatbots are taking the world by storm,” but as Merchant points out, “after that initial meteoric rise to 100 million users, there hasn’t been much growth in the year and a half since. ChatGPT has its fans, sure, but it hasn’t become a must-use product for most.”
On a personal level, I hope Merchant is right about generative AI because I have grown ashamed of writing the same critiques of it and feeling a false sense of accomplishment. It’s sort of fun for me to write about LLMs because I can indulge my amateur speculations into epistemology and lapse into the kind of literature-major humanism that was seen as passé when I was actually a student. But it increasingly feels like a distraction, tangential to more invasive and menacing forms of automation that, for instance, enable and rationalize wide-scale surveillance, eliminate due process, and systematically classify and stigmatize people. Generative AI is capable of producing mediocrity at an unimaginable scale and unwinding the Sittlichkeit accrued over centuries, but relative to the technologies that reproduce inequality and injustice, that are working more directly to make an “ever more degraded and militarized world,” it seems almost utopian, a weapon against ambition, a means of disincentivizing effort and will, evoking a world in which everyone is equally checked out of everything. After all, there are worse things than anti-productivity.
You may think your writing sounds repetitive, but if it were possible to quote an entire Substack article, I would have. The entire time, I was reading my own thoughts, which I can't verbalize as well as a professional writer.
"Organizing language by probability rather than intention produces a kind of pure entropy, the heat death of consciousness."
I know this is parenthetical but holy wow it floored me.