I’ve been reading articles about ChatGPT all week, ordering them in my mind to make the discourse about it into a kind of coherent narrative that has ebbed and flowed from excitement to panic to backlash to counter-backlash. It’s apparently never too late to say “it’s early days” with generative AI, or to rehash concerns that have been aired with each new development in the means of mechanical reproduction.
On Twitter, Robin James suggested that “the ‘AI Art’ discourse is giving a real John Philip Sousa ‘The Menace of Mechanical Music’ vibe,” which seems true of some of the more reactionary commentators. Sousa, writing in 1906, was concerned that listening to newly available pre-recorded music would disincentivize children from developing their own musical abilities. Rather than seeing phonographs as a means for allowing more people to partake in cultural consumption (and perhaps becoming interested in learning to play themselves), Sousa regarded them as “automatic music devices” that replaced musicians’ labor, serving as a “substitute for human skill, intelligence, and soul.” For Sousa, pre-recorded music, unlike live performance, lacks true expression; it reduces “music to a mathematical system of megaphones, wheels, cogs, disks, cylinders, and all manner of revolving things.” The phonograph orients future innovation on improvements to its own apparatus, at the expense of the “human possibilities in the art.”
Likewise, anxious critics of generative AI imagine that it will replace artists and degrade the public’s capacity to even notice what has been lost. It has the potential to reduce not merely music (as with generative models like OpenAI’s Jukebox) but all forms of human cultural production to a “mathematical system” of statistical correlations and weighted parameters. And how will the children ever learn to write if they don’t have to craft their own five-paragraph essays for their teachers? As Sousa argued,
When music can be heard in the homes without the labor of study and close application, and without the slow process of acquiring a technic, it will be simply a question of time when the amateur disappears entirely, and with him a host of vocal and instrumental teachers, who will be without field or calling.
From there, it is doom to the “national throat,” as children, “if they sing at all,” will be no more than “human phonographs — without soul or expression.”
As overwrought as Sousa’s concern seems, I’m not entirely unsympathetic. It’s only a small step from “The Menace of Mechanical Music” to “The Culture Industry: Enlightenment as Mass Deception” — a comparison that perhaps discredits Adorno and Horkheimer as much as it excuses Sousa, but one that gets at larger stakes in the argument than the fate of the “national throat.” With respect to generative AI, the point is to think of it not merely as a gimmick or computational magic but as an emerging aspect of the culture industry, with the same implications for social domination. Generative AI is a form of propaganda not so much in the confabulated trash it can effortlessly flood media channels with as in the epistemological assumptions on which it is based: AI models presume that thought is entirely a matter of pattern recognition, and that these patterns, already inscribed in the corpus of the internet, can be mapped once and for all, with human “thinkers” always already trapped within them. The possibility that thought could consist of pattern breaking is eliminated.
Another way of putting it is that large language models like ChatGPT are less generators than thought simulators. The trick of all simulation is to restrict the scope and range of possible human inputs to what the machine can process, while making those limitations appear comprehensive, a clarifying articulation of what it is humans actually do. Simulations purport to be totalities in which every act has rational, knowable meaning. They presume a closed system, where a response to each human input can be computed and remain convincing enough to maintain the simulation’s “spell” (to borrow one of Adorno’s favorite words for the administered world of social repression under capitalism).
With a truck-driving simulator, it seems reasonable enough to presume you can model all the relevant human actions and their consequences. But generative models aim to produce a simulation of knowledge, without requiring the effort of thought — without the “slow process of acquiring a technic,” as Sousa put it. You don’t learn how to think from this simulation; you learn to see thinking as superfluous, supplanted by a computational process. This allows consumers to experience “thinking” or “conversation” not as something that exceeds the contours of the program but simply as the program’s execution — a kind of show that may produce weird and surprising results but unfolds without any spontaneity or freedom. To participate in the program, consumers can act programmatically themselves, making themselves a further piece of code. Hence, ChatGPT refines itself through the human inputs it entices out of us as we adopt the aspect of a debugging subroutine.
Nonetheless, it seems alarmist to think that AI models will eventually lead to the atrophy of human thinking. Instead they seem like whetstones. You can see this in how people test ChatGPT’s limits, trying to expose its errors, much like some people play video games not to win but to find the glitches. Every refinement to the model prompts a deeper exploration of how it falls short of cognition and a clarification of what can’t be totalized into the simulation. Yet at the same time, AI models counter that tendency and further the culture industry’s work of “advancing the rule of complete quantification,” as Adorno and Horkheimer put it. Whereas predictive recommendations (i.e. targeted ads and other attempts at manipulation) work toward this by reducing individuals to their data, generative models do it by making the world’s “content” seem derivable from data sets. In that sense content is pre-schematized, extending the 20th century culture industry’s content formulas into a more elaborate means for reproducing superficially variant sameness. In an especially Sousa-esque passage, Adorno and Horkheimer write:
A constant sameness governs the relationship to the past as well. What is new about the phase of mass culture compared with the late liberal stage is the exclusion of the new. The machine rotates on the same spot. While determining consumption it excludes the untried as a risk. The movie-makers distrust any manuscript which is not reassuringly backed by a bestseller. Yet for this very reason there is never-ending talk of ideas, novelty, and surprise, of what is taken for granted but has never existed. Tempo and dynamics serve this trend. Nothing remains as of old; everything has to run incessantly, to keep moving.
For only the universal triumph of the rhythm of mechanical production and reproduction promises that nothing changes, and nothing unsuitable will appear. Any additions to the well-proven culture inventory are too much of a speculation. The ossified forms — such as the sketch, short story, problem film, or hit song — are the standardized average of late liberal taste, dictated with threats from above. The people at the top in the culture agencies, who work in harmony as only one manager can with another, whether he comes from the rag trade or from college, have long since reorganized and rationalized the objective spirit. One might think that an omnipresent authority had sifted the material and drawn up an official catalogue of cultural commodities to provide a smooth supply of available mass-produced lines. The ideas are written in the cultural firmament where they had already been numbered by Plato – and were indeed numbers, incapable of increase and immutable.
This begins as mainly a critique of IP-dependent cultural production, but it also applies to generative AI, which is frequently used to apply one formulaic style to some other pre-given blob of content. Write a series of rhyming tweets about artificial intelligence in the style of Adorno. But the conclusion speaks to how AI models operate as though all the possible ideas are already contained in the data sets, and as though “thinking” merely consists of recombining them. Instead of hack writers cranking out predictable material and censors suppressing anything subversive, generative models — the “omnipresent authority” that has “sifted the material and drawn up an official catalogue of cultural commodities” — can literally predict content into being that is neutered of subversive potential in its very genesis. The beat goes on, drums keep pounding a rhythm into the brain.
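The conceit that generation is merely recombination can be made literal with a toy sketch. The following bigram model is illustrative only — actual large language models use learned neural weights rather than lookup tables — but it shows, in miniature, a “generator” that can emit nothing except continuations already recorded in its training corpus:

```python
import random

# Toy corpus: every "idea" the model can ever have is already in here.
corpus = "the beat goes on the drums keep pounding a rhythm into the brain".split()

# Record which word follows which in the corpus -- the model's entire "knowledge."
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, length=8, seed=0):
    """Emit text by repeatedly sampling a continuation seen in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = bigrams.get(out[-1])
        if not options:  # nothing in the corpus ever followed this word
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))
```

Every adjacent pair in the output has, by construction, appeared in the corpus: the system cannot produce an untried combination at the pair level, only shuffle the “well-proven inventory.”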