It’s a bit absurd that a band best known for an album called Young, Loud, and Snotty would be working on a new record as near septuagenarians. It’s even more absurd that a band called the Dead Boys would want to reanimate their dead lead singer, Stiv Bators, with an AI model so that he could sing (or as the press release put it, be “artfully dusted”) on that new record. This sort of self-ironizing stunt seems more in keeping with something Bators’s one-time bandmate in the Lords of the New Church, Tony James, would have come up with: James was the mastermind behind Sigue Sigue Sputnik, whose album Flaunt It is probably best remembered for having genuine paid advertisements between the tracks. (Shouldn’t any pop song be understood as an advertisement for itself anyway?)
This NME report details the drama that ensued when the living Dead Boy lead singer, Jake Hout, understandably enough, decided to quit the band rather than share duties with a simulation of the vocalist he was already struggling to credibly replace. “I know that there are gray areas with lineup changes and artistic tools, etc,” Hout wrote in an Instagram caption, “but for me personally, AI is a bridge too far.” Dead Boys guitarist and sole remaining original member Cheetah Chrome responded with an Instagram caption of his own, saying that Hout refused to be patient while the band “got more info on AI to make an informed decision.” He then declared that you can still come to a Dead Boys show if you want “the punk credibility.” And it’s true, nothing is more credibly punk than data-driven informed decision making.
One could plausibly interpret the willingness to use AI as a kind of egoless gesture, an acknowledgment that one can contribute more by nullifying one’s own personality in favor of a machine that accomplishes tasks in an institutionally approved, resolutely average sort of way. I imagine that in the workplace, the eagerness with which an employee adopts AI is taken by bosses as an indication of how much of a “team player” they are — no “I” in team; no “I” in AI either (if you ask an LLM). It indicates that you’re not invested in doing things your way or leaving your personal stamp on anything, that you are not overambitious or invested in your own talent, that the main thing you have to contribute is diligent and self-effacing obedience to management’s desired shortcuts. All of that seems very punk rock to me too.
There is probably a very limited interest in new material from the Dead Boys in the 2020s (the follower counts on those Instagram accounts seem telling), so they have nothing to lose in getting whatever headlines they can for committing AI-adjacent heresies. It would be a mistake to interpret this incident as much of a harbinger of anything more than the willingness of desperate people to latch onto fads in whatever way they can. But I also can’t help but wonder how common this nonconsensual sampling approach will become — boiling down a musician’s recorded history into a kind of synthesizer that can be played like an instrument — and whether the intellectual property rights over the dead will be able to contain it. Do Amy Winehouse’s heirs get to make new Amy Winehouse records if they want to? Does her former record company? Is there any market for that sort of thing?
On the surface, this approach seems to make a performer’s distinctive style into something fungible and easily reproducible, but in doing so it trivializes what is readily copiable and clarifies what is essential about their legacy — the choices they did make, the work they did perform. If Cheetah Chrome defied plausibility and made a great album using the re-created voice of Stiv Bators, none of the credit would accrue to Bators for it. And it is not as though the band is deliberately trying to trick anyone into believing he is still alive any more than Abba is trying to pass off their touring holograms as being flesh and blood. In this respect what the Dead Boys are doing here, while stupid, isn’t as egregious as that sham Beatles song made from a John Lennon demo.
Using AI tools to re-create dead legends doesn’t leverage the legend for your new work; it pits you against that legend, whose legacy becomes an obstacle to be overcome. Listeners have to be convinced to see the dead legend as having never had anything significant to contribute through their own will or intentionality — that they were never more than a mannequin to be manipulated by whoever happened to pull the strings. The song has to convince you that it is not a grotesque travesty before it can be assessed for any of its musical qualities. It’s a self-contradictory strategy that can only fail by succeeding. Every attempt to simulate an auratic presence necessarily fails; it would have to abolish the aura to produce a convincing sense of presence.
It’s not hard to find the tools to make an AI Elvis sing a personalized song about you, or have Sinatra sing Nine Inch Nails songs or whatever, but none of that sort of concoction rises above the level of momentarily diverting novelties, more interesting in description than in execution. As easy as it is to consume these sorts of things — they don’t require or invite careful listening — I can’t imagine a case where their gimmick could be transcended, where something like that could be compelling in its own right; it’s easier to imagine that repeated exposure to that kind of content makes everything seem like a gimmick.
If there were an audio-based social media, it would probably be overwhelmed with this aural slop, systematically reducing the amount of attention we are accustomed to paying to anything we are exposed to. We could thereby be trained to expect that anything we hear should be immediately understandable and immediately dismissible, so that we have no choice but to keep listening for something else. There should be no threat that any content will require our careful attention, only the minimum amount to register as metricized “engagement.” This would make for the apotheosis of the streaming platform.
That seems to be the point of AI video slop as well, as this 404 Media piece by Jason Koebler about a “festival” of AI-generated films sponsored by Chinese television maker TCL suggests. The festival was accompanied by a presentation by TCL executives about what the point of it all was:
Catherine Zhang, TCL’s vice president of content services and partnerships, then explained to the audience that TCL’s streaming strategy is to “offer a lean-back binge-watching experience” in which content passively washes over the people watching it. “Data told us that our users don’t want to work that hard,” she said. “Half of them don’t even change the channel.”
AI video would then meet the passive demand for undemanding content and help reinforce the stupor in which viewers remain too torpid to change the channel. It would be optimized for inducing a narcotizing apathy, and TCL’s connected TVs would be able to collect the data on what sorts of slop were working to that effect. The data that “told them” that “users don’t want to work that hard” will become the data that guarantees that they won’t.
Television has always been thought to train users in passivity, as books like Jerry Mander’s Four Arguments for the Elimination of Television (1978) insist. Mander argues that TV as a form is suited to the transmission of images of products:
You do not have problems of subtlety, detail, time and space, historical context or organic form. Products are inherently communicable on television because of their static quality, sharp, clear, highly visible lines, and because they carry no informational meaning beyond what they themselves are. They contain no life at all and are therefore not capable of dimension. Nothing works better as telecommunication than images of products.
The way Mander defines “product” here also seems like an apt description of generative content: material that contains “no life at all.” Koebler notes that the festival’s AI films “all suffer from the same problem that every other AI film, video, or image you have seen suffers from. The AI-generated people often have dead eyes, vacant expressions, and move unnaturally,” and these problems “affect a viewer’s ability to empathize with any character.” But empathy would only require a viewer to work harder.
Pieces of generative content, by definition, also have “no informational meaning beyond what they themselves are.” They repel the interpretative impulse and invite it to atrophy; instead there is a passive accumulation of associations presented and experienced as self-evident, especially since there is no artistic intentionality to complicate the meanings. As Eryk Salvaggio argues here:
AI slop breaks down the inquiry and investigation into the world as it is, replacing the critical landscape with text and image fragments that affirm the world as it is imagined. In essence, it circumvents any desire to understand the world because it offers us the immediate satisfaction of having a feeling about the world.
The immediacy is the content, regardless of what is depicted. And that immediacy induces passivity. Generative models are optimized to produce content that feels immediately like what it is, which in Mander’s terms is like seeing only images of products. Another way of putting that would be in terms of “reification” or of living relations being presented as dead things.
When Koebler talked to TCL’s “chief content officer for North America,” he came across very much like Cheetah Chrome:
“There is definitely a hyper-focused critical eye that goes to AI for a variety of different reasons where some people are just averse to it because they don't want to embrace the technology and they don't like potentially where it's going or how it might impact the [movie] business.”
Some short-sighted people are just prejudiced against AI, and they aren’t being patient enough to wait for more information to make an informed decision.
But it helps to be more specific about what the “hyper-focused critical eye” actually sees in AI content, as Koebler is here:
For every earnest, creative filmmaker carefully using AI to enhance what they are doing to tell a better story, there will be thousands of grifters spamming every platform and corner of the internet with keyword-loaded content designed to perform in an algorithm and passively wash over you for the sole purpose of making money. For every studio carefully using AI to make a better movie, there will be a company making whatever, looking at it and saying “good enough,” and putting it out there for the purpose of delivering advertising.
Generative content is almost uniquely capable of fulfilling the mission of numbing audiences at scale into the necessary state of passivity in which “whatever” is enough. It not only produces a superfluity of content that warrants no attention; it can flood platforms and search engines so that no other kind of content can be discovered there. It marks the point at which entertainment and anti-entertainment become indistinguishable.