(1) Cringe theory
“Cringe” is a lot like “camp”: Both are modes of more or less deliberate failure. In both it is necessarily ambiguous who is in on the joke. Camp and cringe can both seem perfectly ironic or entirely without irony. Practitioners are deeply in earnest and utterly “inauthentic” at the same time; they commit to performances to a degree that forces audiences to question whether they are performing, whether they are even capable of it. Susan Sontag insists that good camp can’t be intentional, but is instead a “delicate relation between parody and self-parody,” a “seriousness that fails” and thereby succeeds.
If camp was “a fugitive sensibility” in Sontag’s estimation in 1964, maybe 21st-century cringe is camp’s capture and rehabilitation as a mainstream entertainment genre, an establishment form of comedy that champions the necessity of everyday humiliation in a world that no one can imagine being improved or saved anymore.
A New York Times piece by Kate Ryan that examines “CringeTok” claims that “as a concept, cringe is deceptively hard to describe,” but that may be less a question of its obscurity than the whole point of it being deliberately uncomfortable to discuss. It doesn’t seem all that hard to recognize people debasing themselves for laughs, putting themselves out there under the auspices of “any attention is better than no attention.” (All posting is cringe posting.)
Cringe creators make the fundamentally humiliating practice of clamoring for attention seem less embarrassing for the rest of us, who don’t need to go so far or still have the luxury of drawing the line somewhere. Ryan proposes that “being authentically embarrassing is still authentic,” which makes cringe interesting to brand advertisers, who always want to frame authenticity as following trends at all costs. Cringe influencers obfuscate that apparent contradiction; people can disavow the shameless desperation in fashion-following and attention-seeking, laugh at it while navigating their own relationship to it and to the contradictions inherent in having to “be yourself” in awkward performative ways.
In this puff piece about Tim Robinson, the screamo comedian behind the Netflix sketch series I Think You Should Leave, Sam Anderson proposes that “cringe comedy is like social chile powder: a way to feel the burn without getting burned.” This sounds a bit like 18th-century theorizations of the sublime, a way of tricking yourself into a feeling of mastery over something — in this case, “society” — that has indomitable mastery over you. Anderson likens it to “benign masochism,” and interprets it as a way of dealing with the instability of social practices: “Some rules are rigid (stop signs), while others are flexible (yield signs) — and it’s your job to know the difference. Not to mention that the rules are never fixed: With every step you take, with every threshold you cross, the rule-cloud will shift around you.”
But that also sounds like a description of the limitations of various forms of automation: of autonomous vehicles that can’t handle anomalies, of LLMs that can’t recognize when their own output is nonsense and which become outdated as the use of language evolves. If cringe is a perplexed response to the ambiguous and ever-changing nature of social practice, one might say that AI is necessarily cringe.
(2) Vibes as bias
A few weeks ago, Max Read proposed that “one way of thinking about a program like ChatGPT is that it’s much better at assessing vibes than it is at reproducing facts,” which makes it good enough to get the gist of some phenomenon — “contextualizing, explaining, and appraising a given subject.” That seems like another way of saying that LLMs reproduce without euphemism and in an authoritative tone the ambient biases and prejudices that circulate about any subject. General impressions unencumbered by factual substantiation are just bias. “Vibes” are just vaguely mystified prejudices.
A Bloomberg report that looked at Stable Diffusion finds that its text-to-image generator not only reproduces biases but amplifies them.
“We are essentially projecting a single worldview out into the world, instead of representing diverse kinds of cultures or visual identities,” said Sasha Luccioni, a research scientist at AI startup Hugging Face who co-authored a study of bias in text-to-image generative AI models.
That “single worldview” is what such models are designed to produce out of whatever data they are fed; they can’t do anything but average out and homogenize “diverse kinds of culture.” So it’s not as though generative models can simply be de-biased and trained not to disseminate univocal vibes. The methodology is to eliminate the “difference” between a concept and an image of it; it necessarily presumes that there is a right answer to the question of what something should look like.
Dan McQuillan points out that
statistics, even Bayesian, does not extrapolate to individuals or future situations in a linear-causal fashion, and completely leaves out outliers. Ported to the sociological settings of everyday life, this results in epistemic injustice, where the subject’s own voice and experience is devalued in relation to the algorithm’s predictive judgements.
This makes all generative models into forms of “algorithmic violence”: “AI may seem to produce predictions or emulations,” McQuillan writes, “I would argue that its real product is precaritisation and the erosion of social security; that is, AI introduces vulnerability and uncertainty for the rest of us.”
(3) Frame analysis
John Phipps, who writes a newsletter about painting, posted about the use of generative models to speculatively expand works beyond their frames, as in the example going around of the Mona Lisa sitting in a vast monotonous landscape:
A friend of mine, who believes that people are basically losing the ability to discriminate between reality and fiction, sometimes describes this attitude as one in which the very idea of a narrative or pictorial frame is seen as a form of selfishness. A deliberate denial of audience access, and so an instance, in the final analysis, of exclusion. So it makes sense to me that at a moment like this, painting— the most unreachable, resistant and curiously prestigious of the major art forms— should also be subject to the same impulse from various directions. [emphasis added]
This captures the entitlement that marks a lot of “fan culture,” which is another phrase for the inevitable disappointments of consumerism dictating all forms of pleasure. Consumerism suggests that pleasures are immediate, fungible, serially collectible, purchasable as one package after another. Pleasure shouldn’t entail resistance or friction yet should nonetheless prompt “engagement.” This doesn’t make sense as practice, which is perhaps part of why reality and fiction collapse into each other. We live a fictional relation to our society, to our desires, to ourselves.
Wanting to “expand the frame” or extend narratives with fan fiction and so on is really a way to shrink and nullify the existing work and its irresolvable mysteries. The expanded universe is always a diminished one.
(4) Word processing
In The Beast in the Nursery, Adam Phillips quotes an anecdote of Ted Hughes’s about how children’s writing changed with word processing.
when I first worked for a film company. I had to write brief summaries of novels and plays to give the directors some idea of their film potential—a page or so of prose about each book or play, and then my comment. That was where I began to write for the first time directly onto a type-writer. I was then about twenty-five. I realised instantly that when I composed directly onto the type-writer my sentences became three times as long, much longer. My subordinate clauses flowered and multiplied and ramified away down the length of the page, all much more eloquently than anything I would have written by hand.
Recently I made another similar discovery. For about thirty years I’ve been on the judging panel of the W. H. Smith children’s writing competition.… Usually the entries are a page, two pages, three pages. That’s been the norm. Just a poem or a bit of prose, a little longer. But in the early 1980’s we suddenly began to get seventy and eighty page works. These were usually space fiction, always very inventive and always extraordinarily fluent—a definite impression of a command of words and prose, but without exception strangely boring. It was almost impossible to read them through.
After two or three years, as these became more numerous, we realised that this was a new thing. So we enquired. It turned out that these were pieces that children had composed on a word processor. What’s happening is that as the actual tools for getting words onto the page become more flexible and externalised, the writer can get down almost every thought, or every extension of thought. That ought to be an advantage. But in fact, in all these cases, it just extends everything slightly too much. Every sentence is too long. Everything is taken a bit too far, too attenuated. There’s always a bit too much there and it’s too thin. Whereas when writing by hand you meet the terrible resistance of what happened your first year at it when you couldn’t write at all … when you were making attempts, pretending to form letters. These ancient feelings are there, wanting to be expressed. When you sit with your pen, every year of your life is right there, wired into the communication between your brain and your writing hand. There is a natural characteristic resistance that produces a certain kind of result analogous to your actual handwriting. As you force your expression against the built-in resistance, things become automatically more compressed, more summary and, perhaps, psychologically denser.
This has an obvious application to LLMs, which also produce thin, overextended, unreadable text — as though the models are filibustering to fill the page. I don’t know about Hughes’s fetishization of handwriting, but the larger point about thinking emerging from friction, from material obduracy, seems apt. Thought isn’t fluency but the synthesis of repeated struggles with inexpressibility. What Hughes refers to as compression and density is basically the opposite of what probabilistic models do in processing and abstracting the substance and context of lived experience out of language.
(5) Reactionary modernism
There is a certain sort of commentator who wants AI to “work” not because they will directly profit from it but because they are palpably excited at transferring all of society’s decision-making power to entities controlled by the biggest corporations, to seeing a totalizing concentration of social power realized once and for all so they will know exactly which false god to try to worship and placate.
John Herrman recognized this type in the sort of poster who gets excited about how it’s “so over” for certain people (that is, whomever the poster envies and/or resents), thanks to AI: “It’s AI as a reckoning, a punisher, a revealer of frauds. It’s AI as a future vindicator of their hunches about how the world works, and as an extension of their politics. It’s AI as a cleansing force that humbles your enemies and proves you right.” Again, AI appears to substantiate “hunches” or “vibes” that are no more than self-interested prejudices, what is convenient for privileged people to accept as obviously true.
In a post about the reactionary turn of tech industry leaders, John Ganz offers a useful term that I also think helps in placing these sorts of people: “reactionary modernists.”
This is the tradition identified in Jeffrey Herf’s 1984 book Reactionary Modernism: Technology, culture, and politics in Weimar and the Third Reich. Herf noticed that what typified the thought of the Conservative Revolutionaries and a set of right-wing engineers in Weimar and then the ideologists of the Third Reich was not a rejection of modernity so much as the search for an alternative modernity: a vision of high technics and industrial productivity without liberalism, democracy, and egalitarianism. Technology in the service of a hierarchical society and an authoritarian politics, or rather, hierarchical society and authoritarian politics in the service of technology, as the correct pathway to unfettered progress and development.
This not only can “help us understand the openness to biological racism and antisemitism within the tech oligarch set,” as Ganz notes, but also explains the enthusiasm for the bias-reproducing (and amplifying, as noted above) tools of “AI,” which can be deployed to reify hierarchies and which encourage the kind of complacency, passive acceptance, and allergy to complex thought that underpins authoritarianism.
(6) Days on the road
I wrote an essay for the journal Overland on Karen Levy’s excellent book Data Driven, which is about automation and surveillance in the long-haul trucking industry.
Please please consider becoming a free or paid subscriber; I could really use the encouragement. Thanks!