Video essays
Should I become a video essayist? Is that the best way to “reach Gen Z” and get my hands on some creator-economy riches? Since all the text of the future will apparently be machine-generated lorem ipsum filling up any open platform, won’t video be the only way to convince someone a thinking human being is behind what I am trying to communicate? Hadn’t I better adapt to the return of oral culture in the post-textual era?
When I see the phrase “video essay” I tend to think of essay films (F for Fake, Los Angeles Plays Itself, etc.) and Adam Curtis documentaries, but in practice, video essays on YouTube seem modeled after 60 Minutes segments, or are like TED talks delivered into a phone camera, or offer running commentary on other media in a Mystery Science Theater–like style. This Refinery29 article touts the “intertextuality” of video essays and the density of their pop culture references and argues that this makes them more suitable to a “generation that was raised primarily online,” as if no other generation ever shared a set of cultural references. What seems different now is that one can reasonably expect anything one might search for to be explained in a video, to the degree that if there isn’t a video explaining it, it probably doesn’t seem worth knowing.
That mentality is entirely alien to me, which makes me feel superannuated. I’m stuck with an idea of reading as being faster and more efficient than viewing, and I tend to rationalize this as being a more “active” way of learning: Reading is “real work” while viewing is “passive consumption.” But this is more a fetishization of work, a warped commitment to productivity for its own sake. (What am I even learning for? No one cares if I know anything.) Maybe it offers some psychic compensation to an unemployed, fully discouraged job seeker like me to always seem busy with the “labor” of reading.
As someone who has been poisoned by autodidacticism, I tend to balk at having people explain things to me; instead I like extracting an understanding of things from words on a page (or a screen) that seem to come from nowhere, that have no force of personality behind them that I am required to reckon with. Given how antisocial my attitude toward knowledge acquisition is, I wonder sometimes why I don’t relish the idea of chatbots, which would seem to offer information without any interpersonal relation — just as playing chess against a computer removes the threat of humiliating losses. But the current textual interface of chatbots seems merely transitional, and the text they generate doesn’t often feel like writing at all. Chatbots have been optimized to seem conversational, as though they are going to be scripting the on-demand explainer videos of the future.
Mainly I resist video watching (and video making) because I have convinced myself that I only really think about things by trying to write about them, and that only the practice of making and remaking sentences disciplines me into not just accepting at face value whatever it is other people or machines are saying. I don’t know what it means (yet?) to think by means of video editing, though I wonder if the tools designed to make that easier (like the “AI” suite in Google’s new phones) will facilitate or forestall that thinking. Generative text is palpably repellent in part because it seems to lack this quality of thinking ideas through live on the page — the person I wouldn’t want explaining things directly to me is still faintly detectable in writing, but they are talking to themselves, doing what I am usually doing when I write: talking to myself as though I were an audience. (Sorry for importing an ontology of presence into this discussion of writing.) Generative models suggest writing doesn’t involve thinking at all, and to me that feels like existential invalidation.
“AI is not a technology”
It seems deeply inadequate to think of AI primarily as a technology, especially given that the phrase is a marketing term meant to attract funding and distract from the industry’s material impact. (Kate Crawford’s Atlas of AI makes this point well.) One shouldn’t have to be an STS scholar to recognize that what a technology can do can’t be understood outside the context of who develops and controls it, who will be subjected to it, what assumptions and social arrangements it requires, what its externalities are, and so on — what a technology does is precisely a matter of those larger questions and not a myopic, tautological matter of its performing some specific task. Facial recognition technology doesn’t simply “recognize faces”; it reconfigures the idea of privacy and distributes new kinds of power, risk, opportunity, and vulnerability across society.
Dave Karpf describes this kind of thinking as “technological pragmatism,” which involves recognizing that technologies are “catalysts of change” and that “the direction of that change is determined through design choices, and through the social forces of existing institutional arrangements, financial incentives, and power structures.” One might go further and point out that technology is often deployed to reinforce those institutional arrangements and incentives and the power structures that rely on them — technology is directed to neutralize itself, so that whatever change it brings is superficial, fully assimilated by the existing socioeconomic order. The rich continue to get richer, regardless of the technological vehicle involved.
In a comment for a recent special issue on Critical AI Studies in the journal American Literature, Orit Halpern notes that “AI is not a technology—it is an epistemology and also a form of governmentality and political economy” that instantiates the ideas of Hayek’s essay “The Use of Knowledge in Society” and its concept of the market as an information processor. According to Hayek, in a passage Halpern quotes, “the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form, but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess.” But the market supposedly solves the problem of dispersed information by aggregating it all and turning it into price signals that tell people what things are really worth pursuing, even if they can’t understand why from their limited perspective. Hayek’s assertion “gestures to a grand aspiration,” Halpern writes: “a fervent dream for a new world governed by data.”
“AI” can be interpreted similarly, as an aggregator and processor of the world’s discrete bits of information, capable of producing synthetic truths that no individual can understand. Instead of generating prices, AI models generate weighted parameters that can be probed in various ways to extract statistically substantiated truths. AI models could be seen as markets of sorts themselves, and all human acts of communication as obscure attempts to revalue the “prices” or “parameters” of particular ideas — they are acts of economic exchange, regardless of the auspices under which people might think they are speaking. All communication is aimed at persuading the models to change, regardless of who we think we are talking to, because only the models have the scale, scope, and perspective to dictate which meanings are binding, what things “really” look like, and so on.
So while “many of us would agree to the networked nature of intelligence, the critique of enlightenment reason and objectivity, and fantasize about collective forms of engagement and decision making,” Halpern writes, perhaps thinking of “posthumanist” lines of academic discourse, it shouldn’t be overlooked how neoliberals like Hayek lay claim to similar ideas to basically refute the possibility of collectivity.
Back to postmodernism
One of the main themes of anti-generative-models critique is that generative AI heralds the end of artistic innovation; the models automate the recycling of past cultural production, further depleting the world of the kinds of audiences that could support an avant-garde. The rebuttal to this usually takes the position that all artistic innovation has always been no more than remixing and repurposing elements from the past to make something new, and that AI models are no different from previous technical developments that allowed for new tactics of appropriation and transformation.
This recent essay by Jason Farago about “Why Culture Has Come to a Standstill” seems to want to target the limitations of that framing. It grants that modernism is dead — that the specific historical conditions that produced it and its emphasis on formal innovation and purification are no longer operative. “Culture remains capable of endless production, but it’s far less capable of change,” Farago writes, but he attributes that not to artists’ shortcomings but to our having anachronistically absorbed modernism’s imperatives: “we are still inculcated, so unconsciously we never even bother to spell it out, in what the modernists believed: that good art is good because it is innovative, and that an ambitious writer, composer, director or choreographer should not make things too much like what others have made before.”
Apparently we have lost the ability to be in tune with the aesthetic implications of the historical conditions under which “we” live. (The collective we here is of course a fiction; historical conditions are not uniform.) Contemporary conditions don’t “inculcate” us with any directives about how to feel about contemporary culture, at least none that rise to the level of consciousness. There have been discussions of vibe shifts and so on, but these have been marked by baffled commentators unable to explain the logic behind the shifts — cyclical changes in fashion have come to seem totally arbitrary to those caught up in them (perhaps they always have been), but this hasn’t been troubling enough for anyone to make a convincing case for a new logic. The infrastructure for circulating content has vastly expanded, but this has only made the experience of consuming content seem more formulaic.
Perhaps the problem is a universal sense that we are “consuming content” rather than inhabiting a culture. Farago argues that since we are all unreconstructed modernists, we still believe that “a work of art demonstrates its value through its freshness.” But because there appear to be no more formal innovations, “we have shifted our expectations from new forms to new subject matter — new stories, told in the same old languages as before.” This is the AI doom loop, where a machine can replace the artist and just shuffle the deck of established signifiers on demand.
In the 20th century we were taught that cleaving “style” from “content” was a fallacy, but in the 21st century content (that word!) has had its ultimate vengeance, as the sole component of culture that our machines can fully understand, transmit and monetize. What cannot be categorized cannot be streamed; to pass through the pipes art must become information.
Formal innovation stalls because the culture industry can’t accommodate it, and the tech industry has instead made datafication (seeing the world one-dimensionally as “content”) very profitable. Generative AI serves that project as well; it flattens style into “style transfers,” parodic imitations that presume that any form and any content can arbitrarily be joined together in bogus travesties of aesthetic unity.
Farago insists that he doesn’t mean to “rant” about how culture was better in the past but instead wants to ask “why cultural production no longer progresses in time as it once did.” In other words, why are there no master narratives? What happened to a recognizable zeitgeist, and to a sense of progress, to teleology? These are basically the concerns raised in the 1980s by the “postmodern turn,” and Farago dutifully mentions Jameson and Virilio as heralds of postmodernism’s emergence (Lyotard didn’t make the cut for some reason). Farago argues that the “drift” into postmodernism is only now being completed thanks mainly to digital transmission and reproduction, which have accelerated the exhaustion of traditional forms and automated the “citation and rearrangement” of previous works. And he lands on the same sort of prescription that postmodern theorists would make for how to accommodate the postmodern condition: “free articulation,” which he poptimistically locates in the work of Amy Winehouse.
she was living through, and channeling into “Back to Black,” the initial dissolution of history into streams of digital information, disembodied, disintermediated, each no further from the present than a Google prompt. She freely recombined those fragments but never indulged in nostalgia; she was disappointed by the present but knew there was no going back. And at enormous personal cost, she created something enduring out of it, showing how much harder it would be to leave a real mark amid fathomless data — to transcend mere recombination, sampling, pastiche.
They tried to make her go to rehab, but she said no.