Without Dust and Heat
Recently Netflix has been testing the option of letting viewers watch prerecorded programming at 1.5x speed (a feature that already exists on YouTube) so that users can consume more content in less time. There is no word on whether they are working on the viewer’s ability to accelerate live programming.
I’ve availed myself of the variable playback speed option when I’ve had to watch mandatory training videos at work or recordings of panel discussions or lectures. But I don’t understand why anyone would want to accelerate something they are watching for entertainment and not for information. Isn’t the point of entertainment to surrender to someone else’s pace? To pass time rather than fight against it? Maybe the point is that there is no entertainment, only information.
This Vice column by Bettina Makalintal tries to make a populist case for watching TV at high speed, pitting viewers against Hollywood “big shots” who want to make you do things their way and watch “good” content. “Variable playback is for bad stuff, not ‘art,’” she argues, “and it's for all of us who watch it even though we know it doesn't really deserve our full attention.” If the social pressure to keep up with certain shows exceeds their actual entertainment value, you can close that gap by speeding the shows up. In fact, this experience of efficiency and “gaming the system” may in itself be more pleasurable than the content. Once we collectively intuit this, we can then do one another the favor of only talking about bad and boring shows so that we will all feel authorized to watch only the sorts of things that demand we fast-forward through them. We can get a kind of “executive summary” of shows without having to endure them. Only chumps fall for duration.
If sufficient numbers of people are watching shows on fast speed, the producers of these shows will of course begin to take this into account and optimize them for that modality of viewing. Plots and characters will be further simplified, and points of emphasis will be awkwardly drawn out to try to make them register with viewers. This will reinforce the ambient sense that it is “wrong” to consume content at its ordinary speed, that one is failing to optimize oneself as is expected. It’s easy to imagine platforms changing their default playback speed to 1.5x or 2x.
The feeling of falling behind will become more acute, threatening us on every screen. It will be palpable in the experience of watching people speaking in “slow” voices. It will be the sound of inadequacy, of not producing enough consumption data to be a relevant consumer, to be a fluent social participant.
Belonging to any culture requires that we have certain reference points in common. But the individualism and personalization dominant in our particular culture militates against any such commonality. So economizing on our investment in the common culture (and conceiving it as the content that demands the least amount of concentration and focus) makes some sense: We can participate in the zeitgeist even as the zeitgeist is solipsism. Netflix content that we could profitably watch at double-speed will necessarily be vapid: Its purpose is not to communicate complex ideas or even allow us to fulfill vicarious or aspirational fantasies but to serve as a vacant placeholder with which we can signal our noncommittal participation, our will to engage at speed, without bogging down over substance. This allows us to participate in the common culture at a different level, the common culture of accelerating ourselves, of becoming more efficient information processors.
In his recent book Automated Media, Mark Andrejevic addresses accelerated viewing as a species of automated consumption, in which the interface partly consumes content for us as a way of getting us up to speed with the pace of production. "We can perhaps feel this pressure in the role that automated systems play in our daily lives," he writes, "the incitation to relentlessly accelerate our communicative activity to overcome the frustrating limits of our sensorium." It is incumbent on viewers to keep from becoming “points of friction” in the system that takes data about what they are doing to produce new content based on the revealed preference for more of it. “Notionally, the automation of production would be complemented by that of consumption in a self-stimulating spiral,” Andrejevic writes.
This spiral is powered by algorithmic recommendation systems that do our discovery and, essentially, our desiring for us, feeding us putatively novel content that mimics our taste profile while allowing us to experience “curiosity” without the time or effort involved with being curious. The work of "wanting" to do something is construed as wasteful and inefficient, a form of friction rather than an end in itself — as though building anticipation and situating the social meaning of a practice weren’t intrinsically part of being able to enjoy anything. Instead, anticipation is treated as an eliminable inefficiency, merely ornamental foreplay.
With the desire for content displaced onto algorithms and disavowed, the next logical step is to make the actual consumption of that content, now a rote formality, as expedient as possible. Once desire is automated, it follows that fulfillment of it must also be automated. "The attempt to master all available content — to become fully aware of all that’s out there — pre-empts the act of experiencing it," Andrejevic notes. "Pre-emption is, in other words, the antithesis of experience."
This is part of what Andrejevic calls the “cascading logic” of automation: “automated data collection leads to automated data processing, which, in turn, leads to automated response.” Tech companies realized they could track what we do, amassed enough data to extract patterns about our behavior, and then began making decisions about what would make us most profitable to them. Now they are systematically enclosing our environments with an assemblage of sensors and other surveillance mechanisms to be able to enforce those decisions and compel our behavior to fit those patterns.
The subject position that would have experienced the desire and the satisfaction of media consumption on its own terms is thereby abolished (if it ever existed); consumption behavior is externally administered through techniques of manipulation instead. “Complete specification does not enhance the subject,” Andrejevic notes, “it liquidates it.” Algorithms that purport to know us better than we know ourselves are designed to annihilate us.
One obvious expression of desubjectification is the example Navneet Alang discusses in this column: algorithmic text completion — e.g. Google’s “Smart Compose” that tries to write your emails for you, so you don't have to be psychically present to communicate. The pretense is that this frees you to do higher-level things, in the same way fast-forwarding through junk content would theoretically save you time to watch more esoteric material later. But it’s not clear that "later" ever comes; rather “saving time” becomes an alibi for postponing it forever and instead “doing” more and more of the rote consumption with the assistance of automation. In this sense, acceleration and efficiency become modes of procrastination.
AI text completion can work as a kind of defamiliarization process, when you provide it with poetic prompts and approach what it produces not as a practical time-saver but as a probe into the deep strangeness of ordinary language. There is a “social average” component to the text that machine-learning-driven engines produce, which makes it decidedly strange when its output is stripped of the contextual social relations that ordinarily govern language use. For instance, I could make poems all day with this AI text generator — the fact that the AI can’t really “try” to say something allows me to read the text into poetry, to see intention where there can’t be any. It helps me see my own will to intentionality more clearly.
When text completion is adopted as a mode of streamlining and inserted into social relations as an expedient, however, it becomes more problematic. Rather than estrange language and refresh our relation to it, it impoverishes communication. Alang makes the point that automated text completion generates a “centripetal” force that standardizes and simplifies language across unprecedentedly global populations. You could say that the centripetal force also structures an opposite, centrifugal force: The algorithmic routinization of language at one level sparks new language forms at another, which can be seen in the language games of online microcommunities and the pockets of “weird” that ubiquitous connectivity can facilitate. But that sort of escape can be fugitive when the systems imposing standardization are so strong and intrusive. Algorithmic text completion intervenes in how we think, making us absent where we are expected to be present, at the moment we are ostensibly speaking. Smart Compose is smart because it renders us dumb. It assures us that we don’t need to be the speaking subject behind our words; Smart Compose allows the “langue” (the universe of language in its general use) to literally speak us into being.
Autocomplete is touted as being ideal for work contexts, which suggests that we have been so demoralized by the relations of production that we would rather be objectified by them, let them speak us, than try to sustain our subjectivity within them in hopes of exercising some agency, skill, control. Perhaps the hope is that automating the “work self” frees time for subjectively inhabiting some other creative self — that it could somehow produce time for leisure, for enjoyment on terms other than efficiency. But efficiency under capitalism inevitably serves further acceleration: more work in less time, not more freedom once “the work” is done. Its effect is to make more work (and more exploitation) possible. There is no “freeing up time for workers” under capitalism.
“Saving time” with Smart Compose ensures further objectification within work processes, more and more emails automatically spoken through us, less and less hope that it is worth thinking about what we do to live. The same is true on the consumption side, with accelerated playback. Consumption is reduced to the work of information processing and participation in capitalist circuits of value creation.
The elimination of the subject at the level of media consumption, Andrejevic argues, plays into a larger project of social deskilling, reducing communication to the sheer instrumentality suitable to the mechanized pursuit of profit and authoritarian control.
To make information processing as efficient as possible, the particular content of information needs to be suppressed and abstracted: signal vs. noise, rather than something experiential or interpretive in its particulars. “Wanting” to do something — desire, subjective purpose, curiosity, etc. — impedes the industrialized process of forcing more of that something (some organization of information) to happen on capital’s terms. Andrejevic points to AI “mastering” human strategy games to illustrate this.
Examples of automated “intelligence” tend to sidestep the reflexive layer of subjectivity in order to focus on the latest computer achievements: the fact that machines can now beat us in chess, Go, and some computer games. But there is little talk about whether the machines “want” to beat us or whether they get bored or depressed by having to play creatures they can beat so easily when there are so many other things they could be doing. That such observations seem absurd indicates how narrowly we have defined human subjective capacities in order to set the stage for their automation. We abstract away from human desire to imagine that the real measure of human intelligence lies in calculating a series of chess moves rather than inventing and popularizing the game in the first place, entertaining oneself by playing it, and wanting to win (or perhaps letting someone else win).
This constrictive reinterpretation of “intelligence” has alarmed some futurists and scientists working in AI. In To Be a Machine, his book about transhumanism, Mark O’Connell cites Stephen Omohundro’s paper “The Basic AI Drives,” which begins:
Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.
In short, because of AI’s strictly functionalist orientation toward “maximizing its utility function,” it is intrinsically evil. It lacks the will or capacity to do anything errant or gratuitous; it is compelled by its purpose to try to dominate.
The AIs of Omohundro’s paper could be construed as technological reconceptions of the rebel angels in John Milton’s Paradise Lost, forever damned by the same compulsion to be evil. In A Treatise on Christian Doctrine, Milton describes the consequences of sin as a “spiritual death” that consists in the loss of “right reason” and manifests in a “deprivation of righteousness and liberty to do good, and in that slavish subjection to sin and the devil, which constitutes, as it were, the death of the will.” Sin is its own punishment because it subjects sinners to the compulsion of further sin: It is the opposite of freedom. An AI, then, is sinful by definition — without will and characterized ontologically by a “slavish subjection” to its programmed purpose. AI’s relentless and limitless pursuit of self-improvement through rigid, utilitarian conceptions of rationality condemns it to predictability: It will always do the selfish thing that maximizes utility along a single axis; it can’t conceive of doing something for others without that effort being reconceived as a form of utility that accrues to itself. AI forever lacks, to use Milton’s idiom, “grace.”
In a sense, social deskilling is aimed at the elimination of the human capacity for grace, such as it is. I don’t know much about Christian theology but have the vague sense from long-ago graduate-school seminars that Milton’s belief was that humans must rely on God to experience grace and participate in its free-ranging goodness. Satan’s temptation is toward treating "free will" and agency as a form of self-reliance, which turns out not to be agency at all but the base compulsions of self-aggrandizement. Our development and implementation of AI has become a similar distortion of agency, a systematized rejection of genuine free will in favor of programming, of predictability according to what humans can conceive — which from a theological point of view is not very much.
Automation deprives people of choices by claiming to fulfill them in advance, or by making the stakes of those choices seem beside the point. It tries to make swapping our will for superior processing capacity seem inevitable. “The automation of communicative processes envisions a surpassing of the pace and scale of human thought and interaction, which is why the technological imaginary tends toward post-humanism,” Andrejevic argues. “If automated systems can outstrip both human physical and mental capacities, avoiding obsolescence means merging with the machine.” This is not a humble concession to the machine’s superiority so much as the ultimate hubris. With enough surveillance and data capture in place we can assume a godlike totalizing perspective and automate the world in accordance with it.
The impulse to watch things or listen to things or read things at inhuman speeds indulges the same fantasy about becoming a machine and not needing to wrangle with interpretation or ambiguity or multiple simultaneous and contradictory possibilities. Instead, one escapes subjectivity into a perfectly comprehensible and operable world — into divine objectivity.
From a Miltonic perspective, automated decision making abrogates the freedom to choose good, which effectively guarantees evil. In Areopagitica, he famously wrote:
I cannot praise a fugitive and cloistered virtue, unexercised and unbreathed, that never sallies out and sees her adversary, but slinks out of the race, where that immortal garland is to be run for, not without dust and heat.
To act on a contempt for or impatience with content as such and a desire to get on to the capital value, the usefulness, the leverage, the effect or augmentation implicit in having consumed a thing, with not the taste of it in the moment in mind but the effect of the nutrients in the abstract; to reject enjoyment or supplant it with momentum; to void interpretation in favor of operationalism; to seek frictionless communication and consumption, to pursue the pleasure of efficiency instead of the uncertain satisfaction of interpretation, to surrender responsibility over what we do and desire, to exterminate the subject position and indulge the desire to be a machine, to be done with subjectivity and its unpredictable social integuments and reciprocities — all this is to give up on the possibility of being virtuous. Virtue is supposed to be its own reward, but we're not seeing the metrics for it.