Sorry for the brief interruption in service. I will try to do better going forward!
“Algorithm fatigue”
Last week, I read The Feel of Algorithms, a recent book by Minna Ruckenstein about “algorithmic culture” — a useful all-purpose term for the range of automated systems, predictive analyses, and data-creation protocols being introduced across society. It offers an empirical investigation (in the form of interviews with a few dozen Finns) not of how algorithms technically work but of what living with them feels like, regardless of whether those feelings pertain to accurate assessments of technological capabilities. The aim is to get at how algorithmic culture has been able to become entrenched despite its manifest flaws, inequities, and indignities. How do algorithms come to seem like ordinary infrastructure and not a mode of oppression? As Ruckenstein puts it, how do they come to “define a comfortable life, rather than present an exogenous threat”?
The hegemony that algorithmic culture is in the process of achieving doesn’t depend on what algorithms “really” do, but on the ways we come to tolerate their deployment — what sorts of stories we come to tell ourselves about algorithms to domesticate them or disavow them, to make their threat to our autonomy seem manageable. “From this perspective,” Ruckenstein writes, “autonomy appears as a situational achievement in human-algorithm relations, one constituted by reflective, adjustive, and protective behaviors in relation to algorithms and their imagined effects.” Or to put it slightly differently, we come to be complicit in algorithmic culture; its compromises become our own.
This doesn’t mean we come to celebrate algorithms; rather we systematically misunderstand them and have an imaginary relation to the source of their power — a capital-A Algorithm is posited, with its own foibles and autonomous demands. A classic example of this is the idea that one’s phone is listening to everything one says: Debunking this claim is sort of beside the point, because it speaks to what ubiquitous surveillance feels like while underestimating its actual breadth and intent. The actual population-scale surveillance techniques of tech companies, data brokers, government agencies, etc. far exceed the blunt and specific invasions of privacy we sometimes imagine when snippets of our conversation appear to drive targeted ads shortly thereafter.
That piece of “algorithmic folklore” about phones listening to us, according to Ruckenstein, works as “an attempt to control an environment that is in many ways uncontrollable.” Imagining we are being personally spied on by our devices actually affords a sense of comprehension and mastery; it assumes that we matter as individuals to a system of control that actually doesn’t care about us at all, any more than a thresher cares about a single grain of wheat. It concedes that algorithms are omnipotent and omnipresent in the process of complaining about them, such that resisting their imposition comes to seem like challenging the weather. But it also makes the weather about you, a personal rain cloud to follow you around.
In other words, part of the “structures of feeling” regarding algorithms is that they seem to foreground the importance of the individual — going so far as to surveil them personally — even as algorithmic systems dissolve individuals into fungible data sets. This felt experience reverses the core principle of algorithmic systems, which is to be opaque in their operations (black-boxed) and treat human subjects as externally manipulable objects without internal deliberative capabilities. Algorithmic systems assume that subjectivity must be nullified, because otherwise subjects’ conscious choices will pollute the data and “game the system.” But in the funhouse mirror of ideology, algorithms appear to cater to individualism and personal caprice. Our vulnerability to algorithms — to having our decision-making ability cancelled by them — is then misconstrued or rationalized away as a kind of care or flattery; the larger stakes of who controls the means of datafication become distorted. Datafication itself can then be experienced as a privileged kind of self-knowledge bestowed on us through all the various forms of self-tracking and personalized data dashboards we’re offered — as though the data were being generated solely for our benefit and at our request and weren’t being imposed on us for purposes of classification.
As algorithms process us into forms more suited to bureaucratic control and commercial manipulation, we may then experience this not as domination or alienation plain and simple but as an ambivalent and fluid mix of empowerment and hopelessness. Ruckenstein argues that this ambivalence manifests as an “irritation” or “fatigue” or “friction” with algorithmic systems that testifies to how the “tensions with algorithms cannot be fully erased.” This friction is potentially “a form-giving social force that is trying to communicate to us what, exactly, has gone wrong in algorithmic culture.”
But such form-giving can also reify the irritation, make it a concrete aspect of the world, something that must be accepted if one is being “realistic.” The irritation confirms the hegemony of algorithms rather than exposing their brittleness; or rather it turns their brittleness into an aspect of their inevitability. Ruckenstein defines algorithmic fatigue as “a side effect of deepening algorithmic relations, suggesting that technological systems ignore the skills and know-how that people actually possess to intervene in and steer machinic processes.” But it may be not a side effect but a primary goal of these systems: to induce a weariness with the burden of one’s own agency once it’s shown to be ineffective, marginal, vestigial.
When a self-checkout machine barks incomprehensible instructions even when the scanned items are in the bagging area, or when an automated phone menu apparently can’t parse your voice and you can’t tell if the mechanism is truly flawed or deliberately wasting your time and depleting your will, or when ads and content keep pointing an unflattering mirror at you, irritation both mounts and dissipates into futile personal petulance. I recently experienced this sort of friction when I was trying to get through a passport control machine. It trapped me for a few moments in its airlock because it couldn’t match my face to my photo. (I was wearing glasses that I hadn’t been told to remove.) When it finally released me, I gave it the finger in frustration, an uncharacteristic and petty gesture that could have pointlessly landed me on a watchlist and accomplished nothing but underscoring for me my own sense of humiliation.
In that dismal moment I could feel that the systems meant to process us haven’t “gone wrong” when they embarrass us. They aren’t being refined toward some higher level of seamlessness, once the technology and the data sets improve. Rather they “improve” by relocating the frictions we inevitably feel and giving them no outlet. The indifference of these systems to us and our powerlessness in the face of them becomes, in that moment, the indifference of society and our powerlessness to change it. In a flash, the welling irritation conveys reflexively that solidarity must be impossible in a world where all human relations are machine-mediated.
Automation ultimately isn’t steered by the aim of making things work better or easier for the people it is imposed on; it is meant to divert resources according to the operative distributions of power. It certainly doesn’t redistribute power downward. Rather it systematically puts people in their place. The equilibrium that automated systems settle into isn’t about efficiency so much as ideology: the optimal level of frustration that people can be made to bear without sparking any meaningful resistance. They train us in accepting ambient contempt with a sense of apathy.
“Social media is over”
A recent New York Times article examined Facebook’s efforts to more thoroughly exploit WhatsApp. “If you’re envisioning what will be the private social platform of the future, starting from scratch,” Mark Zuckerberg is quoted as saying, “I think it would basically look like WhatsApp.” The backdrop to this assertion is the growing sense that the earlier era of “public social platforms” (Twitter, Facebook, Instagram) is essentially over, and new practices are emerging to accommodate the shift. Meanwhile, the remaining users of the moribund platforms are being left behind: As this Wired headline has it, “First-gen social media users have nowhere to go.”
But this framework is not always clear about what has really changed and why. Is this shift driven by consumer tastes starting to change, or is it more a matter of depleted business models? Why did people ever want social media in the first place, if it really is becoming apparent now that they don’t want it anymore? The Wired piece declares that a “golden age of connectivity is ending,” yet at the same time Zuckerberg (self-servingly) asserts that “now that everyone has mobile phones and are basically producing content and messaging all day long, I think you can do something that’s a lot better and more intimate than just a feed of all your friends.” What does that even mean?
Perhaps the idea is that we should concede that our friends are boring and their content is bad. They aren’t creators or influencers and it’s hard to sell ads against what they do — its unctuousness contrasts poorly with actual friendly discourse. But if they are making all this bad content all day long, why are platforms struggling? Why do users “have nowhere to go”?
The something “more intimate than just a feed of all your friends” that Zuckerberg yearns for would presumably be some sort of AI agent that knows all your secrets, is willing to talk to you about nothing but you and your needs, and will act on your behalf when it has calculated what is in your interests (or whatever interests the machine serves), though it’s not clear what that would have to do with WhatsApp, which is currently being reimagined as a site for more proactive “click-to-message” advertising and a place where people can chat with brands and salespeople and customer-service bots. Is that so different from “the golden age of connectivity” we’ve already enjoyed?
As with all “golden age” rhetoric, the social media past is being misremembered and the present mischaracterized in order to pretend that “connectivity” was once good but has now been spoiled and “enshittified” by rampant commercialism and inauthenticity. All the amazing sharing and meeting cool people with the same interests has been replaced by sponsored posting, algorithmic siloing, neoliberalistic hustling, and doomscrolling.
But was “connectivity” — the commercialized and gamified form of social interaction developed and incentivized by platforms — ever actually good? Is the codification of sponsored content in the form of creators and influencers such a bad thing? Was the pretense that everyone should post publicly and try to scale and go viral ever in any individual’s best interests?
When social media first began to emerge, it was often cloaked in “convergence culture” rhetoric about empowered prosumers and the righteousness of fans who wanted more representation and participation in the entertainment industry. User-generated content was heralded as a healthy alternative to the prefab pablum being forced on people by clueless culture czars who refused to see the wisdom and genius of everyday people and their homespun originality and authenticity. (Taylor Lorenz’s recent book Extremely Online largely takes this tack, painting the history of the internet as a protracted war of new voices against gatekeepers.) At last people would be able to make and promote cultural products instead of passively consuming them — they could finally perform their ideas and their lives not for their friends (who cares about them) but for the world. Everyone could become the professional content creator that they have always longed to be.
What if, however, most people didn’t aspire to make content, to be content, to pursue fame, to subordinate friendship to commercial practice? The rise of social media was largely an effort to normalize the choices of “prosuming” and ultimately living life as content, which may have enriched the platforms and a handful of individual users but further imposed its terms, its anxieties and competitive pressures, on the billions of the rest of us.
Before social media emerged to elicit and facilitate user-generated content, it was easy to assume that such content was inherently counter-hegemonic, that it automatically constituted a form of resistance — you created your own scene like Maximum Rocknroll told you to. But with social media, it became obvious that prosumption is collaboration with the culture industry and not a blow against it. Social media assured that all the new voices would be heard saying the same things: endorsing the value of celebrity for its own sake, asserting the infallibility of popularity and metrics, pandering to audiences, “keeping it real,” and so on.
The “end of social media,” for those who find it sad, as Kyle Chayka appears to in this New Yorker piece that complains that “the Internet isn’t fun anymore,” seems to mean the end of that fantasy that the internet allows us to find all the cool weird people doing cool weird things without making those people less cool and less weird in the process. But social media was always the process of extinguishing that kind of discovery by systematizing it. There is no social media without influencers because the platforms have only ever existed to create influencers, to offer the promise that there was some better user-generated content to consume than your friends’ texts. There is no such thing, as Rebecca Jennings details here, as “deinfluencing,” any more than there is any such thing as anti-fashion. There are no anti-social-media social platforms, no viable anti-Instagrams.
If, to quote Jennings, the “internet now feels like a place whose sole purpose is selling you something,” that’s because the entire world feels that way. This was social media’s mission: to drive capitalist value seeking into areas heretofore resistant or oppositional to it, the “havens in a heartless world,” if not the inner reaches of subjectivity itself. Why parent when you can be a parenting influencer? Why share with friends when you can entertain followers? Why think things through when you can offer an immediate opinion or reaction and see how people react to that? If the internet was ever “fun,” it was because we enjoyed feeling those ways, vicariously or in actual practice — we liked turning away from friends and toward people as content, talking to friends as though we were making tweets, turning get-togethers into content-making workshops, making content-making the substrate of friendship. It’s been “fun” discovering how anything can be content, and if it feels like the fun is ending, it’s because it seems like we’ve run out of things to force into being content — no one on TikTok can be said to have any illusions about it.
It would be great if social media and its form of fun were truly over, and the commodification of everyday life, having reached high tide, were suddenly to begin to ebb and recede. It would be great if the world were truly bored with “connectivity” and were settling back into ordinary social connection at an ordinary scale. But it seems overly optimistic to think that people have run out of ways to sell each other and themselves out. I suspect the fun’s only beginning.