Your choices
Last week, Facebook executive Nick Clegg posted a defense of his current industry: "You and the Algorithm: It Takes Two to Tango." The cliche in the title gives a fairly good indication of how the argument proceeds. One can only assume he considered "Algorithms Don't Hurt People: People Hurt People," "If Facebook Told You to Jump Off a Cliff, Would You Jump?" and "Nobody Put a Gun to Your Head and Forced You to Use Facebook" as alternates.
Aside from some preposterous assertions about social media advancing the unfolding of human spirit in history ("turning the clock back to some false sepia-tinted yesteryear — before personalized advertising, before algorithmic content ranking, before the grassroots freedoms of the internet challenged the powers that be — would forfeit so many benefits to society..."), Clegg's basic point is that algorithms aren't agents. They simply react to what users do: "You are an active participant in the experience." As a result, he suggests, algorithms don't amplify or produce tendencies in users but simply reflect our "natural" predilections in individually tailored products. The algorithms are there to reinforce our own pre-existing sense of what is "meaningful" to us, as computed from our data.
Thousands of signals are assessed for these posts, like who posted it, when, whether it’s a photo, video or link, how popular it is on the platform, or the type of device you are using. From there, the algorithm uses these signals to predict how likely it is to be relevant and meaningful to you: for example, how likely you might be to “like” it or find that viewing it was worth your time. The goal is to make sure you see what you find most meaningful — not to keep you glued to your smartphone for hours on end.
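Taken at face value, this describes a familiar engagement-prediction pipeline: score each candidate post on a handful of behavioral signals, then sort the feed by the predicted score. A minimal sketch of that logic, with invented signal names and weights (Facebook discloses neither), might look something like this:

```python
# A hypothetical sketch of "algorithmic content ranking" as Clegg describes it:
# each candidate post is scored on behavioral signals, and the feed is the sort order.
# Signal names and weights are invented for illustration; Facebook publishes neither.

from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often the user interacts with this poster
    popularity: float       # engagement the post has already received platform-wide
    is_video: bool          # content type
    age_hours: float        # how long ago it was posted

def predicted_meaningfulness(post: Post) -> float:
    """Predict how likely the user is to 'like' the post or find it 'worth their time.'"""
    score = 2.0 * post.author_affinity
    score += 1.5 * post.popularity
    score += 0.5 if post.is_video else 0.0
    score -= 0.1 * post.age_hours  # older posts decay
    return score

def rank_feed(candidates: list[Post]) -> list[Post]:
    """'Making sure you see what you find most meaningful' cashes out to a sort."""
    return sorted(candidates, key=predicted_meaningfulness, reverse=True)
```

The particular weights are beside the point; what the sketch makes plain is that "meaningful" is whatever the scoring function says it is, a definition the user never sees and cannot edit.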
Never mind that what is "meaningful" here is defined as what is "worth your time" and thus would extend your time on the site for "hours on end" if enough "meaning" were provided. Clegg spends a lot of time insisting that Facebook's incentive is not to increase user engagement with "sensational" content — which basically means it seeks to train users to find meaning in the kinds of content that advertisers are more comfortable with. (Let us replace your divisive politics with inclusive consumerism!)
But the key point is that "meaning" and "what you want" are held to be better inferred from Facebook's surveillance of your behavior (whose limits are never delineated) and unspecified proxies than from your explicit requests. You are not asked what you want, because Facebook assumes that you, by nature, enjoy being catered to, enjoy being made passive, can't be bothered with sorting and selecting what you want to pay attention to from among the abundance of content Facebook has chosen to inundate you with under the alibi of "more speech."
What you say you want, Facebook implicitly assumes, would just be your occluded mind lying to itself. Instead, your behavior is taken to generate "revealed preferences" that are the truth about your "nature," as if the assumptions and affordances encoded into the interface don't shape or guide your actions at all. As a result, a behaviorist assumption (people can't be trusted to articulate their truth) is used to try to refute behaviorism: No system is capable of manipulating you; algorithms can only reveal what is always already true. When these algorithms reveal that "human nature" is drawn to "sensationalism" (a somewhat tautological assertion, since "sensationalism" is defined as that which secures attention), Facebook beneficently steers users toward their better angels. (This thread by Daniel Kreiss provides a critique of Facebook's approach to "reducing polarization," which is meaningless independent of a concept of social justice.)
None of that addresses the underlying point that "engagement" is something that emerges through Facebook's measurement practices and is structured by the consumption environment the company creates. "Liking" content is not some free choice; it is a reaction induced by the interface and the set of practices that have been normalized around it. The meaning of a like is contained and circumscribed by Facebook and is functionalized in opaque ways that users can't control, just like all other user behavior that Facebook captures and instrumentalizes. Users are not permitted to negotiate over what behavior should count as their expression and how it should be interpreted, as they generally can do in their direct interactions with one another. Instead, the company unilaterally objectifies them. It assesses who we are and provides a tailored environment designed to prevent us from autonomously changing anything about ourselves. We are obliged to "train our algorithm," which is to say, we are expected to let it take over and train us.
Social media companies have claimed for years that algorithmic sorting increases user engagement, which supposedly proves the companies are giving people what they want by implementing algorithmic recommendation systems. The longstanding retort to this self-serving claim is that ceaseless engagement is what advertisers want, not users. The fact that advertising exists at all presumes that people's behavior can be systematically modified. Advertisers fund social media not to liberate communication and help humanity become free; they fund it because it captures humans within incentivized systems and forms of discourse that make them more susceptible to persuasion, more malleable (and not necessarily more polarized or tribalistic). The logic of social media sorting algorithms acculturates users to what sorts of attention should be seen as rewarding, rewardable; it teaches users to aspire to be "influencers" — i.e. human advertisements. It encourages users to adopt the prerogatives of advertising as their own ethical system: The purpose of life is to develop clout. Be the best NFT you can be.
Both advertising and algorithms tell us what we are supposed to want, which can be reassuring and efficient — a shortcut around more arduous forms of social communication that involve encounters with actual other people. Ads and algorithms can make us feel as though we are operating within the norms of society, when in fact we are only within the norms set out by these systems themselves, norms that achieve their social prominence through strenuous expenditure. At that point, advertising finds itself capable of defining deviance.
Clegg concludes with this condescending piety: "In many respects, it would be easier to blame everything on algorithms, but there are deeper and more complex societal forces at play. We need to look at ourselves in the mirror, and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along." In a sense, he's right to say we shouldn't blame algorithms for the uses companies like Facebook put them to. As Ted Chiang argued in this interview with Ezra Klein (which I saw excerpted in this tweet), "Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two."
The "we" Clegg instructs to look in the mirror, then, should refer to tech companies, not those subjected to their dominance. Machines aren't manipulating us, but tech companies are. They take "false comfort" in the alibi that an abstracted and reified concept of technology supplies. The "more complex societal forces at play" begin with the capitalist relations of production and the means of production that capital has develop to further its interests over and against society. It is enough to make you wonder if the technology of global-scale social media networks would have ever been nurtured by venture capital if they couldn't have been put to exploitative uses. But I suppose to wonder about that means I am trapped in sepia-toned yesteryear.