Days of futures past
Earlier this month, The Information reported on a VC-funded startup called Nate that purported to use "AI" to fill in users' contact and payment information for them but was in fact using workers in the Philippines. One might simply and rightly take away from this — yet another example of what Astra Taylor described as "fauxtomation" — that claims about the capabilities of "artificial intelligence" are often exaggerated to take advantage of venture capitalists and other investors.
But conversely, one might also say that "artificial intelligence" simply means "disempowering workers," not replacing them, and it loosely applies to any situation where technology is employed to that end. In other words, "artificial intelligence" doesn't mean machines that can think; it means machines that facilitate exploitation. Startups, from this perspective, would tout their use of "AI" not to show off some fictitious innovation but to indirectly communicate their intention of squeezing workers as mercilessly as possible (though that would presume such squeezing wasn't already a given for any capitalist firm).
As Claudia Aradau and Mercedes Bunz put it in a survey of critiques of AI in Radical Philosophy, “the language of AI itself is used to signify technological rationality and market value rather than as a definition of a specific range of technics.” That is, “AI” is not a particular capability of machines but more like an ideology, a way for people to misrecognize an evolving structure of domination. Fixating on AI's technical potential (or its ludicrous failures and frauds) means missing or taking for granted the intent to dominate. “Rather than seeing AI as a high-tech autonomous weapons system that is a killer robot, or an automated facial recognition system,” they write, it should be regarded as “a distributed socio-technical system that is always already produced, circulated, maintained and repaired through dispersed, intensive and underpaid labour.” From that perspective, AI hasn’t only just recently emerged but has — under different names — been fundamental to subsuming labor under capitalism since its beginnings in industrialization.
Aradau and Bunz cite Herbert Marcuse, who argued in a 1965 essay, “Industrialization and Capitalism,” that “specific purposes and interests of domination are not foisted upon technology ‘subsequently’ and from the outside; they enter the very construction of the technical apparatus ... Technology is always a historical-social project: in it is projected what a society and its ruling interests intend to do with men and things.” Or more bluntly: “The very concept of technical reason is perhaps ideological. Not only the application of technology but technology itself is domination (of nature and men) — methodical, scientific, calculated, calculating control.”
Predictive technologies, as Sun-Ha Hong details in a recent paper called “Prediction as Extraction of Discretion,” are no different. Though sometimes proposed as freeing us from the limits and subjectivity of human decision-making, this kind of “AI” is instead a perfect current example of “technical reason” and its inherent complicity with oppression. The systems are biased not only in their execution (bad data) but in their conception.
Prediction, as Hong makes clear, is just another word for control, though he tends to use the less intuitive word discretion. What he is writing about is the power over how a particular situation is understood: what it means, what is significant in it and what can be measured about it, who its subjects and objects are, and how that distinction can be enforced as mutually exclusive. These are some of the vectors by which AI systems reshape what they purport to be modeling. As Hong puts it, discretion “describes the always unequal distribution of the power to define the situation: which bodies are immediately and tightly fastened to the regimes of visual surveillance and statistical correlation without appeal, and which bodies retain wriggle room, perhaps through the aid of a sympathetic human clerk and other forms of cultural capital.”
Algorithmic systems don’t simply start with data and “the facts,” as though these are just lying around, unwarped by the intentions of whoever was in a position to gather them. “The grammar of prediction tends to prefer particular kinds of data and relations over others: a preference driven not purely on epistemic grounds, but by economic and institutional conditions that make some datasets and problems more available than others,” Hong notes. "Reality" doesn't automatically translate into some "correct" data set. It always exceeds what can be measured, and what is measured becomes a kind of argument for a particular understanding of reality.
Likewise, AI systems don’t begin on neutral ground and dictate “just” outcomes, extrapolating “natural” or “unmistakable” implications and correlations from the inherently flawed data they make use of. They are means for taking existing inequalities in what Hong calls “epistemic power” (who chooses what counts as knowledge) and both naturalizing and extending them, transferring power from those subjected to the systems to those already powerful enough to impose them. “Prediction is so valuable precisely because it offers both the technical mechanisms and the logic of justification through which pre-existing extraction of discretion can be replenished,” Hong writes. “This motivation is the starting point for the shape that predictive systems tend to take today — not the side effect.” AI is not trying to make decision-making “better” or more fair but to rationalize existing asymmetries, if not exacerbate them.
Making something appear as “predictable” means making it an opportunity to strip other people of their power and autonomy. As Hong explains:
Prediction is as much a way of thinking and talking about how we make facts, and who declares those facts about whom, as it is a set of calculative techniques. The struggle over these concepts – and the moral attitudes, affective orientations, and other pictures of the world embedded into them – shape our perceptions of what kind of technological arrangement is ‘inevitable’, or what kinds of reform, abolition, and alternatives are considered ‘plausible’.
“Prediction systems,” then, are not really about determining future likelihoods; they are about guaranteeing the ruling class's preferred outcomes. Atop the hierarchy, life becomes more predictable: subordinates do what they are told. For those below, life becomes less predictable and more chaotic as they are forced to accommodate decisions that appear arbitrary and are often presented with no justification or possibility of recourse. Computer says no.
Hong mainly discusses algorithmic management systems in criminal justice and in warehouse and gig labor, where their exercise is especially malign. Common to these systems is the presumption of guilt in the surveilled and managed: “The promotion of ‘cutting-edge AI solutions’ leverages and amplifies a pre-existing fantasy that criminalizes the targets of prediction,” Hong writes. “The worker is presumed to be, by default, a potential thief (of wages via low productivity, or directly of company property).” Hong cites an example suggesting that for employers administering these systems, “the worker is the suspect, and it is this a priori declaration that determines what role the data will play to begin with.” Data collection is oriented toward confirming guilt, with the AI system's effectiveness serving as an alibi.
Remote proctoring systems similarly presume students are cheats, Ring doorbells presume that neighbors are thieves, and so on. The systems seek to reproduce the distrust they are premised on, looking only for the sort of evidence that would rationalize their deployment. At the same time, they foster a climate of distrust, in which it would be foolish not to surveil other people when you can or try to discipline their behavior with monitoring systems.
I think algorithmic recommendation could be understood in this way too: When streaming services push specific content on users, or when feeds arrange content in terms of predicted user preferences, this is part of the same climate of distrust. When we use these services, the idea is inculcated in us that we should not trust ourselves, that we should habitually see ourselves as in the wrong and in need of correction. The data about our “preferences” isn’t being collected to help us; it is being collected to reinforce that self-mistrust, to facilitate the transfer of our discretion to those services so that they can extend their regimes of dominance over us.