Inspired by this paper by Ali Alkhatib that I mentioned in a post a few weeks ago, I decided to read James C. Scott’s Seeing Like a State, which I only knew secondhand from summaries in other works. There was more on Soviet and Tanzanian agriculture than I was expecting, but as an analysis of “how certain schemes to improve the human condition have failed,” it certainly has application to current debates about “AI.”
That’s not to say that I think AI is being developed to “improve the human condition”; it seems guided instead by the imperative to extend capitalism’s viability. But even if AI research were motivated by a desire to facilitate human thriving, Scott’s book shows how such intentions have often been betrayed by having a homogeneous group of engineers unilaterally impose their improvement schemes on populations whose experience and insight are dismissed, appropriated and oversimplified, or ignored.
Alkhatib’s paper does a good job applying Scott’s framework to bureaucratically or commercially imposed algorithmic systems:
In creating and advancing algorithmic recommendation and decision-making systems, designers consolidate and ossify power, granting it to AIs under the false belief that they necessarily make better or even more informed decisions. Part of the harm surfaces when the data or the code, or both, go uninterrogated before and during deployment. But part of the harm emerges later, as these systems impose the worldview instantiated by the “abridged maps” of the world they generate; when they assume “a flattened, idealized geography where frictions do not exist” and crucially when those maps cannot be challenged by the people most affected by the abridgments systems make.
The belief that engineers can program a general decision-making machine that can make better decisions than the people directly involved and invested in particular situations exemplifies what Scott calls “high modernist ideology,” by which he means, as Alkhatib explains it, “the unscientific optimism about the capacity for comprehensive planning to improve life.” The point of such planning is to save ordinary people from having to make decisions about their lives and the world, as if this were to bestow a blessing of convenience on them, preserving and sanctifying their ignorance and impotence, freeing them to enjoy creature comforts. Shouldn’t an algorithm just recommend everything to everyone?
Scott argues that these kinds of megalomaniacal projects ultimately fail to achieve their putative goals because of how they denigrate the informal, undatafiable “local knowledge and know-how” on which social processes ultimately depend. Since “no administrative system is capable of representing any existing social community except through a heroic and greatly schematized process of abstraction and simplification,” Scott writes, these systems always fail to account for the tacit and dynamic forms of knowledge that make systems workable. “A human community is surely far too complicated and variable to easily yield its secrets to bureaucratic formulae,” he avers. (This is one way of interpreting Kant’s idea of the “aesthetic.”)
But of course, the makers of AI systems confidently insist that this is not the case — that human community is reducible to data sets. All the human practices that involve language use can be fully simulated and encompassed in a formula that a company can own and that bureaucracies can customize and deploy for their own purposes. Tacit knowledge can be compelled to speak itself. All the claims regarding “AGI” are meant to reassure us that models are not reductive at all; in fact, they can completely simulate not only our historical reality but countless possible alternate realities — they possess the tacit knowledge that must be prompted out of them. So we are invited to see AI as capable of filling in the blanks of our inadequate understanding of the world, autocompleting that picture from its unfathomable billions of data points and parameters.
AI makers, just like any other proselytizer for automation, ultimately want their customers to believe that their systems can obviate situated knowledge and disempower those who possess it. Scott writes that
it would be a serious error to believe that the destruction of metis [his preferred term for situated knowledge] was merely the inadvertent and necessary by-product of economic progress. The destruction of metis and its replacement by standardized formulas legible only from the center is virtually inscribed in the activities of both the state and large-scale bureaucratic capitalism.
Such centralization of ability and know-how is inscribed in the development of AI as well.
As a "project," it is the object of constant initiatives which are never entirely successful, for no forms of production or social life can be made to work by formulas alone — that is, without metis. The logic animating the project, however, is one of control and appropriation. Local knowledge, because it is dispersed and relatively autonomous, is all but unappropriable. The reduction or, more utopian still, the elimination of metis and the local control it entails are preconditions, in the case of the state, of administrative order and fiscal appropriation and, in the case of the large capitalist firm, of worker discipline and profit.
That seems like a concise summary of what the AI hype is mainly about: eliminating local knowledge. Like “the builders of the modern nation-state,” AI makers intend not to “merely describe, observe, and map; they strive to shape a people and a landscape that will fit their techniques of observation.” They try to create a world seen as fully quantifiable — “a slide-rule authoritarianism,” to borrow Scott’s phrase — and impossible to understand in any other way — a world in which we can only make data and not decisions. AI systems are thus optimized for dependency.
Extrapolating from Scott’s “high modernist” examples, however, suggests that where AI systems are deployed, new forms of local knowledge will eventually spring up around them to make them seem to work (an extension of the “ghost work” that already goes into AI). Scott speculates that “the greater the pretense of and insistence on an officially decreed micro-order, the greater the volume of nonconforming practices necessary to sustain that fiction.” From that perspective, AI systems will not eliminate or absorb tacit knowledge as implicitly promised — they are presumably too static and schematic to anticipate every contingency — but reveal new areas where that necessarily social know-how can be developed, at the limits not of what the technology can do but of what can be datafied and what can be socially tolerated. Forms of accommodation and obfuscation will be developed to allow a local system based on human interaction and spontaneous adaptation to co-exist with an automated system that makes no room for it.
But this will only take shape at the expense of the suffering inflicted especially on marginalized people who are most vulnerable to automated systems, most likely to be misrepresented in datasets. As Alkhatib puts it, “In the process of training a model, the algorithm creates its own world … and imposes its model upon the world, judging and punishing people who don’t fit the model.”
This ties into what I found striking about Scott’s description of “the carriers of high modernism”: that they “tended to see rational order in remarkably visual aesthetic terms” — that for them (he is thinking especially of Le Corbusier) “order” was supposed to look a certain way rather than, say, yield an experience of justice.
This, Scott implies, stems from the assumption that people and practices must be made legible to be tamed, controlled, managed, accounted for. A “clean,” rectilinear, minimalistic arrangement reflects an overall simplification of what matters in an environment, a simplification that would seem to transfer to the inhabitants within it, so that their lives could be held to be fully datafied by capturing only a few key variables. The aesthetic of “orderliness” begins to become an end in itself, as if visual order simply were political order and justifiable on those terms. In describing a Tanzanian agricultural reorganization plan, Scott writes:
If the plans for villagization were so rational and scientific, why did they bring about such general ruin? The answer, I believe, is that such plans were not scientific or rational in any meaningful sense of those terms. What these planners carried in their mind's eye was a certain aesthetic, what one might call a visual codification of modern rural production and community life. Like a religious faith, this visual codification was almost impervious to criticism or disconfirming evidence.
People’s lives are unraveled, warped, and reshaped merely so that they will conform to some powerful person’s idea of what looks right.
Algorithmic feeds have long tended to work similarly. Just as “high modernism” aspired to abolish visual difference, algorithmic systems incentivize various forms of conformity and normativity (while intensifying the possibilities for forms of opacity and subversion). A planner doesn’t set the aesthetic; one emerges from the historical biases and commercial imperatives that govern how content circulates. Instagram’s history of favoring minimalist aesthetics is a pretty literal example of how imposed algorithms settle on certain streamlined depictions of order and elicit them from subordinated populations.
Feeds that have come to feel “dead” for many users echo the sepulchral feel that Scott associates with planned cities, whose design cannot accommodate the “disorderly” spirit of life as it is actually lived. The attention-optimized algorithmic feed is like the opposite of the Jane Jacobs-ian streetscape, which facilitates diverse uses and ad hoc encounters and so on. Generative images can be interpreted similarly; they manifest the visual idea of order that underlies the authoritarian tendencies of AI systems. One can work against these images of order, but the systems are designed to default toward normative renderings of concepts — what power believes “looks right.”
Scott argues that the visual aesthetic of “high modernism” reflects the planners’ desire to play god. It facilitates a distant view from which one can observe the little people conforming to the plan: “only someone outside and above the display can fully appreciate it as a totality; the individual participants at ground level are small molecules within an organism whose brain is elsewhere.” Uber executives probably felt that way when they used to use the “God view,” and one imagines that tech engineers feel similarly when they tweak an algorithm and watch the data on users’ aggregate behavior shift. “Such displays,” Scott suggests, “may, in the short run at least, constitute a reassuring self-hypnosis which serves to reinforce the moral purpose and confidence of the elites.”
I wonder if generative images have a similar function for the people who seem to loudly enjoy them on various platforms. These images must seem self-evidently correct to them, as though they were Platonic forms, and they must believe the images will convert anyone else who sees them to a belief in AI’s inevitability. They must see “moral purpose” in the images’ seeming to come from the future, the future they are investing in and hope to profit from. I’d like to think also that most people, sensing this, feel a moral revulsion when they look at these kinds of images, recognizing that they signify the systematic extermination of difference.
All through reading this, I kept thinking about project management processes like agile.