This paper by Ignas Kalpokas has the sort of title that I find irresistible: “Work of Art in the Age of Its AI Reproduction.” Sadly, it doesn’t live up to what that title promises; it doesn’t provide a thorough historicization of “AI” that draws on and illuminates Benjamin’s dialectic of mechanical reproduction and the “aura” of particular works. Instead it makes two interrelated claims that I find dubious: that generative models (1) reveal the “collective unconscious of society” and (2) undermine the individual human subject to make way for “more than human” ontologies.
Admittedly, I don’t fully understand the stakes or the terminology of the “posthuman” turn. In my perplexed and defensive attempts to make sense of those kinds of arguments, which I have mostly encountered secondhand, I almost certainly fall into any number of anthropocentric traps that expose me as an entrenched reactionary. The arguments usually dwell on the idea that thinking in terms of human “agency” is crude and short-sighted; instead, capacities should be regarded as being distributed among humans, systems, and other animate or inanimate objects, though that distinction doesn’t seem to matter. By revealing the broader spectrum of actants in the world, human fantasies of sovereignty and self-sufficiency are exposed, the underpinnings of intellectual property are dismantled (since no one can claim full responsibility for something), and the category of the human itself, which has been used to rationalize modes of exploitation, repression, and exclusion, is demystified.
These kinds of analyses tend to proceed as though recognizing various human-machine interdependencies and assemblages were sufficient to redistribute power among the humans caught up within them. Or rather, there is no point in redistributing power, because power itself is more or less an illusion, part of a flawed way of conceiving the world in terms of “subjects” and “objects” instead of “relations.”
Posthumanists are, in the broadest sense, targeting the Western philosophical attitude epitomized by this kind of passage, from Kant’s Critique of Judgment:
Now of the human being (and thus of every rational being in the world), as a moral being, it cannot be further asked why (quem in finem) it exists. His existence contains the highest end itself, to which, as far as he is capable, he can subject the whole of nature, or against which at least he need not hold himself to be subjected by any influence from nature. — Now if things in the world, as dependent beings as far as their existence is concerned, need a supreme cause acting in accordance with ends, then the human being is the final end of creation; for without him the chain of ends subordinated to one another would not be completely grounded; and only in the human being, although in him only as a subject of morality, is unconditional legislation with regard to ends to be found, which therefore makes him alone capable of being a final end, to which the whole of nature is teleologically subordinated.
In short, the meaning of life is for individual humans to exercise their freedom (the only thing that is not mechanically predetermined), which means doing their duty by one another, and everything else in the world is subordinate to that. Human freedom is grounded in an unequivocal and unapologetic anthropocentrism: If freedom is real, then the world exists for us. Every entangled relation with objects is a kind of heteronomy that is occluding our fundamental purpose. Posthumanist theory presumably wants to ground the idea of freedom in something other than the explicit subordination of the world to specifically human aims that Kant authorizes, but often it seems to me that its proponents would like to do away with freedom altogether, and to claim that human intelligence is the most artificial of all. (This is one of the reactionary traps I fall into.)
Since we will never finish the project of fully articulating how our agency is situated and circumscribed by other material forces and nonhuman agents, it seems there will never be any point in trying to exercise it in a meaningful way. The only thing to do is to trace the kinds of assemblages that are at work, a view that corresponds with what AI companies seem to be pursuing: a mapping of relations among objects in the world that doesn’t privilege or facilitate human behavior but instead seeks to explain it, predict it, and ultimately control it.
Some posthumanists are drawn to AI because it appears to make the machine-human assemblage idea explicit and unmistakable. AI systems, by their very existence, do the work of revealing our entanglements with other kinds of agents and demonstrate the folly of human sovereignty and the arrogance inherent to the idea of “individual human creativity.” A relatively clear statement of this idea can be found in this essay by Martin Zeilinger, which argues that generative adversarial networks (used in image generation and some early text-to-image models) are “best understood as aligned with a progressive (posthumanist) notion of expressive agency that contradicts romantic ideals of creativity and originality, and which, in doing so, also challenges the cultural logic of intellectual property.”
Zeilinger offers this “close reading” of how the models work:
Despite the nominally adversarial nature of the interaction between Generator and Discriminator, the two discrete units work in tandem to form what can be described as a sophisticated appropriation machine, capable of approximating style, content, and other desired qualities of the training materials. In my mind, it is actually thanks to this capability that GANs bear resemblance to the creative minds of human agents: not in the traditional sense of the spirited original genius figure producing unique creative works, but rather in a more progressive sense that proposes creativity as fundamentally relational, embedded, and dialogic. To turn things on their head a bit, following this logic it is entirely feasible to describe human creativity itself by borrowing from the conceptual register of technical descriptions of machine learning.
So the GAN “challenges assumptions of the centrality and supremacy of a unified, singular, spirited human artist and their unique ability to create original expressions.”
But why do you need a machine to model the essential sociality of creativity? Why champion the redistribution — centralization, really — of agency to profit-seeking systems under the control of giant tech companies if not to the system of “capital” itself?
The description of how GANs work here is obviously an anthropomorphic model of social practice, where two software components are turned into characters, the generator and the discriminator, who appear as human-like figures with discernible, competitive motivations. Zeilinger later argues that “the ‘adversarial’ interplay (or intra-action) between Generator and Discriminator may appear to project a kind of split personality, a simple competitive duality revolving around ‘copy’ and ‘original’, ‘fake’ and ‘real’; but more importantly, it also represents a decentered agential assemblage that will not and cannot conform to the conventions by which the unified agency of the singular human artist figure has traditionally been identified.”
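The adversarial dynamic being anthropomorphized here is, mechanically, just a loop of mutual updates. A deliberately toy sketch of that loop follows; it is not how real GANs are trained (no neural networks, no backpropagation), and every name in it is my own illustration: the “generator” is a single number mu, the “discriminator” a threshold refit each round, and the gradient step a crude nudge.

```python
import random

random.seed(0)

REAL_MEAN = 4.0  # the "training data": samples drawn from N(4, 1)

def real_samples(n):
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

def fake_samples(mu, n):
    # Generator: a single parameter mu, producing samples from N(mu, 1)
    return [random.gauss(mu, 1.0) for _ in range(n)]

mu = 0.0   # generator starts far from the data distribution
lr = 0.1   # fixed step size

for step in range(200):
    real = real_samples(100)
    fake = fake_samples(mu, 100)
    # Discriminator: refit a decision boundary (midpoint of the two sample
    # means) that best separates real from fake on this batch
    boundary = (sum(real) / len(real) + sum(fake) / len(fake)) / 2
    # Generator: nudge mu so its samples land on the "real" side of that
    # boundary -- a crude stand-in for backpropagating through the critic
    mu += lr if boundary > mu else -lr

print(f"generator mean after training: {mu:.2f}")  # settles near REAL_MEAN
```

The point of the sketch is how little drama there is in it: “adversarial” names an optimization schedule, two components taking turns minimizing and maximizing the same score, which is what makes the leap to “split personality” or “decentered agential assemblage” a choice of metaphor rather than a description.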
So if we replace the “personality” metaphor with “decentered agential assemblage” we can apparently proceed as though this is no longer anthropocentric. But why bother? It seems more important to highlight how AI models hide the human efforts that support them, and the real human beings who have been reified into data and who suffer the social consequences of that reification, than to emphasize how AI explodes the fantasy of the autonomous human subject. You can just as easily close-read the GAN as epitomizing the serial, systematic practices of discrimination and classification that sustain social hierarchies, normalizing the idea that every individual can be unilaterally described from the outside in reductive terms they can’t control, and that they are not only not really creative or intelligent but are also no more than a set of numbers, entitled to no treatment other than being processed along with all the other number sets.
Kalpokas cites Zeilinger and takes a similar line in his paper. After quoting Leonardo Arriagada’s claim that “AI-generated art ‘is fundamentally based on Big Data, which is the most social thing we have,’” he goes on to argue that
Big Data are constantly multiplied as endless digital decorporealizations of the world while simultaneously challenging the traditional Western understanding of human primacy and, instead, moving towards an understanding of life as ‘an ongoing composition in which humans and non-humans participate’. In this way, everyday realities acquire a ‘more-than-human’ character, one marked by assemblages and interembodiments of human, digital (data and algorithms/AI) and physical summands … AI-generated art is based on ever-morphing recursive relationships whereby flows and modifications of data act as socio-technical glue, never stable but always in the process of affecting and being affected in return.
Here we are invited to understand life as “an ongoing composition in which humans and nonhumans participate,” as opposed to life as a matter of humans imposing their will on the nonhuman, as Kant posited. AI is taken to serve as a material demonstration of a flat ontology of sorts, the “more than human character” of “everyday realities.”
This perspective, then, entails viewing data as “the most social thing we have” rather than as the antithesis of sociality. I usually treat datafication as the denial of the social, a way to extinguish any mystery about “social facts” and reduce them to aggregated information about discrete individuals, who are indelibly placed into various hierarchical classifications. But here data is proposed as a kind of social partner in and of itself as well as a capture of social practice — not as a tool in the hands of discriminators and generators but as an agent without intentionality. The experience of being subjected to data-based forms of control is reimagined as “the more than human character” of “everyday realities” — just inevitable facts about the structure of being in the universe, a salutary reminder of our lack of autonomy.
Data flows are understood not as surveillance and expropriation imposed through asymmetric power relations but as “glue” that holds the human-machine assemblage together (even though such assemblages in some form are inevitable anyway). The feedback loops created by automated processes are discussed as though the fact of their existence is the important thing to note and not the means by which they are imposed and for whose benefit. It’s not that the models supplant human creativity (or make more exploitative labor arrangements possible) but that they show us that creativity belongs to everything and nothing.
The view that “data is social” rather than antisocial, an agent rather than an exploited resource, leads to the supposition that generative models put social life on display rather than obfuscate its centrality to thought and hide it behind a simulation. “By making the layers of data and trends and patterns therein visible, AI-generated art reveals the collective unconscious of today’s society, including its deep human-machine entanglement,” Kalpokas writes. “That might in itself be a source of value pertaining to AI-generated content — a revelatory capacity to render visible the actual digitally enmeshed and entangled conditions of life in contemporary societies.” But making these conditions visible (if that were, in fact, what generative models do) doesn’t mean they will be interpretable. The scale at which the algorithms are operating assures that they aren’t.
AI renders the patterns it detects, which may or may not mirror opaque social relationships, in a falsely legible form, as if its representations were the answer to a human question. But the representations the models offer can’t be contextualized; they can’t be treated as meaningful data at face value. And making entangled conditions visible does not in itself make them more just. The questions of working toward better conditions, or what the criteria for those conditions might be, are not raised; it is apparently enough to be disabused of the idea that human subjects are sovereign. (If we stop trying to hold individuals accountable for outcomes in the world, all the things that would seem to demand accountability will apparently disappear.)
Kalpokas suggests that “one should see the input of AI … as a demonstration of how social practices ‘are always situated in the lively web of interdependencies,’” because “only in this way is it possible to move beyond the narrative of competition between humans and machines and opt for a more egalitarian, interdependent relationship.” But the competition is not between humans and machines, but between humans and other humans. There is no “egalitarian” relationship with objects because objects don’t have demands, unless one performs some metaphysical contortions to construe the human intended use of objects as an expression of the object’s rights in the abstract. (Another reactionary trap I fall into.) Automated decisions are not necessarily more egalitarian; they are far more likely to be implemented (by people organized into corporations or bureaucracies) to prevent decisions from being scrutinized, to make them less democratic. AI is a weapon in the competition between humans, and the political arrangements that make the technology possible in the first place (centralization, ubiquitous surveillance, etc.) assure that it will shift even more power to the strong against the weak. The weak would be wise to regard AI as an enemy and not an equal partner in some sociotechnical assemblage.
Using the generative model as a metaphor for interdependency obscures the complexity of the human relations and the asymmetries of power that underlie all AI and all creativity. To explode the hegemony of possessive individualism and the cult of the individual genius, we are invited to embrace black-boxed algorithms as a lesson in how everything and everyone is connected in ways we can’t be bothered to try to understand.
This seems to swing the pendulum too far, especially in the face of how much more intensively life has been “entangled” with algorithms deployed to control us. There seems to be less danger in imagining ourselves to be singular artist-creators than in our accepting that even the minuscule amount of agency we believe we can exercise should be surrendered to automated processes. But of course, those contradictory feelings are interconnected and intensify each other: The solipsism of being a lone “creator” lends itself to a view of the world as being composed of fully determined monads. Using generative models to expose our autonomy as an illusion seems unlikely to awaken our sense of social connectedness and the importance of social practice; it instead invites subordination to machines (i.e. the megacompanies that run them) that aim to abolish collective as well as individual agency.