In scrolling through various year-end lists of what’s to come in 2023, I have seen several predictions about the coming commercial exploitation of generative AI models (large-language models, text-to-image generators, predictive recommenders, “algorithmic culture” in general), which many commentators insist are on the cusp of being further integrated into all sorts of services and production processes. Of course, similar predictions were made about blockchains and crypto, but unlike those, generative AI models at least use their enormous amounts of electricity to do more than make Ponzis.
With crypto, the appropriate response has always seemed obvious: Point out that it is a scam. The only mystery there is figuring out whether the people who aren’t saying this are complicit or delusional. With generative AI, though, the situation seems more complicated. The discussion about it is certainly bloated with misleading and dangerous hype, but it’s not merely a scam. Certain critiques demand reiterating. For instance, this April report from the University of Michigan’s Technology Assessment Project highlights some of the established concerns with large-language models: They are inherently inaccurate and biased; their size and scale mean that only large corporations (or states) can develop and administer them, exacerbating various forms of marginalization and inequality; they have a vast, detrimental environmental impact; they intensify the demand for more pervasive and invasive kinds of surveillance; and so on.
But despite such well-founded criticism, the hype for large-language models continues to intensify, often predicated on a supposed qualitative breakthrough in what they can accomplish that will turn the critique into cavils. There appear to be actual use cases that non-scammers can readily grab hold of, adding momentum to the idea that some threshold has been or will be crossed. At that point the models’ output will be accepted as good enough for whatever purposes they are being put to, and the onus will be on us to adapt to and accommodate their shortcomings. The models will subsume aspects of design, education, programming, training, and so on, and become de facto arbiters of standards and labor practices in those fields.
More generally, the fact that AI models will give plausible answers to any question posed to them will come to matter more than whether those answers are correct. Instead, we will change our sense of what is plausible to fit what the models can do. If the models are truly generative, they will gradually produce the world in which they have all the right answers in advance.
From the perspective of the autonomous-vehicle developer, human pedestrian behavior is an irrational anomaly and a threat; the vehicles are “autonomous” only to the degree that human autonomy can be disciplined and constrained (we must fence in the sidewalks!). Likewise, human behavior in general may come to seem irrational or dangerous when it conflicts with what AI models predict or assert. Despite the models’ fundamental confabulation, we, and not the models, will be deemed wrong once they are sufficiently integrated into everyday life. The models will be providing accurate information about their own parameters and associations, and this will matter more than anything that occurs outside them.
The underlying implication is that our understanding of reality itself is already not grounded in anything firm but is entirely malleable, and that data collection is not about producing a better understanding or a clearer, more accurate representation of a world that really exists but about developing the means and mechanism for producing a socially convincing world that can be controlled in advance. Generative AI uses past data to produce simulations that are meant not to re-create but to supplant “reality,” which is not something given and empirically fixed but a mediated social construction shaped by power and, in our era, capital. AI does not produce a fake or counterfeit reality so much as reinforce a different definition of what reality is and how it can be understood.
That, anyway, is one way to extrapolate and apply Jean Baudrillard’s theory of simulations and “hyperreality,” a term that has always somewhat confused me but becomes much clearer when I think of it as another way to describe the images and spurious texts that generative models can disgorge. Ever since Baudrillard’s Simulacra and Simulation was famously deployed in The Matrix, it has been easy to think of “simulation” in terms of virtual reality and cyberspace, as an immersive alternative world we enter into, leaving the “real world” behind. But his earlier work, particularly the “Order of Simulacra” chapter from Symbolic Exchange and Death (1976), makes it especially clear that “computer simulation” is not something on a screen but a matter of how “the real world” itself comes to be seen: as no more than code, made up of information rather than matter (or of matter understood as no more than encoded information). This mirrors the generative AI premise that data about the world is in itself sufficient to understand and reproduce it.
Baudrillard seems to reach that conclusion by way of semiotics, and the idea that language is a system of signs that refer to each other rather than to some external reality that they necessarily describe. Likewise, under capitalism’s “law of value,” goods have an exchange value that refers to other exchange values and not to some underlying use value that would ground the whole system in something external and essential. The eventual result of this arbitrariness, in which there are no “natural” referents for signs and no natural meanings or uses implied in objects or relations, is, according to Baudrillard, “something we did not expect: a simulacrum in which the project of a universal semiotics is condensed.” That is the world understood as data, which is followed by the belief that manipulating the data can change the world.
This “project,” Baudrillard writes, is not about “the ‘progress’ of technology or the rational aims of science” (which is how AI researchers, for instance, tend to describe what they are doing). Rather, he insists that “it is a project which aims at political and mental hegemony.” That is, it is an extension of capitalist domination by means that no longer revolve around exploited labor and the production of commodities but around the reproduction of the world as no more than a system of arbitrary signs and their recombinant play. Baudrillard writes:
Practically and historically, this means that social control by means of the end (and the more or less dialectical providence that ministers to the fulfillment of this end) is replaced with social control by means of prediction, simulation, programmed anticipation and indeterminate mutation, all governed, however, by the code. Instead of a process finalized in accordance with its ideal development, we are dealing with generative models.
It’s hard to reconstruct now what Baudrillard thought he was referring to here because this seems like such a prescient account of generative AI models and predictive algorithms. Over the course of the chapter, he brings up opinion polling, DNA analysis, psychoanalysis, the two-party system, the nouveau roman, pop art, urban planning, and graffiti, and later in the book he uses fashion and the body to exemplify the closed world of signs and the general shift “from a capitalist productivist society to a neo-capitalist cybernetic order.” But it is hard to think of a better example of “the code” than generative AI.
It becomes abundantly clear that this line of thinking drives Baudrillard to a kind of despair. Much of the rest of the book is Bataille-derived hyperbolic bluster about how the only escape from the code, from simulation and the “structuralist law of value,” may be through such practices as cannibalism, suicide, ritual murder, terrorism, and the like. He goes so far as to assert that death itself is a social construct, now under the control of the code, in order to claim that we can crack that code by returning to earlier conceptions of life and death. We should, he suggests, learn from “primitive” ancestor worship how to transcend our myopic, individualistic hang-ups in thinking that one still has to be breathing to be alive.
This is the sort of provocative rhetoric that does a lot to discredit Baudrillard in the eyes of many readers, and not just the ones who aren’t cannibals. But as ludicrous as his “solutions” are, his diagnoses of simulation and social control strike me as more and more convincing the further media and technology continue along the path capitalism has set them on. They address AI with a critique that befits the ambitions implicit in its development.
Baudrillard describes the “true face of ultra-modern death” as being “made up of the faultless, objective, ultrarapid connection of all the terms in a system.”
Our true necropolises are no longer the cemeteries, hospitals, wars, hecatombs; death is no longer where we think it is, it is no longer biological, psychological, metaphysical, it is no longer even murder: our societies’ true necropolises are the computer banks or the foyers, blank spaces from which all human noise has been expunged, glass coffins where the world’s sterilized memories are frozen. Only the dead remember everything in something like an immediate eternity of knowledge, a quintessence of the world that today we dream of burying in the form of microfilm and archives, making the entire world into an archive in order that it be discovered by some future civilization. The cryogenic freezing of all knowledge so that it can be resurrected; knowledge passes into immortality as sign-value. Against our dream of losing and forgetting everything, we set up an opposing great wall of relations, connections and information, a dense and inextricable artificial memory, and we bury ourselves alive in the fossilized hope of one day being rediscovered.
The weighted parameters of AI models attempt to convert thoughts, ideas, images, etc. into quantities and numerical relations that banish altogether what Baudrillard calls the “symbolic” — aspects of existence capable of transcending market exchange, abstract equivalence, the capitalist “law of value,” and the arbitrary system of signifiers in a language with no fixed meaning. Instead, with AI models, we get a simulation of meaning so total that it makes meaning as it was previously understood impossible. This would be death in life, life as a prearranged document of itself rather than a living process. It would be less than meaningless, just like a chatbot’s current output.
Rob, this is a very nice piece; thank you for sharing. I have been following your writings on AI reverently. Surrounded by a world enamored with its false promises, I have struggled to put into words my exact critiques of AI. This does so nicely. I particularly liked this quote: “The cryogenic freezing of all knowledge so that it can be resurrected; knowledge passes into immortality as sign-value.”
I will add a thought of my own to this, based in part on some research I did on technology as a toxin for an anthropology seminar I took last semester.
Wisdom is knowledge contextualized. The deeper we fall into the digital world, the more knowledge is codified for resurrection. That resurrection, however, can only be done by today’s necromancers: big tech, global hegemonic powers, and the most influential capitalists. We laypeople do have access to that knowledge, but only through those necromancers. It is they who get to contextualize it--to turn it into wisdom that dictates the world of tomorrow. That process of re-contextualization (it seems to me that the translation of all worldly knowledge into data is a deliberate act of decontextualization) may be done obviously through AI bots and image generators, but it is also done in less overt ways: the search results Google funnels us toward and the ads we are delivered daily.
Following from some writing by James C. Scott, the industrial settler-colonialist world has acted to reduce all vernacular language and life into homogeneity. Modern semiotics is no longer done regionally or in the vernacular. The people we are physically closest to are digital worlds apart--if I pick up my sister’s phone and open her Instagram, it looks nothing like mine. And my other sister’s Instagram is nothing like that. Who makes symbols now? And for what ends?
It seems to me that *we* are no longer the semioticians of today; that role has been subsumed by necromancers like Google and the NSA. Chatbots and image generators act as another layer of translation between those necromancers and us, cementing the symbology they create for their own ends (generating capital) as facts of life and the world. Perhaps we need to learn to raise our own Dead.
Baudrillard: "Here is my premise. Now we must journey deep into Crazy Town."
Me: "I don't really want to go to Crazy Town."
Baudrillard: "Shut up! No questions! Here is your fork!"