Algorithms sometimes seem to have an ambiguous status in procedural artworks: Are they agents or tools? (Or do they challenge that reductive binary, both-and etc. etc.) In describing Coded, a recent show about computer art at LACMA, art critic Jed Perl claims that “the urge to collaborate with the computer, to see it not only as a tool or technique but maybe even a source of inspiration, percolates” throughout the exhibition. But when are artists not “inspired” by the materials they use? At what point does a technique, which can be construed as an aspect of an artist’s skill, become a “technology,” which Perl seems to believe makes art too easy for artists by doing the work for them?
For Perl, the LACMA show and the current video art show at MoMA suggest that “technological savvy and artistic naiveté all too often go hand in hand” and that “the creative spirit is no longer the maker, the homo faber of classical thought, but is now an operator or facilitator with a shiny high-tech toy.” The algorithms have become the real actors, the technology itself is doing the making, and artists have been tempted by their tools into abdicating their responsibility, “degenerating into an elaborately engineered amateurism.” Chisels have made sculptors into such lazy phonies; real artists claw the stone with their bare hands.
Perl’s praise for the computer printouts made in the 1960s by painter Frederick Hammersley is telling: “Because Hammersley is a highly sophisticated artist, he knows what to do with the technology, which is inherently inert, inexpressive. He’s fully in control.” What Perl wants to see in art is evidence of that control, of human mastery and, by implication, a revelation of those who are fit to be masters. He dismisses as “Dadaist swagger” the idea raised by Jasia Reichardt, the curator of the 1968 computer-art show Cybernetic Serendipity, that “‘anyone able either to write a computer program, or to use a computer program,’ can become an artist.” No, art matters because it can reveal an elite group of “highly sophisticated” makers, and the elite cadre of talent scouts who can identify them. Everybody else should be understood to be as “inherently inert” and “inexpressive” as the machines they resort to in their amateurish attempts to create.
I sometimes fall into a similar way of thinking when it comes to “real writers” and those capable of appreciating their texts, as opposed to those willing to “debase language” (scare-quoting to disavow my own thinking at those moments) by using LLMs to generate the text they need in various situations. What will happen to eloquence if it is readily available to everyone? (Never mind that access to generative tools will hardly be equitable.)
This seems like the wrong set of concerns, stemming from a misunderstanding of technology as “democratizing” because it can serve, in Reichardt’s words, as a “means of reformulating the boundaries and definitions of creative activity as a whole.” Such reformulations do not automatically trigger a redistribution of influence or power; they don’t undo prior exclusions, which have never been primarily predicated on merit or skill. Technology — in Perl’s sense of the term as “high-tech” media and algorithmic processing power — is far more likely to fortify existing hierarchies than to threaten them.
If it became easier to make artworks with technology, it would become correspondingly harder to get them acknowledged as such by the art world. It would make status and access to power even more central to the social constitution of what counts as art. Algorithmic processes don’t make art “easier” or more available to outsiders; they make who you know far more important. It’s not as though the “good art” will suddenly be buried under an avalanche of “fake” or “bad art”; it’s more that the same people who are seen as qualified to make art have new territories with reformulated boundaries to explore. (“Content creators,” who largely come from outside that world, are called “creators” precisely because the established art world doesn’t recognize them as artists. And even there, access to resources and distribution is not “democratic” but inflected by commercial interests and inherited forms of privilege.)
Perl’s review basically rejects the possibility that a computer can be a collaborator: “I’m not saying that the computer doesn’t have its creative uses. But a lesson to be learned from ‘Coded’ is that you can’t get out more than you put in.” In other words, a computer is strictly a tool; it can’t act with intention. One can see this either as a simple truism or as opening up semantic, Latour-ish concerns: What does “to act” mean, really? When a speed bump slows down a car, has it acted? When a door is opened, has the doorknob acted?
But as Perl deploys this truism, it champions not the inherent and singular value of human creativity and execution so much as the economic value of an artist’s charisma and accrued social status. This is what can still be “put in” to computer art when machines are perceived to be doing the rest of the art-making. Similarly, humanistic concerns about generative AI can end up tacitly endorsing the existing social relations (with all the inequities they entail) that make “human creativity” economically viable in its current form: The alibi of human genius sustains the hierarchical structures in society for which the art world and its tastemakers are a paradigm and a justification. There is a short step from “only humans can make art” to “some people are more human and their art is more significant; some people are subhuman, as their inadequate forms of expression demonstrate.”
If paeans to “true human creativity” ultimately lend support to rationalizations for sorting people, does the reverse hold true? Are strong claims for the agency of algorithms a way to challenge those hierarchies? It seems like a hard case to make, given how algorithmic systems amplify biases and obscure decision-making processes in ways that entrench the status quo. But might there be some critical value in nonhuman forms of agency in and of themselves, if you could somehow isolate that (theoretical) agency from consequential situations in the real world?
In a 2012 essay, “On Algorithmic Theater,” Annie Dorsen explores whether a “theater without human actors” can be developed. If “part of the cultural work theater does is to preserve a collective understanding of what a human is and to assure us that we are as we have always been,” then perhaps putting computer actors — chatbots, for example — on stage can make legible how the definition of “human” is malleable, situational. Dorsen described her theater-making practice then as “collaborating with algorithms as full creative partners, allowing them enormous freedom to operate unsupervised and letting them perform instead of human actors.” In 2012, when chatbots were still fairly primitive, that statement probably read a lot differently. Phrases like “full creative partner” and “freedom” had to be understood as metaphors or provocations, given how limited the algorithms’ capabilities were. With the inflated claims now being made for the agency of LLMs, it seems necessary to posit not their “freedom” but their limits.
For the 2010 piece Hello Hi There, Dorsen staged a dialogue between two chatbots, placing two laptops on stage and projecting the text they generated on a screen while text-to-speech software spoke it aloud. “Though algorithms are not, obviously, conscious living beings,” Dorsen writes, “they do evoke something like minds at work. They produce thought, they make decisions, they act.” Do they, though? The statement seems pointedly ambiguous: The computers “evoke” minds, thought, decisions, agency without exemplifying them; they “produce” representations that can stand in for those concepts, prompting “thought, decision, action” in audiences.
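To be concrete about the staging: structurally, a piece like Hello Hi There is a turn-taking loop in which each program’s output becomes the other’s input. The sketch below is hypothetical, not Dorsen’s code; the two bot functions are canned-phrase stubs standing in for actual chatbot engines, and the print call stands in for the projection and text-to-speech.

```python
import random

# Hypothetical sketch of a two-chatbot staging like Hello Hi There:
# a turn-taking loop in which each program's output becomes the
# other's input. The canned phrase lists are placeholder stubs
# standing in for real chatbot engines.

def bot_a(line: str) -> str:
    words = line.split()
    fragment = words[-1].strip(".?!") if words else "that"
    return random.choice([
        f"What do you mean by {fragment}?",
        f"I was just thinking about {fragment}.",
        "Go on.",
    ])

def bot_b(line: str) -> str:
    words = line.split()
    fragment = words[0].strip(".?!") if words else "this"
    return random.choice([
        f"{fragment}? I'm not sure I follow.",
        "That reminds me of something.",
        "Please, continue.",
    ])

line = "Hello."  # opening cue
for turn in range(6):
    speaker, respond = ("A", bot_a) if turn % 2 == 0 else ("B", bot_b)
    line = respond(line)
    print(f"{speaker}: {line}")  # stands in for projection and speech
```

Nothing in the loop decides anything; the appearance of dialogue is an artifact of alternation, plus the audience’s readiness to hear exchange as conversation.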
The people who program algorithms and place them in consequential situations are responsible not merely for what those programs happen to output in a particular instantiation but for every possible output they could generate, regardless of how massive a range of possibilities that comprises. Saying “the computer acted” is a bit like saying, “The car had an accident, and I was just behind the wheel.” This seems to be part of the point of a piece like Hello Hi There: “algorithmic performance creates an asymmetric relationship, in which the human spectator confronts something that can’t confront her back,” Dorsen writes. You’ve arrived at the accident and no one seems to have been driving.
Back in 2012, Dorsen could write that “computer-generated language seems to oscillate between sound and sense, never quite becoming fully believable human speech, but never settling into a gibberish that would relieve the listener of the burden of trying to understand.” One still had to project “meaningfulness” onto a chatbot’s outputs; one had to palpably suspend disbelief. Now computer-generated language tends to be more fluent than human-generated language, coherent and grammatically correct even when nonsensical on a conceptual level; one would have to specifically prompt it to be incoherent. We have to remember to suspend belief in systems that generate output according to probabilities of plausibility rather than any standard of verifiable or authoritative knowledge. It may once have been surprising to find that we readily project intention onto a piece of software’s output, anthropomorphize it, but now we’re being surprised by how hard it is to withhold that projection, to hold on to the fact that these systems are simulations.
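That phrase, “probabilities of plausibility,” can be made literal. At each step, a language model samples the next token from a distribution learned from text; nothing in the procedure consults a standard of truth. A toy version, with a hand-written distribution standing in for a trained model (the prompt and weights here are invented for illustration):

```python
import random

# Toy next-token sampler: a hand-written distribution standing in
# for a trained language model. The weights encode only what tends
# to follow what in text (plausibility), not whether a claim is true.
next_token = {
    "the moon is": [
        ("bright tonight", 0.4),
        ("full", 0.3),
        ("made of cheese", 0.2),
        ("a harsh mistress", 0.1),
    ],
}

def sample_continuation(prompt: str) -> str:
    tokens, weights = zip(*next_token[prompt])
    # random.choices samples in proportion to weight: fluent,
    # confident, and indifferent to verifiability.
    return random.choices(tokens, weights=weights, k=1)[0]

print("the moon is", sample_continuation("the moon is"))
```

Roughly one run in five completes the sentence with a confident falsehood; the mechanism is identical either way.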
The question becomes how to prevent an overestimation of nonhuman agency (which would provide cover for the humans actually responsible for algorithmic agents) without lapsing into the sort of ideology that reflexively proclaims the necessity of human geniuses (and the corollary of nonhuman nongeniuses).
In “Live From Cyberspace” (2002), performance theorist Philip Auslander argued that chatbots would shift the valence of the term “live performance,” which, as he pointed out, originally emerged with the advent of radio to indicate the status of an otherwise ambiguous broadcast. In that context, “live” meant “not prerecorded.” With chatbots, a new ambiguity comes to the fore, a Turing-test question of whether a “live performance” is carried out by living beings. “The chatterbot forces the discussion of liveness to be reframed as a discussion of the ontology of the performer rather than that of the performance,” he claimed. Rather than wonder whether something was prerecorded, we will wonder whether it was algorithmically generated rather than organically performed.
But why would ontological ambiguity even matter? The ontological distinction seems like a distraction from the more pressing ambiguity of who can be held responsible for something that has occurred. The distinction matters only insofar as it gives responsible parties plausible deniability: I built it, but I didn’t tell it to do that specifically. Auslander asserts that “chatterbots are not playback devices” but “are themselves performing entities that construct their performances at the same time as we witness them.” That is ultimately as ambiguous as Dorsen’s remark that algorithms “act”: both statements evoke the appearance of acting. “Chatterbots” are put in a position to appear as agents; they do not put themselves there. They serve as a simulation or a representation of what agency looks like, which is different from the thing itself, an agent acting with willful intention. Chatbots are better understood as procedural “playback devices,” or as “stochastic parrots,” to use the current phrase, than as “performers” or “actors.”
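The “playback device” claim can also be put in code. A conditional-output program, however large its repertoire, amounts to a table of conditions and responses plus a selection rule. Here is a deliberately minimal ELIZA-style sketch (the patterns are invented for illustration) of what “a range of conditional outputs” means:

```python
import random
import re

# Minimal ELIZA-style responder: a fixed table of (pattern, responses).
# Everything it will ever "say" is enumerable in advance; the program
# plays back its repertoire, conditioned on input, without intention.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\byou\b", re.I),
     ["We were discussing you, not me."]),
    (re.compile(r".*"),  # catch-all condition
     ["Please go on.", "Tell me more."]),
]

def respond(line: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            template = random.choice(templates)
            return template.format(*match.groups())
    return "..."  # unreachable, given the catch-all rule

print(respond("I feel like no one is driving"))
```

Every line it will ever produce is already latent in the table; scaling the repertoire up by billions of parameters makes the enumeration intractable, but arguably doesn’t change the ontology.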
Algorithms pass for “actors” because they are programmed to produce a range of conditional outputs. But this doesn’t make them spontaneous agents. Dorsen suggests something like this in a 2022 interview:
I teach a class on chance operations in performance. And usually about two-thirds of the way through the class, the students start to get that fundamentally, chance is other people. What others do is what creates true indeterminacy.
Algorithmic actors are plausible to the degree to which human action has already been codified into data and made predictable. The distribution of algorithmic “agents” throughout society will extend that codification, allowing data to foreclose more and more of the space of human indeterminacy, replacing freedom with the procedural simulation of it. Generate enough output, program enough feedback loops, and eventually it will seem to set us free.
On Twitter, Jason Read happened to point to two apposite quotes. One is from Marx: “All our invention and progress seem to result in endowing material forces with intellectual life, and in stultifying human life into a material force.” The other is from Donna Haraway: “Our machines are disturbingly lively, and we ourselves frighteningly inert.” The deployment of algorithms doesn’t merely reflect that condition but compounds it. This is not simply because algorithms “act,” but rather because they are deployed.
Earlier this week I was at MoMA, where a massive work by Refik Anadol called Unsupervised is currently installed in the lobby. The documentation describes it as “a meditation on technology, creativity, and modern art” that uses a “sophisticated” — that word again — “machine-learning model to interpret the publicly available data of MoMA’s collection,” but I thought it basically looked like a two-story screensaver. It seemed like something you would see in an airport, something expensive and highly engineered-looking that establishes an atmosphere without making you late for your gate. From a balcony I watched people join and leave the ever-changing crowd gathered in front of it as it burbled on, blobs of color morphing into different blobs of a different color. What were they thinking? Did they attribute any agency to the work itself? Did they understand that there was a specific data set behind it, and a rationale for the endless visual transformations on display? Did they think they were glimpsing something tangible about the processes of algorithmic metamorphosis?
Anadol seems to think such questions about an audience’s conscious responses are irrelevant. When the work was first installed, he claimed that it had “the potential to generate new discourses about how our faculties of perception are changing now that machines are inseparable witnesses of our activities and environments.” But these are not discourses that an audience would be invited to participate in as a rational, deliberative party. “In fact,” Anadol explained, “we are currently designing a research protocol about the immediate effect of Unsupervised on the viewer by collaborating with neuroscientist Dr. Adam Gazzaley to measure brain signals, heartbeat, body temperature, and skin conductivity at the moment of experiencing the work.” This will perhaps be the characteristic aspiration of algorithmic art going forward: to showcase the agency of machines to justify the final neutralization of the agency of audiences.
"But these are not discourses that an audience would be invited to participate in as a rational, deliberative party."
Can you think of a practical way to do that? Would there be some sort of threshold or qualification for participating?