Slide machines
Recently, the above work of AI-generated art was auctioned for $432,500. It was based on code written by Robbie Barrat, a researcher who didn't see any of that money. As this Verge article by James Vincent details, his code was adapted by a group of French students who took the project in a more entrepreneurial direction. The resulting artwork is less the image above or even the code that produced it than the process by which one manages to get paid for it by the art world. The work, that is, is the social networks leveraged and the media coverage produced; the image is a kind of residual documentation of that performance.
Vincent attributes the students' success in part to "their willingness to embrace a particular narrative about AI art, one in which they credit the algorithm for creating their work." Giving algorithms undue credit is nothing new: this example is a variant on what Astra Taylor has called "fauxtomation," and what Jathan Sadowski wrote about as Potemkin AI, where production processes are represented as AI to devalue the human labor that is actually doing the work. But here, AI is in a way protecting the sanctity of the human impulse to create. If algorithms and not humans made the art, then the market can do whatever it wants with it without betraying anyone's ideas of aesthetic integrity. The attribution to AI absolves the art market of commercializing some precious gesture of a human artist and emphasizing monetary value over aesthetic value.
In this case, the work's monetary value is its aesthetic value — the image is significant only because someone paid that much for it, that price is its "content." No artist-villain like Jeff Koons or Damien Hirst is required if you can have algorithms producing the works. Then Christie's can take center stage, as Vincent points out: "it has since presented the auction as a provocative gesture that refreshes the company’s brand and stakes its claim in any lucrative new art market."
Often algorithms are deployed as if they could eliminate bias or at least distract from it; they obfuscate the human input into a particular decision-making process so as to make it seem more objective. But this means that instead of the bias affecting a direct human decision, it's displaced into the data — what was chosen to be collected and fed to the algorithms, what sorts of biases the data reflects — or into the assumptions governing the programmer's coding. The AI process can make it appear as though the machine decides for its own reasons. It reproduces the biases of the past as if no one is responsible for them.
AI art figures this displacement as a kind of creativity that can be seen in the works that machine learning algorithms produce. The disavowal is laundered in the celebration or valorization of that creativity, as in the Christie's auction. AI in general functions as a useful scapegoat for the expropriation of value, which isn't some tangible quality that inheres in things but is entirely a matter of human relationships. Where value is created, people are generally exploited almost by definition, if you take "value" to express the relative power humans can exercise over one another. The fantasy that art can be made without human artists crystallizes the idea that you can "create value" without injustice, without political struggle over how it should be distributed.
***
Barrat and Janelle Shane — an AI researcher who writes the AI Weirdness blog — seem like they've emerged as the friendly faces of machine learning research, designing experiments that highlight the fun side of AI generativity: how you can use neural nets to make otherwise inconceivable Halloween costumes or designer clothes, or to play flarf-style language games. In an interview with Arabelle Sicardi, Barrat says, "working with AI and generative art is nice because people can't really misuse your software or your results." (Which seems like a generous thing to say when your work has been hijacked and used to bolster the art world.)
In these projects, the algorithms typically produce material that is close enough to evoke the genre the researchers are toying with, but still off enough to mainly be amusing and reassuring. That sweet spot is the same one, I think, that most ad-targeting algorithms and recommendation engines are zeroing in on: they are off enough to get you to let down your defenses, to feel as though you are still in control of who you are.
I tend to find this stuff irresistible — I'm susceptible to a version of the fantasy of "creativity" without conflict, only I'm drawn to creativity that appears separate from any human creators I would need to feel envious of. AI creativity appears as creativity with no human strategy behind it, no intentionality that could sully the creativity. The algorithms aren't trying to score points with anyone; they're not trying to be cool. It seems like art without ego. (I have always liked bubblegum music like The Archies and the 1910 Fruitgum Company for similar reasons. The identity of the musicians is superfluous to the strict adherence to a formula. The music isn't uncalculated but hyperconformist: as with machine learning algorithms, it so strictly adheres to rules that it produces something uncanny. It foregrounds "being manufactured" rather than the pretense that it is somehow "creative.")
Human intention is easy to read as predictable, no matter how surprising it may at first appear. You can work backward from how a thing is received and impute a calculated strategy after the fact. AI output feels more surprising to me because I don't read it as motivated, as trying too hard. The way it "learns" what to make comes across as a compelling lack of purpose.
Of course, there is still human intention driving these projects, but it is abstracted a step further away from the output. Barrat suggests AI can "augment artists' creativity" in producing "surreal" combinations that the artist can then sift through or refine. "A big part of my role in this collaboration with the machine is really that of a curator, because I'm curating the input data that it gets, and then I'm curating the output data that the network gives me and choosing which results to keep, and which to discard." As Sicardi puts it, "When you actually put an algorithm in your hands, it forces you to create versions and derivatives. It draws conclusions you wouldn’t have considered, because it lacks the context that may inhibit you." The AI programmers are in the paradoxical position of producing intentional accidents, which probably holds for most process-oriented art. It makes me think of Warhol's silk-screen process (a Whitney retrospective just opened), and how he sought to assume an artistic posture of passivity, of plausible disavowal — works that "just happened."
Often the write-ups about generative AI projects stage what the algorithms produce as a progressive revelation. First, some funny, random-seeming results are curated, followed by examples that feel increasingly inevitable as the adversarial networks refine themselves. It's not aesthetic purposefulness per se but some kind of deeper destiny being put on display. Referencing Google's "Deep Dream" project, Hito Steyerl describes this refinement process as a kind of "automated apophenia" that can "reveal the networked operations of computational image creation, certain presets of machinic vision, its hardwired ideologies and preferences." What the neural nets zero in on is not accidental or creative; it instead reflects the cumulative results of the networked surveillance that has fed them their training material. What becomes legible to human viewers are the visual traces of the ideology the algorithms work to reproduce — the reality they try to impose on and through the data sets they work on.
In an essay for New York magazine, Malcolm Harris described machine learning logic as an expression of glitch capitalism: Machine learning's amoral, uninhibited-by-context way of processing provides concrete illustrations of how the market, another amoral calculating machine, functions more broadly.
Because these programs are looking for the best (read: most efficient) answers to their problems, they’re especially good at finding and using cheat codes. In one disturbing example, a flight simulator discovered that a very hard landing would overload its memory and register as super smooth, so it bashed planes into an aircraft carrier. In 2018, it’s very easy to feel like the whole country is one of those planes getting crashed into a boat. American society has come to follow the same logic as the glitch-hunting AIs.
Capitalism as a whole rewards ruthlessness; algorithmic decision making is a means of optimizing for it, while seeming to insulate humans from responsibility.
***
Procedurally generated art or memes (like the inspirational post above, generated with a click of a button) make me want to proceed immediately to procedurally generated selves that could surprise and delight us. Not bots or AI-operated personalities, but versions of ourselves extrapolated through machine learning procedures. It seems to me that this is where the disavowal of agency in favor of "authenticity" leads.
The copious amounts of data collected about us can be fed to neural nets and adversarial networks to iteratively generate unbiased, totally authentic lives that we can live in vicariously without having to face any of their consequences. Algorithms can solve for a "true self" and present any number of plausible answers, as many as we want, as many times as we click the button, and we can curate our best life from among these preposterous futures made entirely from the detritus of our past.