Alone with information
In a recent essay for Artforum, Claire Bishop assesses the trajectory of “research-based art,” which is something like the highbrow version of a conspiracy wall, making a spectacle of an artist’s efforts to gather evidence and stage suggestive connections and counternarratives. In Bishop’s description,
the genre is characterized by a reliance on text and discourse to support an abundance of materials, distributed spatially. The horizontal axis (vitrines, tables) tends to be privileged over the vertical, and the overall structure is additive rather than distilled, obeying a logic of more is more.
These works defy audiences to fully process them; even if you felt up to sifting through all the materials, there is no guarantee that the exhibition space would remain open long enough for you to attempt it. While specific research-based works take on specific subjects, the point doesn’t necessarily seem to be to present findings that convince viewers of certain conclusions. At the same time, they offer a kind of informational sublime: a demonstration of how the process of research itself means confronting awesome and unfathomable archives that exceed the imagination’s capacity to absorb them.
Research-based art as a genre — “its techniques of display, its accumulation and spatialization of information, its model of research, its construction of a viewing subject, and its relationship to knowledge and truth” — reflects, Bishop argues, how internet technology has altered our relationship to information. Whatever else such works are about, they are also about how to cope with being confronted with too much information, modeling different dispositions one can assume toward the relentless production of data and connectivity.
Whereas the 1990s works tended to undermine master narratives and hegemonic authority — they “critiqued linear history as evolutionary, univocal, masculinist, and imperial” to present the complexity of conditions in themselves — in the 2000s they turned toward idiosyncratic narratives of local meanings and the capacity of the individual subject to generate them: “Drifting from signifier to signifier, the artist invents meandering trajectories between cultural signs,” Bishop writes. Eventually, in the “post-internet condition,” artists begin to create works that resemble image-search results. “Artists no longer undertake their own research but download, assemble, and recontextualize existing materials in a desultory updating of appropriation and the readymade,” Bishop argues. “What results is a conflation: Search becomes research.”
Search involves the adaptation of one’s ideas to the language of “search terms”—preexisting concepts most likely to throw up results—whereas research (both online and offline) involves asking fresh questions and elaborating new terminologies yet to be recognized by the algorithm.
That distinction seems significant not just to research-based art but to our emerging relationship to LLMs and “generative AI.” LLMs are frequently touted as research tools that will help humans process a far vaster archive of potential information, itself the product of an ongoing information-technology “revolution.” Because so much more information is being produced and has become accessible, previous research approaches have purportedly become outmoded, incapable of coping with the vastness of the potential corpus. Hence the “computational turn” in critical analysis and the ideology of “digital humanities,” which applies data analytics and computer-science-derived methodologies to culture, a subject Bishop critiques in another recent essay, “Against Digital Art History.”
Quantitative methods are often presented as an alternative to theory-driven research in the humanities, as Gary Hall details in a 2013 American Literature paper that Bishop cites. Close readings are replaced by statistical assessments across a far wider range of works, as in Franco Moretti’s “distant reading.” This approach is typified by data visualizations and n-grams showing word frequencies and other patterns. But “if we do not explicitly do theory,” Hall argues, “we end up doing simplistic and uninteresting theory that remains blind to the ways it acts as a relay for other forces.”
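To give a sense of how thin the “pattern” layer can be, here is a minimal sketch in Python of what a word-frequency version of distant reading amounts to at its crudest; the two-text “corpus” and the bigram counting are invented for illustration and bear no relation to Moretti’s or Manovich’s actual pipelines. The counting itself is trivial; everything that matters, on Hall’s account, lies in the untheorized decisions about what counts as the corpus and what the counts are taken to mean.

```python
from collections import Counter
import re

def ngram_counts(text, n=2):
    """Count n-gram frequencies in a single text (a toy stand-in for 'distant reading')."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

# Invented two-"novel" corpus; a real distant-reading project would run this
# over thousands of digitized works.
corpus = {
    "novel_a": "the sea was calm and the sky was clear",
    "novel_b": "the sky darkened and the sea rose",
}

# Aggregate bigram counts across the corpus instead of reading any one work closely.
totals = Counter()
for title, text in corpus.items():
    totals.update(ngram_counts(text, n=2))

print(totals.most_common(3))
```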
As an example, Hall examines Lev Manovich’s Cultural Analytics project, which frames itself in these terms. As Manovich puts it, the “key idea of Cultural Analytics is the use of computers to automatically analyze cultural artifacts in visual media, extracting large numbers of features that characterize their structure and content.” That, in essence, is what “generative AI” does as well, only it takes the additional step of presenting results not as statistics but as a set of executed probability calculations. The word “automatically” can be made to bear a lot of weight and to conceal a lot of assumptions about the meaning of statistical patterns, assumptions that then disappear into the maze of complexities within a model’s “neural nets.”
Bishop argues that a computational approach to cultural analysis typically demonstrates “a limited grasp on how to frame a meaningful research question” and “perpetuates uncritical assumptions about the intrinsic value of statistics.” Hall interrogates Manovich’s aspiration to “create much more inclusive cultural histories and analysis—ideally taking into account all available cultural objects created in particular cultural area and time period,” asking:
What would all the available cultural objects created in a particular cultural area and time period be? What theory of the cultural object—or cultural area and time period, or indeed culture—is being used to underpin such research? And, again, what types of analysis and questions are being privileged? How are all these images and objects being structured for retrieval and analysis? What is being left out? (At the very least everything that cannot be so digitized and structured presumably?) And how do such (non)decisions affect the analysis?
Generative AI shares these problems and compounds them. It presumes that everything can be made commensurate as data, regardless of the context and conditions under which that data was collected and regardless of those collection processes’ specific measurement techniques, biases, and omissions. Instead, an ever-growing database is taken as a fully determinable totality in which everything can be made to explain everything else. “Computational metrics can help aggregate data and indicate patterns, but they struggle to explain causality, which in the humanities is always a question of interpretation,” Bishop writes. “In effect, a post-historical position is assumed: the data is out there, gathered and complete; all that remains is for scholars to sequence it at will.” LLMs propose to save everyone the trouble of this sequencing as well. Any kind of question or prompt can be given an immediate answer; how that answer is arrived at is sidelined as irrelevant.
Much like algorithmic filtering on social media platforms “solved” the problem of too much information coming in from too many sources by pre-empting the user’s capacity to discriminate for themselves, generative AI arrives as an ostensibly mandatory solution for coping with a broader cultural information overabundance, what Bishop calls “post-digital fatigue.” It presupposes the impossibility of effectively doing your own research under the given conditions of data overload and replaces it with “prompting” as a manageable alternative.
In a previous post, I described LLMs as bringing the post-structuralist “death of the author” to life as a readily accessible service: They produce texts as pastiche without any authority behind them, seeming to undermine the integrity of the “author function” and center the interpretive capacity and intentionality of the reader. LLMs make you bask in your own agency when you see how efficiently your prompts garner immediate results that you can use and discard as you see fit.
But Bishop’s account of research-based art brings out the opposite implication, that LLMs reinforce our sense of interpretive inadequacy. LLMs are put forward as the only means of imposing narrative coherence on information when we are left agog at the sublime surfeit of it all; they are authors when we can only be spectators. Bishop quotes a recent review of a 1990s work of research-based art: “the viewer browsed around, forever waiting for the artist to arrive in some authorial form to tell her how it fit together. This is what it feels like to be alone with information: awash in abundance, forever waiting for the connection to go through, confronted with the generous and endlessly frustrating opportunity to make sense of matter.”
ChatGPT shows up as that “authorial form.” It is presented as a literal companion to keep us from feeling “alone with information.” Rather than leave us to the exhausting task of sifting through the “abundance” of data now available, it digests that data for us, presenting it as already authored into whatever form we might request. Since any connections we might make among the ridiculously minuscule amount of data we are capable of analyzing are of course insignificant, we can relinquish our desire to make any of our own connections at all: The connections are all there, ready-made, in the vast architecture of the model. Instead of making connections, instead of researching, we can assume an aesthetic attitude toward information, invest ourselves in the elegance or cleverness with which it’s presented, preen ourselves on how well we’ve had it framed.