I am mildly trypophobic — that is, I am revolted by certain images of symmetric profusion: seed pods, viruses under a microscope, voids appearing as positivities, that sort of thing — my mind balks at dredging up examples. I felt the fear even before I had to look up how to spell the word trypophobia, as I knew the search results would helpfully present me with some extremely repulsive specimens.
When I generate images for these posts, I sometimes have a similar moment of panic when the images first pop up, that they will present not what I asked for but instead a glimpse of machinic metastasis. There is always the threat that their usual uncanny-valley wrongness will tip over from being preposterous or unconvincing into something more grotesque. (See the fourth window in the image above.) In an essay for the Verso Books site, A.V. Marraccini notes that with an image generator, “the algorithm has seen the pattern in images of hands, they go finger-finger-finger-finger, but it doesn’t know when to stop adding fingers, or how they bend.” Similarly, when generating images of cities, it “treats columns like fingers too. There are a lot of them, in vast rows, growing uncannily in to the distance. The images have too many columns, and too many cupolas.”
In these anomalies the algorithms themselves become visible not as guided applications to solve specific problems but as seemingly undead chthonic forces that proceed blindly and relentlessly without purpose, consuming time and space with pointless mutations threatening to stuff your eyeballs full of fractal filigree. Supposedly the models are being improved on these fronts, but I still can sense the extra fingers below the surface, waiting to emerge en masse from an image’s overworked seams and textures. I can imagine being repeatedly goatse-ed by an AI programmed to send me images generated by algorithmic iterations on trypophobia prompts.
It’s undoubtedly a projection on my part that I see in specific generated images what I take to be the implications of the method by which they are made. Each image implies an infinite number of others produced along the same lines, yielding the same slight variations that make the trypophobic images of profusion so disgusting. But this is trypophobia at the level of form rather than content, a fear of procedural generation as an unmoored phenomenon, poised to blanket the world with lacunae.
In an effort to better understand procedural generation not so much as a technique but as an affect, I recently read the 1997 book Hamlet on the Holodeck: The Future of Narrative in Cyberspace, by Janet Murray, who was then a professor at MIT and apparently deeply steeped in techno-enthusiast ideology. As the title suggests, the book wants to explore how computer simulations can be considered “great art” and the various avenues the “cyberbards” of the future should traverse to make the inevitable masterpieces of new media. Most of its descriptions of interactive works seem tedious now, but they were likely aimed at justifying the once academically scandalous idea that video games could be emotionally engaging and function as literature, as if that were the aspiration that would legitimate them.
If its details and nuts-and-bolts prognostications now seem beside the point — its account of “chatterbots,” for example, fails to anticipate LLMs and presumes they will always need human dialogue writers — its effusive excitement for new media technology offers a useful comparison with the current hype. In a characteristic paragraph from the introductory chapter, hackers are described as “the most creative” students at MIT, the “wizards and alchemists” who were “playfully breaking into other people’s computers.” How fun!
Computers themselves are positioned as a boon to the humanities, a new and powerful facilitator for the human imagination. “Although the computer is often accused of fragmenting information and overwhelming us,” Murray writes, “I believe this view is a function of its current undomesticated state.” Because of course, we will inevitably “domesticate” computers rather than be “domesticated” by the well-capitalized powers pushing them into everyday life. “Using the computer, we can enact, modify, control, and understand processes as we never could before,” Murray writes, overlooking the reciprocal implication that those processes would also modify and control us, all while becoming increasingly opaque and incomprehensible.
But the title of Murray’s introductory chapter — “A Book Lover Longs for Cyberdrama” — gets at what seems most myopic about the book in general: the repeated assumption that because a new media technology affords certain new possibilities, those possibilities are automatically desired.
The current discourse about the metaverse is obviously one long extended riff on this bias: As simulations become more capable, of course they will become more popular, because new capabilities automatically impose new desires — they tap into latent demand for whatever possibility was also at the same time beyond consumers’ wildest dreams. Murray uses this ouroboros to rationalize the appeal of procedural generation, then mostly a matter of choose-your-own-adventure hypertexts and primitive open-world games. In a series of chapters on the “characteristic pleasures of digital environments,” Murray tries to narrate these pleasures into existence by simply describing what they are supposed to be. It feels like an extended sales pitch presented under the guise of literary analysis, in which identifying an aspect of digital storytelling is equivalent to identifying a new “pleasure.”
This section of the book culminates with a passage that lays bare the legerdemain:
The three aesthetic principles described in this section — immersion, agency, and transformation — are not so much current pleasures as they are pleasures we are anticipating as our desires are aroused by the emergence of the new medium. These pleasures are in some ways continuous with the pleasures of traditional media and in some ways unique. Certainly the combination of pleasures, like the combination of properties of the digital medium itself, is completely novel. To satisfy our desire for this new combination of pleasures, we will have to invent techniques of authorship that are similarly eclectic.
Murray seems to want to have it both ways: We already supposedly want to have the kinds of procedural narratives we are only now learning about through rudimentary examples. The “combination of pleasures” possible in new media is “completely novel” yet also reflects a “desire” that we already long to “satisfy.” Even as the emergence of new media “arouses” our desires, it also demonstrates the shortcomings of the existing “techniques of authorship” that nonetheless manage to provoke that arousal. But this kind of incoherence is unavoidable if you want to claim that new media technology caters to a universal desire it also singularly invents. Those “pleasures” don’t derive from human nature or the intrinsic properties of a media technology; they are sustained by social norms and political-economic conditions that frame what is experienced as pleasure, what registers as “immersion” and “agency” and permissible “transformation.”
The proponents of generative AI are now preaching something similar to Murray: that the desire for generated texts and images is self-evident, automatically “aroused by the emergence of the new medium.” But the constant dissemination of generated output suggests something different, that an extended training session for consumers is under way, attempting to teach us how to enjoy the idea of procedural generation and all its novel powers — how we can come to feel that we have domesticated them.
If we can be made to accept that the demand for new technology flows automatically from its capacities, we might not pay as much attention to the means by which that demand is actually manufactured or coerced, or what sorts of desires and relations its manufacturers are trying to pervert.
Roland Meyer posted a mini-essay to Mastodon speculating about a potential post-photographic implication of generated images. Over time, everyday life has been reshaped in terms of what photography makes possible. “Even as it has sought to represent the world, photography has also transformed it, making reality more photogenic by turning places and events into photo ops.” Given that generative models can synthesize photogenic images and fabricate photo ops that never occurred, reality will perhaps no longer need to be so photogenic itself. Meyer writes:
As AI becomes more and more integrated into everyday image production, we need less and less "reality" to produce more and more impressive pictures of it. The world only gives us the raw data, everything else happens in post-production. We don’t need to wait for the perfect sunset, our dinner doesn't have to look flawless, and we don’t have to worry about other people ruining our perfect shot.
Meyer wonders if this will undermine “shared visual reality.” But that conclusion would seem to depend on treating “photogenicity” as something intrinsic instead of seeing it as a mediated social relation. What makes something photogenic is not some objective quotient of attractiveness but its ability to convene shared visual reality: You can imagine that it can reasonably compel someone else’s attention.
If generated images make it too easy to re-create current standards of photogenicity, then those standards will be negated, and people will aspire to impress each other with something else. The pleasure in a pretty photo ultimately depends on its potential social use, even if it’s only imagined. A new technology that simulates the surface appearance of what is currently socially useful doesn’t also convey that social use; rather it empties those appearances out. Likewise, the demand for generated images doesn’t depend on their novelty but on their social currency, which depends on a broader range of factors.
When I go running in the park, one of my routes takes me through a section of woods where people go to birdwatch. (I’m sure this endears me to them.) Invariably I see someone carrying a camera with one of those telephoto lenses attached that looks like several 64-ounce Big Gulp cups fused together, and I assume they are trying to get some finely detailed image of a bird that a generative model could no doubt simulate. The fact that the wildlife photos could also be faked doesn’t seem likely to deter anyone from trying to capture their own trophy photos. I assume the process itself is rewarding on some level, that their basic and outmoded technology gives them a story to tell, or, if you prefer, embeds them in an immersive narrative in which they have a particular kind of agency that they perhaps see as being stripped away from other people. The photos may be of birds, but they also depict that act of resistance or defiance, and in that way they have social implications even if no one else ever looks at them.
It may be that new technologies inspire a fantasy that we can escape the sociality of pleasure and desire, that we can find a way out of the endless negotiations of recognition and reciprocity. When ASMR first emerged as a fad, I tried watching some videos that were supposedly engineered to provoke the phenomenon. I thought that if I watched enough, I could perhaps convince myself that I too was experiencing the sensation that people were describing, that I was participating in the ASMR moment and the new kind of feeling that the internet seemed to have consolidated. But at the same time, the essence of ASMR seemed to be that its tingling feeling came on unprovoked as a sign of a pure sensuous engagement with what was otherwise content. I wanted it to prove that I could participate in something social without it feeling social, that instead it would happen to me at some pre-discursive, noncommunicative level.
I never ended up experiencing ASMR; instead I began to be freaked out by the sheer volume of videos available, an indiscriminate profusion that seemed ready to expand into whatever networked space was made available to it, assimilating any kind of encounter or scenario and reducing it to over-miked sounds and repetitious visuals. Instead of feeling soothing and relaxing tingles, I experienced a vague sense of irrational panic, as if my trypophobia was kicking in and I was seeing the holes everywhere.
To understand the essence of technology in our modern age is to understand that technology is an ordering and calculating pursuit of efficiency for its own sake. Of course this is driven by capital. This idea has infused itself so intensely into society that every discussion of technological potential is wrapped around the idea of replacing something old and less useful or efficient with something superior. Every new technology provides an opening for our modern way of living by giving us more mastery over something that we previously had to overcome. We may not have known we’d need to overcome it, but technological thinking has brought forth our desire to do so by providing the solution.
This is something much more devastating to our nature as humans than the technology itself. This way of being human has become wired into us so deeply that we strive to order things efficiently, so much so that we even do it to ourselves. We treat ourselves as resources to tap into when there are problems to overcome; we entrepreneurialize ourselves, with even more options to do so in our new age of AI.
So many of the articles responding to AI are caught in this dilemma of efficiency and flexibility. It’s going to change our lives! For the better, for the worse! A new way to lose ourselves.
I did not know that I needed more efficiency in experiencing “reality,” or that we needed to find a more efficient way to impress each other. I like the example of the birdwatchers because it’s such a contrast to losing oneself in efficiency: it’s all about finding oneself in the world, in nature, face to face with a way of being that brings fulfillment. It’s a testament that we can choose our way of being; we do not need to remain in a state where reordering ourselves is the paramount task. The more we recognize this and let the world gather us rather than trying to master it, the more we can be in touch with ourselves.
Isn’t this just another iteration of Walker Percy’s “The Loss of the Creature”?
I’ve never been interested in taking photos and generally don’t take them, in order to avoid that diminishment, in the same way that I avoid being a tourist. If I take photos, it’s rarely of people. Pre- and post-digital photography, I just want to document or find a new way to see and represent a special landscape. So the technology has always been a way to slow down and see things more clearly or just differently, just as you describe with the birdwatchers. I’d never want to mistake their trophies for an AI-generated image, and I’m not worried about that happening, because the real thing comes with that experience and story attached. If National Geographic tried to fake us out, we’d reject it — and I suppose that would signal the end of anything new to see. It will all be online, grist for a collective AI dream.
If there is any resistance or defiance at work, it's the refusal to be a tourist/pornographer always looking to consume and extract attention-grabbing novelty from a fully documented and exhausted planet.