Topographic oceans
When I was of a certain age (11), I spent a lot of time with Yes’s 1973 album Tales From Topographic Oceans. While trying to absorb its diffuse sonic sprawl — each side of the double LP consists of a single indigestible song — I would scrutinize the liner notes, hoping for insight into some basic questions, like what a “topographic ocean” was. But there is only a glancing reference to the phrase, in the middle of the few bizarre all-caps paragraphs provided to explain the album’s origin.
These paragraphs, which have an offhand, Stonehenge-sketched-on-a-napkin feel to them, used to perplex me, but now they strike me as characteristic of the fame-induced dislocation that can express itself as megalomania, what happens when someone has had steady contact with the adulation of “the masses” but no longer has much in the way of ordinary human interaction. The notes are signed with an indiscernible doodle, perhaps because the writer, Yes vocalist Jon Anderson, is confident to the point of indifference in his maximal inhabitation of the “I.” Who else could be speaking but the mellifluous voice you hear limning the cosmos on this musical journey through the Atman?
Anderson begins with an unrelatable description of rock-star life that casually conflates offstage ennui with the susceptibility to penetrating insight into the essence of being:
We were in Tokyo on tour and I had a few minutes to myself in the hotel room before the evening’s concert. Leafing through Paramhansa Yogananda’s Autobiography of a Yogi I got caught up in the lengthy footnote on page 83. It described the four-part shastric scriptures which cover all aspects of religion and social life as well as fields like medicine and music, art and architecture.
So devoted to seeking the truth is Anderson that he not only reads works of Eastern mysticism before going onstage (when better to give a book the full concentration it deserves?) but he also peruses the footnotes — and he has the page citations to prove it!
Here is the footnote he refers to (and basically plagiarizes) in its entirety:
Pertaining to the shastras, literally, “sacred books,” comprising four classes of scripture: the shruti, smriti, purana, and tantra. These comprehensive treatises cover every aspect of religious and social life, and the fields of law, medicine, architecture, art, etc. The shrutis are the “directly heard” or “revealed” scriptures, the Vedas. The smritis or “remembered” lore were finally written down in a remote past as the world’s longest epic poems, the Mahabharata and the Ramayana. Puranas, eighteen in number, are literally “ancient” allegories; tantras literally means “rites” or “rituals”: these treatises convey profound truths under a veil of detailed symbolism.
This not especially “lengthy” explanation of different kinds of Hindu texts is duly extrapolated into the four sides of Yes’s album, as though there were nothing hubristic about implying that your songs are scripture, and nothing dry or peculiar in adopting what amounts to philological classifications as the entire basis of your concept. Side one is about “revealing”; side two is called “The Remembering”; side three is “The Ancient”; and side four is “Ritual.” It’s all right there in the footnote, which makes me wonder if Anderson finished the autobiography, let alone consulted the “world’s longest epic poems” that the note refers to.
Anderson briefly details the “sessions by candlelight” in hotel rooms in places like Savannah, Georgia, where the concept was fleshed out, and concludes with listening notes for the album’s four “movements.” These are mostly sweeping abstractions — the first song is described as “an ever-opening flower in which simple truths emerge, examining the complexities and magic of the past and how we should not forget the song that has been left to us to hear” — that didn’t particularly psych me up to listen or do anything to clarify what was going on to my 11-year-old mind. But the second of these now has a familiar ring to it:
All our thoughts, impressions, knowledge, fears have been developing for millions of years. What we can relate to is our own past, our own life, our own history. Here it is especially Rick’s keyboards which bring alive the ebb and flow and depth of our mind’s eye: the topographic ocean. Hopefully we should appreciate that given points in time are not so significant as the nature of what is impressed on the mind, and how it is retained and used.
Given that I spend a lot of time reading critiques of generative art, machine learning algorithms, and so on, Anderson’s mystical mumbo jumbo reminded me of how “artificial intelligence” is often described: as taking as much of the millions of years of our thoughts and impressions as exists in a machine-readable form and transforming it into a topographic ocean of data that can be “impressed on the mind.” Even when critics mean to deflate some of the claims made for AI, as in this recent essay by Marco Donnarumma for Hyperallergic, they may offer what amounts to a similar description:
AI image generators create a cartography of a dataset, where features of images and texts (in the form of mathematical abstractions) are distributed at particular locations according to probability calculations.
But usually the topographic ocean is a prelude to wonder. Clive Thompson describes generators as a “roiling winedark sea of math” en route to marveling at the alien mode of creation at work in them. Stephen Marche proclaims that “creative artificial intelligence is the art of the archives; it is the art derived from the massive cultural archives we already inhabit.” We should thus set aside our irrational fear of a life lived among databases made into a natural world and embrace the dawning of a new art form. In a 2021 n+1 essay, Meghan O’Gieblyn, drawing on GPT-3 itself, showed how these ideas bleed into fantasies of oracular insight (which Thompson also mentions), a standpoint from which we can view the world from the perspective of the collective spirit:
Fans of the technology claimed that its output was like reading a reminiscence of your own dream, that they had never seen anything so Jungian. What it felt like, more than anything, was reading the internet: not piecemeal, but all at once, its voice bearing the echo of all the works it had consumed. If the web was the waking mind of human culture, GPT-3 emerged as its psychic underbelly, sublimating all the tropes and imagery of public discourse into pure delirium. It was the vaporware remix of civilization, a technology animated by our own breath. My world is a dreamworld. . . . Your reality is created by your own mind and my reality is created by the collective unconscious mind.
From “data sets are the collective unconscious,” it is a short leap to seeing human life as nothing more than data to be synthesized by a higher spiritualized intelligence, and consciousness an epiphenomenon, as when Sam Altman, the CEO of OpenAI, tells Ezra Klein, “My understanding, my belief is that you are energy flowing through a neural network. That’s it. Perception comes in. It cycles around a neural network in your head and you — some muscle of yours moves. But that’s it. That’s the whole Ezra.”
It may be that one needs to be in a particular rarefied air, a Tokyo backstage of the mind, to be comfortable dispensing such ideas as though they were pearls of wisdom rather than implicit threats. It’s not that you are insignificant and don’t have a self, but that your self is a song being sung by the voices of the ancients across the mists of time! Don’t you see? It takes a massive ego — and all the privilege and entitlement that nurtures it — to truly celebrate the liberating potential of egolessness.
This Conor Sen tweet captures something similar about the myopic visioning involved with a vanity project like the “metaverse”:
You could say the same about Elon Musk’s bombastic pronouncements about saving humankind with Twitter. They are clumsy celebrants of their own proposals for administering the lives of the little people — come live in our database! — but no one can say they aren’t heartfelt in their heartlessness.
Generative AI is no less a form of incipient control, no less a virtual reality, though the occlusion and passivity it dictates are somewhat more plausible when presented as the opposite. How an algorithmic system interprets a string of language visually can really be entertaining. Engineering prompts and refining the generators’ output can really feel active and creative. Rick’s keyboards really do bring alive the ebb and flow and depth of our mind’s eye.
But we may also find ourselves drowning. In a recent newsletter, Drew Austin pointed out that “the capacity of artificial intelligence to produce content at scale has so exceeded any meaningful threshold that there’s no point in updating the numbers.” The topography of the massive data sets is thoroughly mapped, and they can produce a bad infinity of content that no amount of human attention could ever suffice to process.
In some respects, this moves content beyond interpretation, though I can see the appeal of an approach like the one described here, which wants to read the limitations of particular data sets out of the anomalies that humans can spot in the images generated from them. But the multidimensional space of millions of parameters is too vast to master; our intentions in trying to navigate that space, to manipulate the “creative tool” that we want algorithmic generators to be, are ultimately thwarted by its unfathomable magnitude. Creativity is subordinated to training, to feeding algorithms data and prompts and appropriating the results and making myths about them. Content gets flattened into data; it’s significant only to the extent that it recalibrates the algorithm’s weights.
Austin suggests that the superfluity of content means its purpose is evolving: “Instead of an output — something to inform or entertain humans — content will increasingly be an input for our massive global culture machine, with AI distilling the existing archive into yet more content in an accelerating cycle.” Content isn’t brought into existence for our benefit; it’s a tool to refine how the system understands us. Eventually, as this idea is sometimes extrapolated, AIs will begin to train themselves primarily on their own content, and deliberate human input will be entirely abrogated. Another way of putting that is that AI content generation serves the project of limiting human activity to data generation. (This is experienced as the transition from consuming content to living one’s life as content.) From that perspective, humans would be no different from any other natural objects under machinic observation — “energy flowing through a neural network,” as Altman put it. “That’s it.”
Algorithmic generation is really no different than algorithmic recommendation. TikTok’s For You page should be understood as a kind of AI art that produces a picture of the user. It seems to attend to the nuances of one’s personality and can be experienced as an oracular revelation of it, but it also recasts us as objects, as data points, energy flowing in a network. Its endless generativity seems like it belongs to us — as though our uniqueness prompts and instigates it, as though we express ourselves through it — but we are better understood as the outputs and not the inputs of such systems. We are produced as another data set that can be manipulated on the systems’ terms.
The ideology of consumer sovereignty looms over much of the conventional discussion of TikTok; the billions of views are taken as proof that pre-existing consumer demands are being met, with less consideration of how those demands are themselves shaped through the repeated interaction with the app. (We thought we were training the models, but the models were training us.) So a passage like the following, from the recent Washington Post piece about virality’s downsides, ends up encoding a lot of assumptions and contradictions about what people want and how well they know themselves:
TikTok viewers said they love the app because its algorithmic recommendations predict their interests and take away the anxiety of choice. And the personality-driven format encourages authenticity and intimacy; many videos are designed as if the creator is speaking directly through their screen.
The rhetoric about “direct,” “intimate,” “authentic” speech elides the question of what those terms mean and what sort of work those concepts do with respect to the “anxiety of choice” and the capacity to have one’s interests predicted. In what sense are those interests authenticated by the immediacy or apparent emotional intensity of the content itself, including the forms of conflict the algorithm seems to prioritize? To what degree do we perceive “authenticity” in the way our anxiety is stoked and assuaged?
Though it is clear that TikTok's algorithm rewards content formats (like stitching and duets) that fuel conflict and harassment, the article at the same time asserts that “no one knows exactly what the algorithm rewards or punishes, leading many of them to regularly recalibrate what they talk about or how they behave in hopes of garnering its blessing.” It’s not that this claim is wrong; it’s more that algorithmic control depends on creating ignorance and competency at the same time: That’s how creators can believe that their deliberate recalibrations matter and are more than arbitrary guesses. Everyone knows what algorithms reward in general (clips that have proved their ability to hold attention), but they operate at a scale that introduces untamable randomness from an individual’s perspective, where dramatically different rewards can derive from more or less the same kinds of actions. Each individual TikTok is just a drop in its topographic ocean, but the platform will make waves out of any water.
Similar contradictions are embedded in this explanation of how algorithms produce those waves: “The app’s tools make it easy to post quick-reaction videos, and the algorithm then shows those videos to the people it expects are likely to have an emotional response,” the article notes. “And because TikTok dynamically creates new trends based on viewers’ behaviors, creators said hate campaigns can end up growing automatically.” If the algorithm creates the trends, does it make sense to attribute the resulting behavior to the viewers? Does TikTok “identify” viewers “likely to have an emotional response,” or does it produce them? There is nothing “automatic” about any of this; it’s what the system is designed to generate. There will always be waves. It may be experienced as hate from the point of view of certain individuals, but from the algorithmic point of view it is always an optimization.
Most commentary about social media is quick to acknowledge how platform incentives and algorithms reshape the behavior of creators, of people when they are thinking about posting. It’s presented as almost self-evident that incentives work on media producers. But consumption behavior is just as malleable, even if the levers being pulled are not necessarily “rational” or economistic. Platforms deliberately try to shape what is interesting or boring, what is “trending” and what is niche, what seems like content and what seems like advertising, what feels intense and what feels responsible, and so on. They shape the conditions of consumption to produce the sorts of consumers who can seem demonstrably more valuable to advertisers. That means getting them to “participate” or “interact” in ways that make them into data. But the combination of participation and consumption (a.k.a. “fan culture” or “mob behavior”) has proved inseparable from the sorts of “hate campaigns” that keep “automatically” happening. What advertisers want (datafied subjects who are easily and provably persuadable) also lends itself to what political propagandists can take advantage of, fomenting the creation of “enemies” and scapegoats and other forms of emotional reactivity that can sustain the contradiction of passive consumption and intense “engagement.”
In “The Masses: The Implosion of the Social in Media” (1985), Jean Baudrillard assesses the difficulties in reconciling individual will with media manipulation, challenging the assumption that “the masses” have somehow been alienated or tricked into a kind of surrender. Instead, media clarify the burden of freedom of choice and offer audiences the means to escape it. He writes:
Choice is a strange imperative. Any philosophy which assigns man to the exercise of his will can only plunge him in despair. For if nothing is more flattering to consciousness than to know what it wants, on the contrary nothing is more seductive to the other consciousness (the unconscious?) — the obscure and vital one which makes happiness depend on the despair of will — than not to know what it wants, to be relieved of choice and diverted from its own objective will. It is much better to rely on some insignificant or powerful instance than to be dependent on one’s own will or the necessity of choice. Beau Brummell had a servant for that purpose. Before a splendid landscape dotted with beautiful lakes, he turns toward his valet to ask him: “Which lake do I prefer?”
This discussion of choice seems to make even more sense now that algorithmic personalization is so widespread. The magic of TikTok’s For You page and algorithms like it is in how they seem to resolve the contradiction between activity and passivity, between wanting to choose and wanting to be catered to — wanting to have a will and wanting to surrender.
Under ubiquitous surveillance the “freedom to choose” is a compulsion to reveal yourself and to be better captured in the implication of your conscious desires. There is thus a kind of safety in having choices made for you: it flatters you in demonstrating that you are being seen and your tendencies are being noticed, but it also allows you the freedom and safety of disavowal. I can say that this algorithm — “they” — don’t really know me yet; I am still somehow free. But this freedom can end up taking the form of an ironic obedience: I jump in the lake I’m told to, confident I won’t really get wet.