Periodically I’ll come across someone saying how they learn so much on TikTok or get so much out of it, and I’ll feel obliged to yet again make an effort to use it as I imagine a typical person does. I open it up and try to start watching with a Chauncey Gardiner–like blankness, ready to go willingly wherever its algorithm wants to take me. But impatience sets in almost immediately. I begin to swipe and swipe, not necessarily because I’m bored with a particular video or disturbed by the jarring music and stilted voices (which are very disturbing to me) but because swiping is more engaging than not swiping. I begin to feel like I am wrestling with the algorithm for the remote control, and I’m not sure if I am winning or losing.
Part of me is deeply resistant to the idea of being figured out by the platform, so I get more entertainment out of thinking that I am keeping it guessing than I do from any of the videos themselves. For me their content is entirely overshadowed by the algorithm that circulates them. By swiping constantly, I can pretend that my essential nature can’t be characterized positively in terms of some specific kinds of content; instead I am pure negation. Not this. Not this. Not this again.
Perhaps it’s generally the case that TikTok gratifies a desire to make changing the channel into the program itself. In this piece for the Knight Institute, Arvind Narayanan makes the case that TikTok’s success is less a matter of oracular predictive algorithms (which are no different, after all, from those used by other platforms) than of interface design that maximizes the experiential potential of algorithmic feeds. Narayanan argues that TikTok’s default vertical video (originally a Snapchat innovation) and the one-handed swiping (Tinder’s hallmark) it permits make it easier to misrecognize the nature and power of the algorithm’s intervention.
On YouTube, every time you select a video but then decide you don’t want to watch it, it’s an annoying process of scrolling to find another one. On TikTok, swiping up is so quick that you don’t consciously notice. So even if YouTube’s and TikTok’s algorithms are equally accurate, it will feel much more accurate on TikTok.
That passage seems slightly off, though, in that you rarely “select” videos on TikTok so much as settle on them: They are configured as dating partners whose specific combination of desirable attributes can’t be known in advance but must instead be pursued, discovered. This is different, say, from typing Being There into a search box and clicking play.
At the same time, in Narayanan’s view, the minimal exercise of agency (swiping) works as a form of auto-persuasion: The more you reject the algorithm, the more it seems right when you don’t. The perfect match becomes a numbers game, and its “perfection” consists of the cumulative amount of rejection that preceded it and not some special quality internal to the thing itself. Perfection ultimately derives from a process of continual engagement.
Because your agency on TikTok appears limited to saying no, you are freed up to enjoy any kind of content without feeling responsible for wanting to see it. You can drift along in a perpetual state of disavowal. “Eliminating conscious decision-making from the user experience means that videos that cater to our basest impulses do relatively well on TikTok, because people will watch these videos if they show up in their feed but won’t explicitly click on them,” Narayanan writes.
The conditions of consumption on TikTok, then, are that what you see both does and doesn’t reflect you; in a sense you are both onscreen and not. You hold the transmuted image of yourself at a distance that you control precisely by not controlling it. You can swipe whenever you want, except maybe when you are feeling something. Agency and pleasure are in tension, their relation to each other made ambiguous. Over extended periods of use, TikTok (like television in general) acclimates users to this kind of ambiguity, the idea that passivity is a form of agency and a protection against being manipulated rather than a precondition or a consequence of it.
Narayanan also cites as an aspect of TikTok’s successful design the disposability and interchangeability of creators. By choosing content for users from a vast pool (and discouraging them from choosing for themselves), TikTok neutralizes the leverage of individual creators.
The de-emphasis of subscriptions means that there are fewer superstars, and fewer parasocial relationships. This, in turn, has kept creators from getting too powerful or quite as invested: TikTok pays them a pittance, and didn’t pay at all until 2020 … What TikTok lacks in superstars it more than makes up for in its “long tail” of creators.
TikTok garners this surplus of creators, Narayanan suggests, not just by suppressing the emergence of influencers but also by offering tools that make content creation easy. In essence, the tools themselves are the creators, and the people using them are arbitrary inputs that unleash the tools in various ways. People watch not to see some specific person or their ideas but to see how the tools have been brought to life.
Something similar occurs when people use generative AI tools. The tools place themselves in the foreground, and users become “prompt engineers” rather than writers or artists. The models themselves become the influencers, the celebrities, even if they are transforming what you bring to them into what their algorithms make of it, as in the case of the app Lensa, which uses a person’s selfies to produce fantasized images of them — not the user’s own fantasies, but some grim reflection of society’s fantasies of who they should be.
Lensa’s output can be understood as a visual manifestation of what it means to use TikTok: It displays us as an algorithmic product. Posting such images to social media platforms advertises our willingness to become an algorithmic product, to submit. The volume of these posts normalizes that kind of surrender, just as all the posts of ChatGPT logs and DALL-E creations do.
David Golumbia takes this line of thinking to its logical conclusion, declaring that “ChatGPT Should Not Exist,” since it and all the other generative models are “built on very dark and destructive ideas about what human beings, creativity, and meaning are.” Such tools, he suggests, aim not only to replace but to abolish human creative capacity. “The point of these projects is to produce nihilism and despair about what humans do and can do.” They invite us to happily regard ourselves as no more than “stochastic parrots,” as OpenAI CEO Sam Altman mockingly commented in a tweet that Golumbia cites.
“Stochastic parrot,” as Golumbia explains, is a reference to this paper by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and “Shmargaret Shmitchell,” which emphasizes that a large language model is incapable of creating meaning because it “is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind.” Altman tries to rebut this by suggesting that text generated by humans is equally ungrounded, ultimately devoid of intent, contextual awareness, or reciprocity. We only ingest language and expel it according to the sense of statistical probabilities we have retained through our engagement with various forms of sensory data. That is, we are all a bunch of Chauncey Gardiners passing through the world absorbing information without comprehending any of it and outputting strings of text onto which other people simply project their own meanings.
The companies building generative models like to claim that they are democratizing creativity and giving artistic “superpowers” to the apparently semi-literate and maladroit masses — much as TikTok’s suite of tools “democratizes” content creation on its platform and allows otherwise boring people to seemingly keep pace with influencers and celebrities. But the easier the tools are to use, or the more powerful the generative models are, the more they may reduce people to mere users and consumers. They train us to enjoy not having intentionality, or to enjoy “intentionality” only as an abstraction: flipping channels randomly to see what’s on, or feeding language to a machine just to see what it can be made to spit out. We can show off our identity as something the algorithm makes of us, as though that is the best we can do.
Of course, people can use editing suites and generative models with intent and they can be deployed to help realize some creative vision. But if those tools are situated within platforms and interfaces that trivialize or empty out agency and reciprocity, emphasizing spectatorship and performative obedience, they can have the opposite effect. If flipping channels is making you feel powerful, you might need to turn the television off.