Odds and ends
This is another round-up post of various notes and fragments, prefaced again with apologies for not having written something more substantial.
“It’s hard to imagine that nothing at all could be so exciting,” according to the Talking Heads song “Heaven,” but what if “nothing at all” could be made into a lozenge? This New York Times article reports on a Japanese candy that “had no taste by design.” According to its manufacturer, the article reports, it was “developed for people who wanted to moisten mouths that had gone dry from all-day mask wearing but without a sugar rush.” But it then became popular in its own right among people seeking to savor the presence of absence, or any number of other existential paradoxes. One woman told the Times that she bought the product “maybe to put myself into a state of ‘nothingness.’” That sounds like a candy k-hole.
The article cites a “top editor at a confectionery industry publication” who makes the peculiar claim that “young people don’t want to keep food in their mouths for a long time.” I don’t doubt that is true, but I’m not sure older people enjoy keeping food in their mouths for a long time either. It also seems strange to think of a lozenge as “food.” The product manager’s description seems more apt: a “marble in your mouth that gradually melts and disappears.” I suppose it could be considered “slow food.” Experiencing the flavorless candy could even be considered a kind of endurance art for beginners.
One might conclude from the candy’s popularity that tastelessness has the strongest flavor, much as the sound of silence can be deafening. One could perhaps think of it as the ultimate flavor to end all flavors, the white-light taste that comprises all the other flavors in the spectrum. Or the appeal may be in an apprehension of “mouthfeel” unimpeded by the imposition of any specific flavor, much as Kant claims that beauty is a feeling of pleasure unconnected to conceptualization. The tongue and the taste buds are in a state of free play, allowing us to savor the transcendental conditions of “tasting” without any contingent chewing. No determinate concept restricts the flavorless candy to a particular rule of cognition, so it can harmonize our masticatory powers under the aspect of universality. In tasting nothing, we know precisely what we all enjoy in common.
In an essay for Harper’s, Justin E.H. Smith laments what he perceives as the mounting insignificance of his (and my) generation.
I was well into my forties, and dimly aware that there were by now a few billion people in the world leading full lives of their own, who would consider anything I had to say irrelevant simply by virtue of the fact that it was coming from an “old” person. And yet I was still stubbornly churning out thoughts as if they had some absolute meaning independent of the age and the perceived generational affiliation of the person they were coming from. I had not yet fully admitted to myself that the world belonged to young people now—who plainly did not belong to my universe of values and did not share my points of reference—and that from here on out my presence was, at best, to be tolerated.
I can definitely relate to this, though it makes me think that if someone with a job, books published by prominent university presses, and essays in widely recognized national magazines feels this way, I should probably feel a lot worse about the thoughts I am churning out — and I already feel pretty superfluous most of the time.
I also recognize something generational in the way people my age tried to be “alternative” as a way of coping with class anxiety: “Musical cultivation in this context,” Smith writes, “was a sort of currency by which one might hope to maneuver into an imagined aristocracy through seeking out the most obscure representatives of the narrowest genre niches.” Smith claims that he and his friends went so far as to make “a big show of listening to nothing but radio static for weeks at a time in order to cleanse ourselves” — reminiscent of this classic Scharpling and Wurster bit where a music snob talks about listening to “air.” (“Have you ever listened to 45 minutes of just silence? No? Oh my god, then you haven’t lived.”)
Smith characterizes the Gen X disposition as “doing our best to preserve postwar youth culture … against the rising force that would, soon enough, cast us into whatever came next: the world whose most important narratives are shaped by algorithms, and in which the horror of selling out no longer has any purchase at all, since the ideal of authenticity has been switched out for the hope of virality.” That seems basically accurate too.
But I don’t really relate to where the essay ends up, with a series of gripes about authoritarian cancel culture and the supposed betrayal of art’s autonomy and so on. It all sounds similar to what cranky academics said about “political correctness” in the 1990s, and much of it comes across as opting out of responsibility for trying to improve the conditions of the world. Human nature is posited as universally terrible, and the best we can do is be “honest about what we are as human beings,” a profoundly convenient credo for preserving the status quo. “I acknowledge that I am feeling defeated,” Smith concludes, “and it is a symptom of this defeat that I have withdrawn to live in the past, like old man Crumb with his vinyl 78s.”
I know that I frequently come across as a dyspeptic misanthrope with retrograde musical tastes, doom-mongering about the persistence of capitalism and its warped social relations, but please, please, don’t ever leave me to my vinyl 78s.
The consistently excellent Ali Griswold, who writes a newsletter about the “sharing economy,” wrote a post about the disappearance of surge pricing into a perpetual implementation of price discrimination.
The most significant change to surge … was the shift, led by Uber and followed by Lyft, from visible to invisible surge, or what Uber rebranded as “upfront pricing.” ... Uber frames upfront fares as simpler for riders: the price you see is the price you get. But in many ways upfront fares are anything but. The prices are still dynamic, but without any context. Instead of being told that demand is high and your fare will be too, you now have no sense of what a “normal” versus high fare even is. You simply get a price from the algorithm and have to decide if it’s one you’re willing to pay.
In short, there is no “surge pricing” because fares now depend on who you are, not on the traffic conditions affecting everyone. Whatever data these companies have about you will be used to isolate your situation — everything is an exception; there is no “normal” condition or shared sense of fair value — and then deceive or manipulate you into paying more.
It is a good example of how ambient data collection is less about “delivering consumer relevance” and more about “eliminating consumer surplus.” The dream is to charge everyone the maximum amount they will pay for something, and to use whatever leverage the company can garner over the point of sale to drive that number up. The result is “upfront” prices that are anything but, or the endlessly deceptive Amazon interface, rife with one-click ripoffs and cascades of misleading information on top of the discriminatory pricing.
Ride-hailing apps have the advantage of controlling a two-sided market: “The dominant ride-hail pricing model now is this: companies like Uber and Lyft charge one price to the rider, based on what they think that rider might pay, and offer another rate to the driver, based on what they think that driver might accept.” The companies get to combine price discrimination with wage discrimination to produce their take from every transaction — information capitalism at its finest.
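The mechanics Griswold describes can be sketched as simple arithmetic. This is a toy illustration with hypothetical numbers and made-up multipliers, not anything drawn from how Uber or Lyft actually compute fares: the platform prices each side of the market against its own estimated threshold, and the take is whatever gap that opens up.

```python
# Toy sketch of two-sided price/wage discrimination (hypothetical numbers).
# The platform prices the rider just under what it thinks they will pay,
# and the driver just over what it thinks they will accept; neither party
# ever sees the other side's number.

def platform_take(rider_willingness_to_pay: float,
                  driver_reservation_rate: float) -> dict:
    """Price each side against its own estimated threshold."""
    rider_price = 0.95 * rider_willingness_to_pay  # just under the rider's ceiling
    driver_rate = 1.05 * driver_reservation_rate   # just over the driver's floor
    return {
        "rider_price": round(rider_price, 2),
        "driver_rate": round(driver_rate, 2),
        "take": round(rider_price - driver_rate, 2),
    }

# Two riders on the same route with the same driver: the fare varies with
# the rider's estimated willingness to pay, the driver's rate does not,
# and the difference accrues entirely to the platform.
print(platform_take(30.00, 15.00))  # take: 12.75
print(platform_take(45.00, 15.00))  # take: 27.0
```

The point of the sketch is that the take is decoupled from any “normal” fare: it is not a fixed commission on a shared price but the residue of two separate, personalized estimates.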
David A. Banks wrote a good piece for Real Life about what happens when the logic of price discrimination is extended to all of everyday life, a condition he calls the “subscriber city”:
It would be able to wall off parts of the city on the fly without changing the physical landscape. Individuals would be unable to predict the behavior of doors, queues, and prices, as these would be subject to the whims of platform owners. One could be anywhere and suddenly find oneself outside looking in.
Ways to be wicked
OpenAI has recently been hyping the ability of GPT-4 to assist in content moderation — offering to help “to tackle a problem that its own technology has exacerbated,” as this Verge piece put it. Acknowledging that “content moderation demands meticulous effort, sensitivity, a profound understanding of context, as well as quick adaptation to new use cases,” the company nonetheless proposes its tool, which has no “understanding of context” because it doesn’t have understanding of anything. It is not capable of cognition or adaptation in response to the social and rhetorical deployment of signs and symbols.
It would of course be beneficial if humans were not subject to the rigors and abuses of content moderation work, but machines cannot eliminate the fundamental need for human judgment in performing it. The nature of what needs to be moderated is not fixed; it changes in response to the moderation policies put in place. If you accelerate the implementation of new rules as OpenAI proposes — using LLMs to make a “faster feedback loop” — you also accelerate the generation of new kinds of outrage, expressed in ever more elaborate ways. If GPT is moderating its own content, as seems increasingly likely, it will be like assembling a giant generative adversarial network for the production of more offensive content, not less. OpenAI can waste an even larger proportion of the world’s energy supply on ever more elaborate efforts to silence itself.
In Custodians of the Internet, a 2018 book about content moderation, Tarleton Gillespie details how social media companies have tried “to solve problems of interpretation and social conflict with computational methods that require treating them like data” — as though conflict didn’t imply dynamism, shifting meanings and distributions of power. As Gillespie notes, “the fluidity of culture, complexity of language, and adaptability of violators looking to avoid detection” make it impossible for a model to anticipate all the kinds of content a company might want to suppress. The data available is always looking backward.
The detection tool being trained will inevitably approximate the kinds of judgments already made, the kinds of judgments that can be made. This means that it will probably be less prepared for novel instances that were not well represented in the training data, and it will probably carry forward any biases or assumptions that animated those prior judgments.
Although automated tools are often touted as removing subjective bias, in practice they “detach human judgment from the encounter with the specific user, interaction, or content and shift it to the analysis of predictive patterns and categories for what counts as a violation, what counts as harm, and what counts as an exception.” They allow humans to disavow responsibility for the resulting decisions, as in the recent case of an Iowa school board using ChatGPT to hamfistedly select which books to remove from libraries. As Natasha Lennard succinctly pointed out, “Unremarkable algorithmic systems have long been used to carry out the plans of the power structures deploying them. AI is not banning books. Republicans are.”
I wrote a piece this week for Overland about Emily Hund’s book The Influencer Industry, which explores how “authenticity” is manufactured industrially. I think that’s true, but it seems like the role of the influencer, or the cultural significance of authenticity, is changing a bit. It used to be that authentic meant “not just an effort to sell you something”; I suspect that with the rise of generated content and more automated labor, it will denote “the aspects of work that machines can’t do.” This means influencers will be showing us specifically how to add promotional labor to anything. After all, machines don’t care any more about selling out than non–Gen Xers do.