Sampler
I feel like I should point out straight away that this is not going to be a coherent unified essay but rather a rundown of some articles and whatnot that caught my attention over the past few weeks. I hope my failure to synthesize the manifold is not too disappointing, but I haven’t had much conviction in my own ideas lately.
1) AI “poisoning”
This post by Baldur Bjarnason offers a look at what SEO practices may look like in the context of LLMs and diffusion models: “If you can get an AI vendor to include a few tailored toxic entries—you don’t seem to need that many, even for a large model—the attacker can affect outcomes generated by the system as a whole.” This is often described as “poisoning” (Bjarnason provides a raft of links that do so), but of course it also sounds like a business model. The “poisoning” attacks, Bjarnason notes, “seem to be able to target specific keywords for manipulation. That manipulation can be a change in sentiment (always positive or always negative), meaning (forced mistranslations), or quality (degraded output for that keyword).” This would produce an “irresistible” incentive for spammers to try to attack models, but it would also be an irresistible source of income for tech companies wanting to monetize their chat clients. This would work for as long as consumers were duped by interfaces and the general climate of AI awe into thinking that LLMs’ output is somehow objective rather than opaquely biased in ways their operators can’t or won’t detail.
2) Kinetic food
In a Grub Street article, Ezra Marcus makes the case that TikTok’s popularity is pressuring restaurants into serving food that looks more interesting in videos, just as Instagram once drove a push toward photogenic foods. “Wherever the influencers land, once they are in a dining room, they’ll need something to record. The more action that can happen in front of them, the better,” Marcus writes. In a newsletter, Drew Austin describes this as “kinetic” food — it involves table-side preparations and the gooey, stretchy foods that are already staples of Chili’s and Olive Garden TV ads — and links it to a trend toward “maximalism” that repudiates Instagram-driven minimalism.
I’m no gourmand, but it strikes me as beside the point to look to influencers for advice on food or on anything, really, other than how to participate in something “viral.” Many people seem to experience great pleasure and comfort in doing what is demonstrably popular — walking around New York City continually reveals the great joy people have in waiting in lines for such mundane rewards as a cookie or a pizza slice — and there is undoubtedly a viral knock-on effect from posting about one’s personal experience of something that is already widely recognized. Austin suggests that influencers will play a key role in sustaining the viability of cities as they “transition from places of production to places of pure consumption.”
Even so, it seems like a good rule of thumb to personally avoid anything that attracts influencers, if you don’t have your own professional reasons for sustaining media attention for your various accounts. It may be that fewer and fewer people have the luxury to neglect their own virality. But in the end, nothing tastes better than dignity.
3) Ending the “representation war”
Brad Troemel, an artist whose medium is essentially internet trolling, posted this thread on Twitter about how generative AI at the point of sale could “end the representation war” by making any product reflect the identity of every individual consumer. “We already possess the AI technology to customize every advertisement, product, book, film, and TV show so that everyone could permanently experience the satisfaction of inclusive media representation. Why aren’t we doing this?” We already have algorithmic feeds that do this, so why not apply that logic to everything?
His point, I’m assuming, is that “we” don’t do this because no one actually derives personal satisfaction from “inclusive media representation” in and of itself; rather the signaling value of consumption involves making other people consume you consuming. (This is the point of filming yourself at a restaurant eating goop, as noted above. Maybe I could have synthesized this material after all.) Consuming a personalized algorithmic feed is a way of consuming yourself as an interesting set of interests, but consuming other sorts of goods is an outward display of habitus, affiliation, in-group status, trend knowledge, taste, politics, and so on: The point is to communicate something about yourself to someone else, with the value of the consumer good serving as a proxy for the depth of your commitment to the message.
If consumer goods simultaneously appeared as different to everyone looking at them, then they would have no signaling function and would become indifferent commodities. “Generative AI” would not have the potential to generate meaning but only to negate it; every product would stand for everything and nothing. The same goes for “augmented reality”: By nullifying the social communication implicit in a shared experience of space, it doesn’t augment the point of being in public space but diminishes much of it. Individualized information, whether it comes through “smart headsets” or “AI” chatbots, is fundamentally isolating; what is increasingly scarce is the feeling of having someone else’s reciprocal attention.
4) The ideological work of “sharing”
Ali Griswold, a reporter who focuses on “the sharing economy,” recently wrote this concise post about the trajectory of the word sharing. Early on, peer-to-peer enthusiasts found ways to use new technologies to try to extend the ethos of file sharing to other kinds of goods and services. Then profit-oriented startups moved in, appropriating the term sharing to describe their predatory practice of ignoring regulation, gaming two-sided markets, and undercutting established labor standards, forcing new modes of self-exploitation on entire categories of workers.
“It’s been a long time since ‘sharing’ meant sharing,” Griswold writes. “Silicon Valley redefined sharing to mean something like ‘using a technology platform to get more use out of something you already have’ … What sharing has actually meant in practice is the formalization of services through trust credentials and risk management.” The only thing “shared,” Griswold suggests, “is liability” — companies like Airbnb and Uber “set up insurance to protect drivers and homeowners from the potential liability involved in offering rides and renting out homes.” But even that seems too charitable a characterization for a business model mainly built on classifying workers as independent contractors and shifting more and more of the risk of doing business onto them: They profit by standardizing goods and services and making the individuals providing them interchangeable and disposable, only personalized to the degree they can be held accountable and personally suffer consequences the company would be exempted from.
5) Machine-assisted novel writing
Adi Robertson describes writing a novella over a weekend using the LLM-driven tool Sudowrite’s “Story Engine”: “You start a chapter by either writing or generating ‘story beats,’ which are a step-by-step guide to everything that happens in the chapter. You then hit a button to turn the beats into prose, chunk by chunk, and send the result to Sudowrite’s main text editor.” Why not just call the beats the story and skip the part where a text generator bloats it out? Is it the same principle that governs podcasting, where the point is inefficient expression to kill time or fill space?
Also, these sorts of tools appear to assume that the writing part of writing is just an inconvenient obstacle to getting your ideas out there rather than the whole point of the activity. As you can probably tell, I rarely have an idea of what I want to say until I’m in the process of thinking it out through writing; I don’t start with a bulletpoint list of things to say that I then arduously transform into “writing.” I would just post the list, like I am basically doing here.
Sudowrite seems aimed at people who are under some compulsion to “have written a novel” but don’t enjoy the struggle of writing for its own sake, for how one can get absorbed in the different problems of expression it presents and the frustrations it occasions when inarticulacy reveals the limits of one’s thinking. In Robertson’s account, it seems as though Sudowrite is effective at producing the rote parts of formulaic genre fiction, checking the necessary conventional boxes. It makes me wonder why you wouldn’t aspire to write something with no rote elements; you could use AI tools to eliminate the clichés it wants to reproduce rather than furnish more of them.
Robertson sums up the experience this way:
If early AI writing felt like guiding a very well-read toddler into telling stories, Story Engine feels sort of like building a video game prototype. You start with an idea and try to get a computer to execute it. The results probably aren’t quite what you expect, but through trial and error, you can lean into what the system does well and find the fun.
That seems right to me. Writing with such a tool seems more like playing a game, one you beat by coaxing sparks of your own curiosity out of the system. No matter how much text a system generates, it is only meaningful to the degree that someone bothers to read it.
6) “Anthropomorphism in Dialogue Systems”
This research paper investigates “the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise, arguing that it can reinforce stereotypes of gender roles and notions of acceptable language.” Often the tendency is to blame people’s psychology for anthropomorphizing chatbots and the like, positing the temptation to project human qualities on any interlocutor as irresistible. Likewise, if you paste googly eyes on something, it seems more alive.
But this paper notes that “not all systems are equally humanlike,” and lists the sorts of things that make chatbots more susceptible to being anthropomorphized: giving them names, assigning them pronouns, letting them claim agency over their output, allowing them to express doubt or to use conversational pleasantries, and so on. These are typically intentional design choices, because the designers are aiming for anthropomorphism: It renders users more vulnerable to manipulation by their product. The researchers point out that “trust placed in systems grows as they become more humanlike, whether or not the trust is warranted,” as though this will discourage rather than encourage AI makers to do this. The risks this paper lists (increasing user dependence, reinforcing stereotypes) come across as business opportunities given existing economic conditions, making the first half of the paper a how-to tutorial.
At Crooked Timber, Kevin Munger argues that governments should ban LLMs from using first-person pronouns. Since we have not responded to a technology “that straightforwardly cheapens our humanity” with repudiation and “disgust” (proving that “we are greedier and lonelier than we are dignified”), the state must step in to prevent subjects from “being emotionally defrauded by overestimating the amount of human intentionality encoded” in chatbot text. This would protect the sanctity of writing, which Munger asserts lies at the “basis of the liberal/democratic stack that structures our world.”
This probably couldn’t hurt, but it also depends on the state actually having an investment in preventing emotional defrauding rather than functioning on its basis.