My New Year’s resolution this year is not to wait until New Year’s to make resolutions. I resolve to be full of resolve; I intend to be very intentional. I’m not sure if such meta-intentionality is self-cancelling or self-diluting; perhaps one has only a limited amount of willful energy for what cognitive scientists call “executive function” and too much self-consciousness about setting goals leads to accomplishing none of them, unless one counts the goal of establishing goals as a goal achieved.
But if one could deplete one’s executive function by reflecting on it, what would be the point of having it? One would be better off — at least in terms of getting things done — if one were guided by impulse or accepted without question the goals assigned by external forces and trusted those forces to keep lining up tasks like so many dominoes to fall in some inevitably proper order.
It would be better for you if your “true intention” could be read from your outward behavior and reported to you so that you could invest all your energy into pursuing that end rather than conceiving it. Otherwise you may fall into the recursive trap of coming up with a desire, and then a desire to have that desire, and then a desire to desire that desire to desire, and so on until will resolves itself as paralysis.
This paper by Yaqub Chaudhary and Jonnie Penn posits that data about our intentions don’t necessarily have anything to do with our consciousness: We don’t intentionally produce behavioral data but our behavioral data can be analyzed to produce our intentions. “Intention, whatever it is at its core, is amenable to computation and can be operationalized as such,” they argue. This leads to what they describe as the successor to the attention economy: the “intention economy”:
We characterize it in two ways. First, as a competition, initially, between established tech players armed with the infrastructural and data capacities needed to vie for first-mover advantage on a new frontier of persuasive technologies. Second, as a commodification of hitherto unreachable levels of explicit and implicit data that signal intent, namely those signals borne of combining (a) hyper-personalized manipulation via LLM-based sycophancy, ingratiation, and emotional infiltration and (b) increasingly detailed categorization of online activity elicited through natural language.
The “data that signal intent” are more like poker tells than expressions of interest; they betray a person and render them exploitable to “hyper-personalized manipulation” and “emotional infiltration” — a euphemism for coercion that I hadn’t seen before.
Much of the “intention economy,” as the authors detail it, consists of the familiar strategy of using predictive analytics to target ads; the new wrinkle is the assumption that generative AI can use that intention data to issue irresistible commands. “In an intention economy, an LLM could, at low cost, leverage a user’s cadence, politics, vocabulary, age, gender, preferences for sycophancy, and so on, in concert with brokered bids, to maximize the likelihood of achieving a given aim,” the authors suggest. That aim, of course, is not the user’s but that of whatever company is deploying the LLM. The user is meant to be liberated from having aims and to experience only fulfillment after fulfillment, as though the feeling of fulfillment didn’t require any preceding experience of lack. Perhaps this is fulfillingness’ first finale.
The tech industry wants to sell this as anticipating rather than pre-empting a user’s intentions. Chaudhary and Penn quote Nvidia CEO Jensen Huang, who claims that in the future, “every single application, every single database, whenever you interact with a computer, you will likely be first engaging a large language model. That large language model will figure out what is your intention, what is your desire, what are you trying to do, given the context, and present the information to you in the best possible way.” Perhaps the ambition articulated here is that the LLM will transform a user’s pre-formed but inarticulate intention into terms that a computer can act on, making using a computer feel like commanding a genie or something.
It could become something like what Will Whitney explains here, where models don’t work as agents doing things for you, to the point of even desiring them for you, but instead generate better interfaces for you to work through, refining and extending your intentions. “Instead of acting like a person, the model will act like a computer,” and it will “generate something which resembles the interface of a modern application: buttons, sliders, tabs, images, plots, and all the rest.” Interacting with chatbots, Whitney argues, foregrounds the difficulty of communication through words: “With the overhead of communicating, model-as-person systems are most helpful when they can do an entire block of work on their own. They do things for you. This stands in contrast to how we interact with computers or other tools. Tools produce visual feedback in real time and are controlled through direct manipulation.”
But another way of reading Huang’s prediction is that it anticipates users who come to machines the way some already come to social media feeds, without clear intentions and willing to be told what they want, to be shown what is being done for or to them, “in the best possible way” for their own good. The “overhead of communicating” will work to encourage users into passivity, into letting the model interpret things as it will and being content with whatever it provides, as though it automatically knows best. The aim of media and technology would be to produce that kind of dependent and vulnerable user, who has little social support to draw from (who needs friends when there are chatbots?) and finds themselves adrift in a listless torpor, incapable of proceeding in any particular direction until goaded by stimuli toward ends that someone else can squeeze profit from.
For examples of what this might look like, consider the bots being used to simulate OnlyFans models, described in this Vice article: “The AI’s ability to scan for inactive users and automatically initiate conversations has led to impressive results.” In the “intention economy,” it would not only be the “inactive user” that’s targeted but the one whose data signals peak susceptibility to a bot’s come-ons. It would not only address existing OnlyFans customers but work to extend the reach of OnlyFans as a paid substitute for sociality, intimacy, etc. Meta’s announced commitment to flooding its platforms with simulated users works similarly, making human contact seem remote and unverifiable, more like “overhead” that precedes fulfillment than the underlying goal of any intention.
LLM-generated sermons, described here, could work this way as well, targeting spiritually susceptible people with messaging adapted to occasions where they are most vulnerable. The religious leaders cited in that article are by and large skeptical:
“Our job is not just to put pretty sentences together,” Rabbi Hayon said. “It’s to hopefully write something that’s lyrical and moving and articulate, but also responds to the uniquely human hungers and pains and losses that we’re aware of because we are in human communities with other people.” He added, “It can’t be automated.”
But isn’t he really saying that sermons respond to data that signal intent (“human hungers and pains and losses that we’re aware of”) with timely and articulate-seeming responses? They have a plan for your life. If machines can become aware of those human hungers and pains and losses without being in human communities with other people, they can help with the project of dislocating those needs from those communities and automating their exploitation. People can be programmed into contentment if they see their needs not in terms of belonging and contributing to social life but in terms of consuming a certain kind of content and being able to follow a certain script that always leads to an evaporative moment of satisfaction.
Every automated sermon (like every conversation with a chatbot) would suggest that reciprocal attention (or social being, or community participation) is not pertinent to satisfying a person’s intentions, which can be much more conveniently addressed if that person more thoroughly isolates themselves. Everything but human connection can be automated, so the prophets of automation have every incentive to denigrate it. It only makes sense that their “new frontier of persuasive technologies” would be put primarily toward that end.
It’s interesting that LLM engineers are trying to reverse the search engine. The advantage of Google, at least before SEO nonsense, was how easy it was to find things online with just a few words. Now they are trying to reintroduce the confusion and tedium of conversing (with a non-intelligent machine, no less) at a time when most people’s articulation and rhetoric have atrophied.
It's an interesting thought experiment, this "intention economy," but I must say I find it hardly believable. An unavoidable aspect of a human being is agency, which is another way of saying we all have our own intentions, which have a tendency to win out in the end. Yes, perhaps there are some mentally ill people who go on social media and are "willing to be told what they want, to be shown what is being done for or to them, 'in the best possible way' for their own good," and maybe tech advertisers have a desire for some sort of ability to simply pull a lever and turn regular people into zombies who can be ordered around with manufactured intent. But to portray LLMs as having the ability to induce this 'zombification of intent,' so to speak, is simply a fantasy. Maybe LLMs are good at convincing users they are being given facts, but people do seem to be figuring out that a result from an LLM has a good chance of being a "hallucination" instead.
Maybe the framing of "intent economy" is more useful from a critical perspective anyway. What is the intent of the researchers who published this study, for instance? It's common knowledge there are literally hundreds of millions of dollars sloshing around, looking for a place to germinate. I mean, why else would such a far-fetched bit of speculative musing be given a second thought beyond some seventies-era sci-fi dime novel? With that kind of funding washing through such a struggling technology, why not go for gold where there is gold to be had?