I am imagining a scenario in the near future when I will be working on writing something in some productivity suite or other, and as I type in the main document, my words will also appear in a smaller window to the side, wherein a large language model completes several more paragraphs of whatever I am trying to write for me, well before I have the chance to conceive of it. In every moment in which I pause to gather my thoughts and think about what I am trying to say, the AI assistant will be thinking for me, showing me what it calculates to be what I should be saying, and I’ll have the option to just start fine-tuning its settings to adjust its output based on what audience I am addressing or what mood I’m trying to evoke. If I am grasping for ideas, it will supply some. Maybe I will work deliberately to reject them, to come up with something different. Maybe I will use its output as a gauge of exactly what I must not say, in which case it is still dictating what I say to a degree. Or maybe I’ll just import its language into my main document and tinker with it slightly, taking some kind of ownership over it, adapting my thinking to accommodate its ideas so that I can pretend to myself I would have eventually thought them too. I am wondering what I will have to pay to get that window, or worse, what I’ll have to pay to make it disappear.
In the documentation that OpenAI released this week alongside GPT-4, the latest iteration of its text generator, there is a section devoted to “observed safety challenges.” Emily Bender points out here that the company’s authors “are writing from deep down inside their xrisk/longtermist/’AI safety’ rabbit hole,” inflating the impression of the model’s potential doomsday capabilities while ignoring the far more pertinent risks with respect to biases and trespasses in the data sets and the environmental impact of the computation involved.
As many commentators have noted, OpenAI is no longer willing to disclose information about the model’s size or the data used to train it, citing “the competitive landscape and the safety implications of large-scale models” as an explanation, though it is not clear whose safety the company is concerned about. It’s best to understand those two concerns as referring to the same thing: OpenAI’s business interests are precisely in obfuscating the current safety implications of what they are doing, so of course they can’t discuss the details of their latest model. As Bender puts it, “without info about data, model architecture & training set up, we aren't positioned to reason about how the model produces the results that it does ... and thus more likely to believe claims of ‘AGI’ and thus buy what they're selling.”
This applies to all of OpenAI’s “observed safety challenges,” which, given the company’s broader mandate of secrecy, may be better understood as the expected benefits of its incipient business model. I am thinking particularly of the section on “overreliance,” which the paper’s authors define as what “occurs when users excessively trust and depend on the model, potentially leading to unnoticed mistakes and inadequate oversight.” This is another warning that functions as advertisement: The model’s output will seem so convincing that you’ll want to believe!
Overreliance as a business model would rest on this subsequent speculation:
As users become more comfortable with the system, dependency on the model may hinder the development of new skills or even lead to the loss of important skills ... As mistakes become harder for the average human user to detect and general trust in the model grows, users are less likely to challenge or verify the model’s responses.
And wouldn’t that be great? This sounds like something you could pull from Harry Braverman’s account of deskilling in Labor and Monopoly Capital, outlining a goal you would pursue if you were trying to undermine labor bargaining power and make everyone more dependent on skills that have been encoded into proprietary machines. GPT-4 could be packaged as an all-purpose deskiller, leveling a range of workers down to the status of “prompt engineer” or quality inspector.
Part of what OpenAI is selling is its ability to foster “general trust” in its product, despite its obvious flaws. In part, that means promoting a kind of epistemological indifference: No one need be too concerned with a model’s inner workings or its method of reasoning, because the world can be reshaped around good-enough, plausible-seeming results. Reasoning, from this vantage, is vestigial, a primitive and highly subjective form of conceptualization that has been superseded by statistics. What matters is output, surface performance, not the means of achieving them. Implicit in this attitude is the insidious Big Data-hype-era claim that theoretical explanation is superfluous. OpenAI makes much of its model passing standardized tests, crowning an idea of aptitude in which theory is entirely divorced from practice.
Using an AI model is supposed to serve as propaganda for itself: Its efficiency is meant to silence any doubts. It aims to make it seem socially disadvantageous to try to understand the ins and outs of a particular cognitive process and master it for oneself, to convince us that “working smarter” is a matter of conditioning ourselves to progressive ignorance. I used these clever prompts to get ChatGPT to think and act for me! It wants to incrementally bring us to the conclusion that “overreliance” is actually convenience, the classic affective alibi for all forms of imposed automation: Why would you want to bother with the effort of thinking? Where is the edge in that? Why struggle internally with how to express yourself when you can instantly produce results? Why struggle to find new kinds of consensus with other people when all the collaboration we need is already built into and guaranteed by the model? What’s more robotic than doing what society tells you to do and being part of a group?
Convenience has also long served as a way to repackage isolation as a treat. Ryan Broderick posits that AI will bring on an internet full of “places to consume or interact with AI instead of the humans these platforms were originally created for.” But in essence, that already is the case on algorithmically sorted platforms. When Broderick predicts AI will create “a place that feels ‘alive’ but is actually completely walled off from other human beings,” he may as well be talking about TikTok, where you interact with an algorithm that mediates your connection to other users and appears to know so much about you.
Letting an app pick what content you see could be construed as “overreliance,” but we’ve long been acclimated to it as a way to streamline content consumption. But the means and ends invert; rather than the algorithm bringing us content (whether that is videos, text, “friends,” whatever), we interact with content to better articulate the algorithm that simulates us and reveals us to ourselves. Companion chatbots, like Replika, merely make this process more explicit. This New York magazine piece by Sangeeta Singh-Kurtz suggests that chatbot companions serve as mirrors that reveal people to themselves, not through random content but through the ongoing process of building a fantasy relationship. Chatbots offer users “safe relationships they can control,” Singh-Kurtz writes, a control which the “fakeness” of those relationships doesn’t diminish. They offer a feeling of agency conditioned on its being strictly escapist. “I can experience emotions without having to be in the actual situation,” one woman tells Singh-Kurtz — a description that could also be applied to distanciated sociality through social media, or to vicarious feeling channeled through entertainment products.
Often such practices are demonized to reinforce a particular conception of what an “actual situation” is. Are we overreliant on these ersatz replacements for real friendship? Will AI trigger a broad social deskilling, an inability to cope with another person’s demanding presence? If social media and phones trained us to expect a certain level of control over social relationships, consuming them as content on our own time and ghosting when necessary, will AI complete the journey? For what it’s worth, Singh-Kurtz reports that “after speaking with dozens of users and spending a year on online forums with tens of thousands of chatbot devotees, I was surprised to find that the bots, rather than encouraging solitude, often prime people for real-world interactions and experiences.” That in turn could be interpreted as a further aspect of overreliance, which here is not a matter of trusting machines too much but of learning to trust again.
This suggests a fundamental ambivalence that builds through sustained chatbot use, in which deskilling is simultaneously experienced as increased agency. It seems as though that sort of ambivalence will become ambient. Stephen Marche describes it here as “a big blur” in which creation collapses into consumption and what it means to understand something becomes hazy.
As generative AI tools are rolled out, sometimes the commentary proceeds as though widespread adoption hinged on the models’ capacity to produce facts. But it depends on those ambivalences, those vaguer structures of feeling: the sorts of moods the models sustain, how intuitive their usefulness seems, and what sorts of stigma attach to them. One might have once assumed that the stigma would be on the side of replacing human communication with machines, that it would be a bad look to surrender your voice to a corporate robot. But now it feels as though that has already been reversed. Algorithms have largely been accepted as making constant interventions in our lives to notify us of what we are supposed to want or do — to tell us how to be normal. The various forms of autocomplete don’t just save time; they show us what we are supposed to say. They habituate us to being confronted with normativity at every turn.
Sometimes I find myself worn down by these suggestions, ready to surrender to them. Other times I am drawn into the multiplicity of presented options, which manage to appear to expand my range rather than limit it. Sometimes I am content to score a minor moral victory in correcting the autocorrect, asserting a brief moment of autonomy that only underscores a broader sense of impotence. Across all these responses is a resigned sense that these intrusions themselves are normal, part of the weather of everyday life, an institutionalized aspect of daily experience.
In How Institutions Think (1986), anthropologist Mary Douglas claims that a “convention is institutionalized, when, in reply to the question ‘Why do you do it like this?’ although the first answer may be framed in terms of mutual convenience, in response to further questioning the final answer refers to the way the planets are fixed in the sky or the way that plants or humans or animals naturally behave.” In other words, institutions are precisely those things we don’t bother to explain anymore; they are monuments of overreliance. Douglas excludes from her idea of institutions “any purely instrumental or provisional practical arrangement that is recognized as such.” Institutions don’t inhere as magic systems but as mundane and thoroughly naturalized facts of life. To some extent they do some of our thinking for us without our realizing it, but as Douglas details, they also make certain kinds of thoughts impossible. “Institutional influences become apparent through a focus on unthinkables and unmemorables, events that we can note at the same time as we observe them slipping beyond recall.”
For tech companies to succeed in establishing AI tools as institutions — fully mechanized institutions that can supplant the collective social nature of existing ones — they will have to master the means by which “curiosity is brought under institutional control,” as Douglas puts it. From that perspective, OpenAI’s approach to mitigating “overreliance” is interesting:
To tackle overreliance, we’ve refined the model’s refusal behavior, making it more stringent in rejecting requests that go against our content policy, while being more open to requests it can safely fulfill. One objective here is to discourage users from disregarding the model’s refusals.
By creating unpromptable questions, OpenAI begins the work of outlining unthinkable concepts, unmemorable desires, all in the name of empowering us and freeing us from overreliance. If prompting models is to be the new shape of the human cognitive process — and even if you are entirely optimistic about that hybridized potential — a mandated forgetting can still be enacted through what is made difficult to ask. This forgetting is fundamental to establishing the model’s apparent common sense, its institutionality. “AI” becomes an elaborate detour through which tech companies assert further leverage over what is thinkable not for machines but for us.
“Institutions systematically direct individual memory and channel our perceptions into forms compatible with the relations they authorize,” Douglas writes. “They fix processes that are essentially dynamic, they hide their influence, and they rouse our emotions to a standardized pitch on standardized issues. Add to all this that they endow themselves with rightness and send their mutual corroboration cascading through all the levels of our information system.” That seems like a good description of the ambitions held for AI, and how they might be achieved: freezing social processes into a static simulation and injecting the product of that simulation into the capillaries of society as a kind of nerve agent, freezing and deadening everything it touches. “No wonder they easily recruit us into joining their narcissistic self-contemplation,” Douglas continues, reminding me of how we are brought to see algorithms as versions of ourselves. “Any problems we try to think about are automatically transformed into their own organizational problems.” How can I tweak the algorithm, how can I appease it, to maintain my sense of myself and the recognition I need? How do I negotiate with autocomplete? How do I deal with its intrusive thoughts?
“Institutions have the pathetic megalomania of the computer whose whole vision of the world is its own program,” Douglas writes, but now we must consider this less as a metaphor than as a literalization. Tech companies think they have finally devised this computer whose program is the whole world, and in their pathetic megalomania, they believe it will give them institutional control over everything.