In 2013, Facebook data scientist Adam Kramer and intern Sauvik Das published “Self-Censorship on Facebook,” a paper which was to spark some controversy. Not only did the authors repeatedly make the confounding assertion that “the act of preventing oneself from speaking” — i.e., starting to write a Facebook post but then not posting it — was “censorship,” a kind of self-imposed masochistic tyranny rather than privacy, reticence, or good sense; but their methodology also revealed that Facebook retained the data that users input but never posted. Anything typed into a box on Facebook was fair game for the company to do whatever it wanted with, regardless of whether the user ultimately intended to share it with others. Thus while one’s privacy settings might apply to other users, they didn’t apply to the platform itself, which tracked and stored whatever user behavior it could, wherever it could. Nothing a user did could get anything removed from that database, and their intuitions about what it included were likely to fall short of the reality.
In Das and Kramer’s paper, a user’s subjective choices about what to post are targeted as irrational intrusions in the general flow of data. They describe self-censorship as potentially “hurtful” because the platform “loses value from the lack of content generation.” The user’s capacity to decide what to share or withhold for purposes of self-expression was no more than a bottleneck to be routed around. Everything a user does is useful feedback for the system as a whole.
The paper exemplified how platforms show no mercy in their surveillance and how unwilling they are to respect users’ choices about what they intend to reveal. But it also revealed how that lack of respect extended to treating users as experimental research subjects without their consent, seeking to manipulate their behavior against their own sense of their best interests. As Jennifer Golbeck wrote at the time in Slate, Facebook studies “self-censorship” “because the more its engineers understand about self-censorship, the more precisely they can fine-tune their system to minimize self-censorship’s prevalence. This goal — designing Facebook to decrease self-censorship — is explicit in the paper.” In other words, while pretending to have handed the microphone over to users to use as they saw fit, Facebook instead collected data to try to override their autonomy, on the site if not in general. The point wasn’t to give people tools of expression but to put them in environments where continual involuntary disclosure is naturalized, and “expression” (as the choice over what and in what manner one wants to communicate) becomes impossible. If we use platforms long enough, we come to accept that discretion is superfluous.
Facebook, the “self-censorship” paper made clear, was best understood not as a platform for broadcasting self-presentation but as a mechanism for harvesting a user’s behavior. Later, in the “emotional manipulation” study from 2014, users’ emotions were treated as clay to be molded at the whim of Facebook’s researchers. In each case, the user is regarded not as a living being that exercises freedom through thought and expression, but as an output-generating machine that can be programmed by the platform to operate at the desired rate.
The idea of “self-censorship” posits that thinking isn’t a cognitive process of developing more complex ideas out of simpler ones. Rather it treats the contents of the mind as a one-way stream of consciousness, a data flow in which every thought counts the same and nothing can be meaningfully revised or negated. Never mind the possibility that new ideas may be formed through the friction inherent in attempts to express them. Between thought and expression there lies a lifetime — but why? The inner monologue should be treated not as provisional but as an output log of your pure being, as long as you don’t become self-conscious about it, try to reflect on it, or “censor” it.
Changing your mind, clarifying or refining your thoughts, having second thoughts — these are illusions, no more than invalid, suppressive forms of communication, inefficiencies that can be corrected through more invasive interfaces that capture more of the data flow. Ideally one would be compelled to wear some sort of brainwave monitor that would broadcast neural data directly to corporate servers somewhere, as suggested by Mark Zuckerberg’s enthusiasm for brain-reading machines. This would confirm the full equivalence of thought and expression. “Self-presentation” was always just an alibi; there was nothing to present, only data to collect. The self’s choices about presentation were just more data points and not ones that needed to be privileged.
“Self-expression” is implicitly redefined. To express ourselves, we simply have to expose ourselves to the maximum amount of scrutiny, since data about ourselves speaks more truly than our choices about what to say. To express ourselves is more a matter of being objectified, of having our “natural,” unreflective responses tracked as carefully as possible, providing sufficient data for an external analysis to determine what sort of object we are and what properties we express.
This has become more explicit now that most platforms run on algorithms that purport to predict a user’s interests and desires. It’s taken as a kind of common sense that platforms know people better than they know themselves, and that users’ posting or conscious intent is not even relevant to that knowledge. (“Sharing with friends and family” is also marginal.) Consuming content is how one uncovers oneself, far more efficiently than bothering with various forms of self-expression.
“Self-censorship” isn’t much of a problem for platforms anymore, in part because platforms don’t care if you say anything. They are tracking and “engaging” you in other ways. But the privacy implications of “self-censorship” remain in effect. Applications that are ostensibly under your control and meant to serve your aims are monitoring you as a lab rat. The blank spaces and empty pages in word processors and email clients often seem like invitations to produce our own thoughts, but they aren’t there to allow for our expression; they are provided so we can be experimented on, with “AI” serving as the testing probe.
In an op-ed for the Los Angeles Times, Jane Rosenzweig condemns “AI-assisted writing” for its increasingly intrusive autocorrections and autosuggestions: “Just as it’s hard to imagine life before spell check today,” Rosenzweig writes, “we may soon forget what it was like to open a blank document and start typing without an AI ‘assistant’ completing — or initiating — our thoughts.” This is not the result of a tech company’s overeagerness to be helpful. It may appear as though an AI “assistant” is meant to assist you, but you are in fact always already involuntarily enlisted into helping it, serving as its appendage — giving it data about how to complete other people’s sentences or generate them from scratch. “While AI assistants might be able to help us with our own thinking,” Rosenzweig writes, “it’s likely that in many cases they’ll end up replacing that thinking.” The assistant is not a means to your end so much as a way of making you a means to the company’s end: to restrict your thinking to the formulas it provides while assimilating any novel modes of thinking you might persist in exhibiting.
As with Facebook’s data collection on material before it was posted, AI-assistance intervenes before you’ve finished thinking your way through something and claims it as significant data. The documents aren’t ever really blank; they are always wired for microsecond-by-microsecond surveillance of your interactions with them. The blank page serves as a training module: Your inputs are training AI models, your reactions to the model’s suggestions offer further training data, and you yourself are being trained in how to be an efficient annotator, learning to reconceive whatever work you are doing as a yes-no reaction to AI-supplied options — as solving bespoke captchas.
AI-assistance assists us mainly in becoming AI custodians, joining the army of workers Josh Dzieza describes in a recent New York piece on the “AI factory” — the human labor that has been organized worldwide, mainly through gig-work sites, to train models, adjust their output, and provide feedback for refining them.
There are people classifying the emotional content of TikTok videos, new variants of email spam, and the precise sexual provocativeness of online ads. Others are looking at credit-card transactions and figuring out what sort of purchase they relate to or checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt. Humans are correcting customer-service chatbots, listening to Alexa requests, and categorizing the emotions of people on video calls. They are labeling food so that smart refrigerators don’t get confused by new packaging, checking automated security cameras before sounding alarms, and identifying corn for baffled autonomous tractors.
The division of labor is such that these workers usually have no idea why they are being asked to do what they do; instead they perform cognitive tasks that have nonetheless been deskilled (defying the received distinction between manual and intellectual labor). The thinking they do for their work — intentionally isolated decisions rendered with no larger sense of their purpose or orientation, as if such purposes should be seen as beside the point — remains atomized and personally inconsequential; they can’t profit by it themselves or see a bigger picture.
While, as Dzieza notes, “the act of simplifying reality for a machine results in a great deal of complexity for the human,” those humans can derive no mastery from it, except that of making the decontextualized decisions more efficiently. They just learn how to emulate a machine in making classifications with no regard to context, producing an abstract commodity, “human feedback data.” Cognition and comprehension are reduced to annotation. No point in integrating smaller concepts into larger ones or developing ideas, no point in seeing thought as a recursive, reflective process — it is just responding to stimuli, moving ever forward, clearing the queues.
Ultimately, this kind of processing should reach the point where it requires no conscious deliberation or interiority — then it can be done by machines, as is the aim. “The job of the annotator often involves putting human understanding aside and following instructions very, very literally — to think, as one annotator said, like a robot,” Dzieza reports. “It’s a strange mental space to inhabit.” That is because it is not supposed to be inhabited in any reflective sense. The work of annotation is to create a world in which consciousness is unnecessary. It should be as superfluous as premeditated self-expression is to a behavioristic account of the self.
When email or word-processing apps impose AI-assistance, our work is deskilled in the opposite direction. In these cases, we supply a context that gets machinically decontextualized, broken down into abstract components and possibilities, average conditions. We are limited to providing feedback on the machine’s decisions with regard to it, assessing how viable its abstraction of our supplied context is and how useful it would be in analogous situations. In the process we surrender the idea that meaningful thinking requires bringing context together with a means for intervening in it, that these bear a reciprocal relation that becomes more complex as our insight into and understanding of that relation increases. With AI-assistance, our capacity for thinking (a.k.a. for “self-censorship”) is interrupted and steered away from developing knowledge of the context and issues at hand, and is directed instead toward trial-and-error engagements with machine-generated options.
The result is thought without any personal connection to it — purpose without purposiveness. Our relation to our own intention is deskilled, much as algorithmic feeds deskill our relation to our desires. We don’t need to understand how to go after what we want or understand what, if anything, is singular or improbable about those intentions; we can consume our goals and desires as end products constructed and presented to us. Our consumption choices, our intentions, our life processes, to the extent they are tracked and captured, are just forms of self-annotation — the opposite of self-censorship — taming and ironing out our complexity to have ourselves rendered machine-readable, which becomes the master goal, the key to all mythologies in an AI-assisted world. Meanwhile, our day-to-day work will increasingly take the form of arbitrary annotation, an endless series of captchas to unlock our access to even more captchas. The microphones will be pointed directly at the amplifiers, and there will be endless feedback.