“Self-sabotage takes many forms,” Eliane Glaser writes in this essay for Aeon. “Self-sabotage is about deferring our stated goals and — when we are given a shot — blowing it, or subtly hindering our chances. The puzzle is why so many of us perpetually find ourselves getting in our own way and disrupting our best-laid plans.” But what if self-sabotage is the plan? Can you sabotage your desire for self-sabotage? What if one’s “stated goals” are at some level a cover story, obscuring the sources and heterogeneity of our aims? What if “being in our own way” becomes the only way to be, the only way to grasp our own agency? I, for one, feel most like myself in the midst of self-sabotage, and this newsletter in many respects serves as an ongoing testimony to that.
The question of self-sabotage raises the question of self-knowledge — how do we know what we want, whether we can ever integrate different desires that operate over different timeframes, or balance self-esteem against self-critique — and the function that knowledge should serve, if any: Is it necessary to “get the life we want,” to borrow the title from this recent London Review of Books essay by Adam Phillips; or to fit in better socially; or to meet some standard of performance and do one’s duty; or simply to make life livable? Is self-sabotage simply another term for self-knowledge?
Glaser’s essay points out how self-sabotage, particularly when (dubiously) redescribed as addiction or procrastination, has been medicalized, and how neuroscience research seeks to explain it without having to contend with the conundrums of self-consciousness. This is one way to evade self-knowledge: to reduce it to a more thorough quantitative knowledge of the body’s mechanisms. The author of a book called Dopamine Nation is cited for the idea that people are “addicted to instant gratification,” which simply restates the problem — we want different things on different time scales — in biologistic terms that make self-reflection less pertinent and that make the specific desires that make up our conscious lives somewhat arbitrary and epiphenomenal. The degree to which these desires and inhibitions are sustained socially and are affected by social status is sidelined, as if life opportunities are doled out equally at the outset and only individuals’ flawed limbic systems upset this equitable distribution, such that some are more successful than others. All that matters from that point of view is hormonal regulation, which can be addressed without changing anything about the social order.
Glaser later turns to the idea that new technologies, which both promise greater efficiency and tempt us with endless distraction, are responsible for a supposed epidemic of self-sabotage: “If self-sabotage exists on a spectrum, the contemporary world — with its alluring screens and overwork culture — has made it far more prevalent.” This is posited as a kind of unfortunate irony: The tools that would make us so much more productive also “hack” our “reward system” and make us more prone to unproductive escapism. But it also frames the self as the ultimate problem that prevents technology from delivering on its economic promise. Self-sabotage figures as a kind of labor resistance, a type of work refusal, a stubborn resistance to exploitation, even if it’s self-exploitation. Once you eliminate the moment of subjectivity in which self-sabotage can be articulated, then “overwork culture” can flourish unabated.
Technology companies, as they become more ambitious and invasive, necessarily promote the idea that self-knowledge (or subjectivity itself) is an obstacle, and that we are better served if we let them provide both our goals and the means for achieving them. Insofar as we consciously think we can know what we want, we are wrong; insofar as we surrender to having our behavior collected and processed so that what we “really want” can be revealed to us, we can be put on the right track. If we direct our own curiosities and organize our experiences according to our own whims, we are bound to mire ourselves in confusion; if we let surveillant systems anticipate and negate our curiosity, we can achieve the satisfactions of perfect passivity.
Invoking Freud’s idea of a “death drive,” psychoanalyst Anouchka Grose tells Glaser: “We’re all after a kind of homeostasis and excitement has to be managed very carefully … not doing things is actually quite comfortable, except that it tips to the point where not doing things becomes morbid and deathly.” When tech companies cater to personal “convenience,” they are nurturing the death drive, which is perhaps even easier to harness as an economically productive force than the “pleasure principle” that otherwise appears to motivate our non-self-sabotaging self.
Phillips’s essay “on getting the life you want” is less about self-sabotage than about Richard Rorty’s pragmatist philosophy, which Phillips compares and contrasts with psychoanalysis. In his usual meandering way, he proceeds by way of a series of paradoxes, parentheticals, and reversals and tends to deepen the fog in which his chosen topic was already enshrouded. But the animating question is whether “getting what you want” is even a coherent pursuit, let alone a pragmatic one.
Psychoanalysis, though rich with ideas about how life should be lived, famously refrains from promising people that it is even possible to get the life you want. Instead it hopes to, as Freud put it, turn “hysterical misery” into “ordinary unhappiness.” By way of a quote from Žižek, Phillips notes Lacan’s even more astringent idea, that the purpose of psychoanalysis “is not the patient’s well-being, successful social life or personal fulfillment, but to bring the patient to confront the elementary co-ordinates and deadlocks of his or her desire.” Less depressingly, Phillips points out that psychoanalysis seeks to enable patients to “surprise themselves” (Winnicott) or free-associate as an end rather than a means (Ferenczi).
These last two point to not knowing what you want as being what you want — you should want to be unpredictable to yourself. Algorithmic recommendation deliberately aims to make this harder; it takes on the capacity to surprise you, so that “being surprised” is something to consume rather than something you can do to yourself. The ubiquity of algorithmic recommendation and prediction systems may also make it easier to recognize one’s own variability: When confronted with what past behavior suggests we should want, we may just as often balk and disavow it as let it frictionlessly proceed. In that situation, one is surprised only by one’s own refusal to comply — negation rather than discovery becomes the principle of self-realization.
Similarly, generative tools purport to be shortcuts that provide what we want without our having to work to achieve it. They simulate what “free association” might be like, effortlessly riffing on the prompts they are fed, and again allow users to consume that idea of freedom without enacting it or experiencing it. But generative models of course are precisely the opposite of free association; they are statistically determinate. They try to eliminate the possibility of living freedom, of the capacity to be unpredictable to yourself or anything else. No associations are “free”; they are systematically generated and assigned numerical weights, given a calculable location.
If psychoanalysis is right that the good life has anything to do with unpredictability, then predictive technology serves to make the good life even more inaccessible. If a good life is a kind of striving for unquantifiable experiences of oneself, then being caught up in systems of intensive datafication makes that possibility more remote. Generative prediction and algorithmic recommendation become automated forms of self-sabotage that cheat you out of even its selfishness.
In Phillips’s account, Rorty’s anti-essentialism suggests that we can adopt whatever descriptions and interpretations of the world we need to further our purposes. He quotes Rorty’s view that “Even if a non-human authority tells you something, the only way to figure out whether what you have been told is true is to see whether it gets you the sort of life you want.” One might interpret that “nonhuman authority” as encompassing “AI” tools as well — that we can learn to take or leave them insofar as they suit our purposes. But that wouldn’t take into account how those tools attack our ability to form independent purposes and evaluate them. The tools can’t be picked up and put down without changing us; they instead habituate users to passivity, to delegating the determination of what they really want to a system that promises expedient clarification.
Phillips sets up a fairly straightforward contrast between Rorty’s pragmatism and psychoanalysis:
Pragmatism wants us to ask, what is the life we want — or think we want? Whereas psychoanalysis wants us to ask, why do we not want to know what we want? (According to Michel Serres, the only modern question is: what is it you don’t want to know about yourself?) Psychoanalysis wants us to ask — against the grain of traditional philosophy — why do we obscure the good that we seek? Pragmatism takes for granted that the good we seek is what we want and asks us how we are going to go about getting it. Indeed, pragmatism tells us that we are good at knowing what we want and good at letting our wants change.
At first I thought this could map directly onto “techno-optimism” and “techno-pessimism” — optimists assume we know what we want and can build tools and systems to expedite that pursuit; the pessimists recognize that the same tools can further obscure and complicate our already murky and conflicted desires. Such a view would confirm tech engineers’ understanding of themselves as cheerful, neutral pragmatists who “just want to get the job done” in some space beyond political considerations. But if Phillips is right, they wouldn’t be Rortyian pragmatists, as the tech they are currently building assumes we are not good at knowing what we want and that our wants shouldn’t change at our own behest. It doesn’t aim at helping us “become an autonomous self,” as Rorty advocated, and defeat the death drive; it is the death drive. It is instead built on the principle that rather than being neurotic and paradoxically free, we should be entirely programmable. To protect us from any tendencies to self-sabotage, technology should work to refuse us a self altogether.
Enjoyed this piece. In some ways the algorithmic bubble is even more insidious than stated here. Said bubble seems to say, in ever more slick and sophisticated ways, “we know you better than you do” or “let us do your thinking for you.” Many tech people I know have a near religious, hypnotic fervor about all this. In this way we find yet another way to embrace the unexamined or quietly reflective life.
I love this piece--all the angles of self-sabotage. "Sabot" is French for a worker's wooden shoe. "Sabotage" is from throwing a shoe into the machinery, literally gumming up the works. If sabotage led to workers' rights, what does self-sabotage lead to? A stoic manifesto? A monk's discipline? Where's the Zen acceptance of flawed humanity and try, try again?