A few weeks ago, The Verge reported on a startup making a device called the Rabbit R1, which is supposed to be “an AI-powered gadget that can use your apps for you.” I’m not especially interested in the device itself or whether it can actually do what the company promises. Apparently it “can control your music, order you a car, buy your groceries, send your messages, and more, all through a single interface” with the help of a “large action model” supposedly patterned after large language models, though that makes no conceptual sense and just seems like an effort to cash in on the buzz around LLMs. (Also, why do people find “controlling their music” so taxing?)
I am instead wondering about the nature of the sales pitch, the idea that people want a “gadget that can use your apps for you,” as though we have all been condemned to downloading all sorts of apps we have no interest in using ourselves. Most apps are premised on being either fun to use or capable of making your life easier in some way. If they turn out to be neither, it doesn’t seem as though purchasing a device to run your device will help end the ensuing infinite regress, as you will inevitably need a device to run the device that runs your device, and on and on. You always need a new convenience to make your existing conveniences even more convenient. To get off this hedonic treadmill of laziness would require that we interrogate why we need to use apps we don’t want to engage with and why we mistake them for conveniences in the first place. Or, if they aren’t really beneficial to us, why are we compelled to use them anyway? Why and for whose benefit is life being set up so that nuisance apps serve as unavoidable mediators between what we want to do and being able to do it? (David A. Banks has one answer to that question here.)
In the promotional video for the Rabbit, the company’s CEO Jesse Lyu explains that the company’s mission is to “create the simplest computer, something so intuitive that you don’t need to learn how to use it.” That sounds convenient enough — a complementary inversion of the “I know kung fu” fantasy from The Matrix — but actually knowing how to use things and why they work as they do is a good way of making sure you know what you are using them for, and to direct your usage toward purposes you actively choose. It’s not necessarily advantageous to not know how to do anything, or to be the sorcerer’s apprentice who has no idea how the magic works and ultimately finds themselves at its mercy. If you don’t learn how to use something, it’s often because it has become the means of using you.
Nathan Jurgenson, who sent me the link, suggests that the video could be viewed as an example of “modern tech people saying what their ideal reality would be,” one which promises “life as a passenger.” Who cares where we’re going? The car drives itself! The problem, according to this startup, is that the interfaces we use to interact with devices (and hence access our potential to do things in the world, since it is assumed that all activity must transpire through screens) are tedious; its solution is that we shouldn’t be permitted to use interfaces in the first place. The idea of “interfacing” itself needs to be abolished in favor of the device’s direct instrumental control over us, by our direct absorption into the machine.
Lyu’s pitch stresses the word intuitive, which here plays a similar role as convenient or frictionless or authentic: If you don’t have to consciously think about something, if you are just driven by unreflective impulses, then you are letting the foundational level of your being express itself. Intuition is spontaneous and fun; consciousness is a burden. I want my music controlled for me.
There’s a placid confidence in Lyu’s presentation in the idea that of course everyone despises learning and that we all would like to possess a “delightful intuitive Companion” (the company’s branded name for its bot) that learns how to engage with the world for us and stands in our place as our agent so we can somehow enjoy the fruits of any activity without actually doing it. That learning could be an end in itself, that knowing how to do things could be a source of pleasure, that resisting impulsivity could be constructive — these ideas are sidelined in favor of a vision in which accessing the end product of thought as an external thing is more desirable than having to think for oneself. Why live life when you can consume it instead, under your delightful intuitive companion’s direction? A perfect life is one devoted entirely to saving time, such that everything that will ever happen to you has already happened in an instant; you undergo no experiences and thus will never die.
The ultra-competent AI assistant appeals to the basically masochistic fantasy that surrendering control is the crux of pleasure, that the secret apex of agency is choosing to give it all away to something that is commanded to command you, that to be an object is to become a pure subject. It’s as though we are presumed to be jealous of our devices: We don’t want to use the phone; we want to be the phone. (After all, it always seems to have everyone else’s attention.)
The Rabbit seems to promise that it will take over our engagement with apps so that we can simply enjoy what they do. But lurking behind that is what Slavoj Žižek described as “interpassivity,” a concept he derives from Lacan. He uses the example of a VCR, which allows us to consume shows by taping them rather than spending the time to watch them ourselves: As Žižek writes in the 2006 book How to Read Lacan (and who knows how many other places), “it is the object itself that enjoys the show instead of me, relieving me of the duty to enjoy myself.” Enjoyment itself comes to be structured as a trap to be evaded, another encumbering interface with the sensual world. AI will promise to make enjoyment itself seem superfluous.
Much of what Žižek writes about interpassivity echoes what Baudrillard discusses in terms of the “silence of the masses,” in which the refusal to participate in media circuits constitutes a threat to the established order. Interpassivity serves as a substitute for a genuinely threatening passivity or withdrawal; it keeps people sewn into the media system while still affording them the apparent freedom of not choosing, and letting them seem to enjoy things by not enjoying them, by just sitting there. Remember, you still have to tape the shows; you can’t be allowed to imagine that you could simply ignore them altogether.
As Baudrillard puts it in "The Virtual Illusion: Or the Automatic Writing of the World" (1995), “the telespectator has to be transferred not in front of the screen where he is staying anyway, passively escaping his responsibility as citizen, but into the screen, on the other side of the screen.” This need not require engaged interactivity, because the spectator can just be flipped into media space like a Duchamp ready-made. The spectator can be seduced into “real time,” which is the process of having one’s life mediatized as it is happening.
Live your life in real time (live and die directly on the screen). Think in real time (your thinking is immediately transferred on to the printer). Make your revolution in real time (not in the street, but in the broadcasting studio). Live your love and passion in real time (by videotaping each other as it runs).
Baudrillard even presciently discusses immersive art shows:
Some new museums, in a sort of Disneyland processing, try to bring people not in front of the painting, which is not interactive enough, and even suspect as pure spectacular consumption, but into the painting. By insinuating them audio-visually into the virtual reality of the Dejeuner sur le Herbe, people will enjoy it in real time, feeling and tasting the whole Impressionist context, and eventually interacting with the picture.
And again, this interactivity is interpassivity — an engagement through disengagement and surrender, through just needing to be present. Immersiveness, Baudrillard claims, serves to “break [the masses’] resistance and destroy their immunities” to what mediatic forms of control are designed to accomplish. The Rabbit device can be construed as a similar kind of immersiveness, which consists of simply being and not doing, in which one is fully enveloped in the embrace of the delightful intuitive companion. The Rabbit promises us passivity to goad us into staying in the interlocking networks of apps and devices.
In participating through machines — in letting machines do the work of participating in social practices and exchanges for us — we use them, Baudrillard argues, “for delusion, for eluding communication (‘Leave a message ...’), for absolving us of the face-to-face relation and the social responsibility.” This kind of technology promises sociality without the social, without reciprocity — “convenience” as the absence of other people. This suggests that “real time” is generated to foster a universal asynchrony, in which no one can be present in the same moment with anyone else. Every experience of time is individualized, even when such experiences are made to overlap.
For Baudrillard, the “hegemonic trend to Virtuality” — i.e. “the unconditional realization of the world, the transformation of all our acts, of all historical events, of all material substance and energy into pure information” — expresses a deeper desire for “building an automatic world from which we can retire and remove definitively.” He proposes (strangely) that “we all dream of perfect autonomous beings who, far from acting against our will ..., would meet our desire to escape our own will, and realize the world as a self-fulfilling prophecy.” He argues that “all forms of High Technology illustrate the fact that behind his doubles and his prostheses, his biological clones and his virtual images, the human being is secretly fomenting his disappearance.” He goes on to claim that machines “help us to get free from our own will and from our own production.” Then man can “move on an artificial orbit, where he can revolve eternally.”
I don’t know that “we” all dream of that, but this is certainly the underlying fantasy that the Rabbit would like to sell us, and it is the implicit promise of AI models that purport to limn the trillions of parameters of “latent space.” As Baudrillard asserts, “The highest definition of the information corresponds to the lowest definition of the event.” Or, to put that in more promotional terms, in terms that tech companies would perhaps better appreciate, “heaven is a place where nothing ever happens.” Maybe there is a licensing opportunity there.
As usual, wow. Great collection of critiques here. I instinctively opposed the Rabbit as a fad, but you’ve gone far beyond my internal feelings and put them into an eloquent set of arguments.
I highly recommend Stanislaw Lem’s novel Fiasco. In part, it involves a civilization that has tried to exist solely in virtual reality.