A six-paragraph blog post
As someone who likes to read, I have always been sensitive to how often reading is pathologized. It’s not regarded as an activity in its own right but as avoidance of activity. One couldn’t possibly enjoy it for its own sake — something that its ardent defenders seem to inadvertently confirm — so it must be a pose or strategy, a deflection. It is construed as anti-social and self-involved, a gesture of refusal. Look at him with his book — it’s so sad! On such a nice day too.
When it’s not seen as an outright obstacle, reading is treated as an inefficient means to some end that normal people would prefer to pursue in a more direct, less effortful way. Some publications announce at the beginning of articles how long you are supposed to take to read them, as if to challenge you to try to beat their best score. When I read on a Kindle, it calculates my “speed,” as if I were an overclocked microprocessor in danger of overheating. As I write this, the Substack CMS displays a hypothetical “reading time” along with the word count, setting up a dismal contrast with my writing time, as if to remind me that for all the hours I might put in, I should expect readers to spend a few minutes on it at best.
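These badges involve no deep measurement of anything; they are generally just a word count divided by a stock words-per-minute constant. A minimal sketch of the usual arithmetic, assuming the commonly cited average of 238 words per minute for adult silent reading (my assumption, not any platform’s published figure):

```python
# A sketch of the usual "reading time" arithmetic. WORDS_PER_MINUTE is an
# assumed average adult silent-reading rate, not Substack's actual constant.
WORDS_PER_MINUTE = 238

def reading_time_minutes(text: str) -> int:
    """Turn a bare word count into the familiar 'X min read' badge."""
    word_count = len(text.split())
    return max(1, round(word_count / WORDS_PER_MINUTE))

print(reading_time_minutes("word " * 1200))  # -> 5
```

Whatever the constant, the number only measures the intake of words; it says nothing about how long the thinking takes.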
It’s not as though anyone thinks reading and writing time should be on par, or that the time required to put thoughts into words should be matched by a proportional amount of time extracting thoughts from them. It’s more that time is an irrelevant metric for the whole process: Thought doesn’t have clock temporality; a certain number of seconds isn’t a proxy for a certain number of ideas. But if not through time, how else are we to quantify thought? And if we can’t quantify it, how will we value it? How will we even know if thinking is profitable?
In this piece about what he calls “quantitative aesthetics,” Ben Davis references the “McNamara fallacy”: thinking that only what can be measured can be considered important. (He also references crypto criminal Sam Bankman-Fried’s declaration that “if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”) The fetish for measurement leads to an impulse to try to measure things you already find important to legitimize your interest, as though this reduction of quality to quantity won’t demystify interest but make it more tangible and available for some new method of maximization. But making numbers go up is an entirely different sort of investment in something than being aesthetically or intellectually interested in it. “A Quantitative Aesthetic often just becomes a way to deal with the problem of not wanting to spend much time thinking,” Davis writes. It’s the “opposite of deep thought.” If your problem is having to think deeply about the things you are interested in, quantification is a perfect approach.
If you don’t want to think about what you read, there are emerging tools for that. Among the uses for GPT-4 I saw being touted this week was something called ChatPDF, described in this thread as “an AI-powered app that will make reading journal articles easier and faster. Simply upload a PDF and start asking it questions.” Granted, many journal articles are not written to be read but to be cited, and often they are written in hermetic jargon to signal status to other members of a particular discipline. It’s easy to see why one might appreciate a translator capable of cutting through that and situating a paper in its context with less evasiveness and throat-clearing, if for some reason one were allergic to reading abstracts.
But as Robin James noted, the “‘have AI read an academic article back to you by asking it questions’ method has some really ... narrow and positivist assumptions about how academic reading and writing work.” It’s not as though journal articles simply contain some quantity of information that can be extracted in more or less expedient ways, as if the ideas were indifferent to the specific ways they have been expressed. You don’t have to close-read academic papers as if they were poetry, but the precision of the language and the nuances and emphases conveyed there, intentionally or not, are important to any piece of writing.
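Those assumptions are easy to make literal. Below is a toy sketch of the retrieval pattern such tools are generally assumed to follow (chunk the document, score each chunk against the question, hand the best match to a model as context). This is a guess at the general pattern, not ChatPDF’s actual internals; the bag-of-words “embedding” is a stand-in for whatever neural model a real tool would call:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy stand-in for a neural embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer_from_pdf(pdf_text: str, question: str, chunk_words: int = 200) -> str:
    """Return the excerpt a tool of this kind would feed to an LLM as context."""
    words = pdf_text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    # The model never sees the whole paper, only the best-scoring excerpt.
    return max(chunks, key=lambda chunk: cosine(embed(chunk), embed(question)))
```

On this model, the paper is just a bag of retrievable passages; whatever sits between the chunks, or between the lines, never reaches the model at all.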
More than that, the reading process itself is not simply a matter of the more or less rapid intake of words in pursuit of information extraction. It’s a process of thinking through — of combining your thought with what’s on the page or between the lines to initiate new chains of ideas. Texts require an effort of interpretation and reinterpretation, endless deliberation among their different possible meanings and contexts. Passages get underlined and notes taken in the margins; screenshots are taken and recontextualized; excerpts are cut and pasted into blank documents; and so on.
Boosters of AI-assisted reading seem to regard such modes of engagement as primitive efforts to “chat with the document,” as if you wanted it to speak with one generic voice and produce authoritative answers rather than generate your own ideas. The ChatPDF thread suggests that you skip reading altogether and engage with a text’s content in an entirely different way, as if reading were simply a flawed interface holding us back.
Game designer V Buckenham linked to Olia Lialina’s 2012 essay “Turing Complete User” (included here), which argued against the idea of “invisible computing” and supposedly seamless interfaces that in practice restricted what users could do. Under the pretense of liberating people presumed to be too busy to learn, “user experience” designers closed off the tinkering and experimentation by which people come to understand the software they depend on. It was that kind of experimentation that allowed users to resist how software scripted their behavior and to explore uncharted territory. As Buckenham puts it, “it’s this kind of folk or real understanding of how the system works that lets people do unexpected things with the system.”
Lialina quotes an old Adobe ad for Creative Suite in which a web designer is depicted as celebrating the software for automating aspects of her work: “I have more time to do what I like most — being creative.” ChatPDF makes a similar pitch: that chatting with a document frees you from the effort of actually reading it, leaving you more time to think your own thoughts, as if reading interfered with them. “So often the way AI is pitched is... ‘this will let you be creative without having to mess around dealing with the actual substance of the work’,” Buckenham writes. “And ... you can’t be creative without getting your hands messy. That’s where the ideas come from!” This is true of reading as well: Thinking your way through a piece of writing, without an interface overlay to expedite your engagement and turn it into an experience, is usually “where the ideas come from.”
In a follow-up essay, “Rich User Experience” (2014), Lialina argues that “experience design prevents thinking and valuing computers as computers, and interfaces as interfaces. It makes us helpless. We lose an ability to narrate ourselves, and — on a more pragmatic level — we are not able to use personal computers anymore.” It may seem strange to apply this to AI, and to describe the apparently imminent ubiquity of chatbots as making us “unable to use computers anymore.” But Lialina’s implicit emphasis is on the word use. Just as “chatting with a PDF” is not reading it, chatting with a computer is not using it.
Like the earlier interfaces and aggressive UX designs Lialina critiques, LLM-chat is implemented to channel users toward “experiences” while restricting their options. When a chatbot does its “as a language model I can’t …” spiel, it’s only the most obvious example of this.
In a recent essay for Post45, Christopher Grobe details the work of tech companies’ “conversation designers,” who have established standards for how various “virtual assistants” communicate with users. This is his assessment of the “house style” of Google Assistant:
The goal is to mold the system to the grooves in a median user’s brain. That brain, universalized by cognitive science and borne out by user testing, will have predictable needs, and thus the dialogue will have a predictable shape. This shape will be designed entirely for one purpose: to elicit actionable commands from the user.
Most of what Grobe documents in the essay precedes the latest wave of LLMs, focusing on human-scripted dialogue developed for automated systems to draw from. But generated text can achieve the same “shape” and elicit the same patterns of interaction with much greater “clarity and efficiency,” to use what Grobe describes as the “watchwords” of Google’s head of conversation design. The patterns steer users toward a specific and limited kind of responsiveness: to continue to give the machine orders and allow the machine to perform the cognitive work. The “grooves in a median user’s brain” don’t pre-exist such interactions but are etched by them, worn in through repeated encounters with automaticity, by having lots of AI-assisted “experience.” One learns to narrate oneself in a restricted and instrumental way.
Being creative when confronted with LLM interfaces, which we will encounter more and more frequently, will of course remain possible, but it’s not what they are for. It entails going against their grain, pitting oneself against their billions and billions of parameters oriented toward saving you the trouble.
This is a scripted prompt designed to encourage you to subscribe