One thing I mastered in failing to get a Ph.D. was an ability to research things for their own sake. That is, I never learned how to properly research anything at all; I just mutated procrastination into a taste for curiosity in itself and would search not for answers to any specific problems but for further questions. One book would lead to fifteen others, and so on, and I never got anywhere close to organizing any of my “findings” or even developing a dissertation topic. I just wanted to be lost in the library, and I’ve been a dilettante ever since.
I was reminded of those days by this short post by computer science professor Ben Recht, which contrasts learning how to do research with learning how to prompt an LLM. His point is that to become an expert in a field, one must know how to search through a discipline’s literature:
Only part of math research is being able to find the right answer. Another essential part is learning to see patterns so you don’t have to look up the answer as you piece together an argument. There’s a complex interaction between this pattern recognition and understanding how to engage with external literature. And, you know, there’s also the ability to know when something is correct. When I translated solutions from library books, I had to work through the logic of the proofs and know when pieces didn’t fit.
If you don’t know how to navigate a discipline’s canon — if you can’t map it, situate different resources ideologically, recognize disputes and contested points, recapitulate the logic of different arguments from different points of view — then you probably don’t know what you are talking about, regardless of how much information you can regurgitate. LLMs can give you information but not the reasons why it was produced or why it has been organized in certain ways. And they certainly can’t identify what’s missing.
David Murakami Wood makes similar points here, noting that “research and writing are thinking” — they are not more or less efficient tools for finding and conveying reified “thoughts” that are pointlessly hidden in books but the necessary subjective aspect that transmutes inert data into thought, something that is being accessed and developed for a reason.
To put it in the terms Ted Chiang uses in this recent New Yorker piece on “why AI isn’t going to make art,” research and writing demand an ongoing and layered series of choices “at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception.” These choices are not made through calculations of probability; they are more often made in defiance of what is merely likely.
What to consult, how to interpret, how to cope with polyphony and contradictions, how to combine sources, how to sequence words and thoughts, how to cut and omit, etc. etc. — anything worth engaging with conveys a sense of these deliberate subjective considerations and demands more of them, an infinite series of choices in how to respond. “Any writing that deserves your attention as a reader is the result of effort expended by the person who wrote it,” Chiang writes. “Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it.”
To counter that, tech companies promote what Eryk Salvaggio describes here as “the productivity myth,” the idea that saving time and effort for its own sake is better than doing any particular activity for its own sake.
The productivity myth suggests that anything we spend time on is up for automation — that any time we spend can and should be freed up for the sake of having even more time for other activities or pursuits — which can also be automated. The importance and value of thinking about our work and why we do it is waved away as a distraction. The goal of writing, this myth suggests, is filling a page rather than the process of thought that a completed page represents.
Better than researching is knowing how to save the time necessary for researching — not so that you can do more research but so that you can go on to deploy other time-saving techniques. Better than reading is reading a summary of what can be read, and then a summary of that summary, and so on until you’ve disappeared completely into the mise en abyme you’ve taught yourself to pursue, a perfect form of inertia.
I wouldn’t call this a myth but an ideology that stems directly from capitalism’s demand for abstract alienated labor: workers compelled to do things they don’t care about, orchestrated in such a way that they reap as little of the profit from it as possible. Ideally the work becomes completely deskilled, so workers acquire no sense of mastery but are instead controlled by the work process, subject to it rather than subject of it.
Generative models support the idea that the “completed page” is a commodity, whose value is in what someone else pays for it and not in the subjective experience of whoever produced it or consumed it. Tech companies insert models into tools and processes not because they are necessary or even efficient but because they reinforce this sort of ideology, of pursuing efficiency instead of purpose, of efficiency as the only purpose to pursue, the one intention that invalidates all the others. Thus they program future generations of worker-users: Abstract labor time is the only thing that means anything; generative AI helps reduce everything to it — “completed pages” that are not beholden to subjective intention.
As Chiang insists, the point of art (or education, or thinking, or living) is to be confronted with intentionality — with irrefutable proof of subjectivity, the fact that something “derives from … unique life experience and arrives at a particular moment in the life of whoever is seeing [a] work” — and to draw from it the energy to enhance your own, to sustain the will to will. You decide to be present for something, and try to make the effort to come to terms with the presence of another subjectivity that is more than just a projection of your own. That presence of mind is itself thinking, the basic unit of intentionality. Tech companies seem adamant in insisting they can make money by extinguishing it.
Recht argues that researching to find the accepted solutions to certain problems “is more like learning guitar than it is taking a standardized aptitude test.” If you want to be a musician, you want to know how to play music, not merely what music sounds like. Chiang offers a similar metaphor: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.”
Generative AI, Recht argues, “always seems to provide the minimal effort path to a passing but shitty solution,” which actually seems like a fairly charitable assessment. But it is obviously something worker-users would employ when they don’t care about what they are asking for or how it is presented: a tool for optimized producers who see research as an obstacle to understanding rather than its essence, for people conditioned to be absent at any presumed moment of communion. Generative AI is the quintessence of incuriosity, perfect for those who hate the idea of having to be interested in anything.
So much of my work when I was an academic librarian was impressing upon students the importance of taking time to think about their topic, ask questions about it, use those questions to find sources, use the sources to ask more questions, and so on. I wasn't an expert in any discipline (save that of organizing and working with information resources themselves), but what I tried to do in workshops was help students see that just grabbing the first ten things that matched their keywords was unlikely to do much to build their understanding (or impress their professors). When a student found something really solid and germane to what they were exploring and got excited about it, it was so great.
"Once men gave their thinking over to machines in the hopes that this would set them free. But this only allowed other men with machines to enslave them."
Cross that Dune quote with the entirety of E.M. Forster's short story "The Machine Stops", and that is our future.