ZIRP, human blinders, robot jobs
The fixed cost of training the models has been well covered, with a GPT-scale AI costing billions to train, but even getting results out of a trained model is expensive, between the electricity required to run it and the risk of congestion in the data centers.
As a result, a single ChatGPT prompt has been estimated to cost around a hundred times that of a web search, and that was before OpenAI rolled out GPT-4, a substantially bigger model that is correspondingly more expensive to run.
That’s why so much of the cutting edge of this field is subscription-based.
How many people would subscribe for chatbot access? Businesses might subscribe for it without necessarily wanting to, as it will be rolled into the software licenses for productivity suites (e.g. Microsoft Office, Adobe Creative Suite) that they already pay for. But for individual consumers, it still seems more like a gimmick than a routinely useful tool.
Hern argues that advertising and venture capital funding have subsidized media and communication technology for so long that consumers have not really been asked to pay what they think that technology is worth to them. From that point of view, people joined social media platforms largely because they didn’t have to think about whether they were cost-effective. People got to “be the product” themselves, which is invasively objectifying (all the surveillance) and flattering (the system cares enough to attach a price to you) at the same time. No matter how “addictive” social media is supposed to be, it doesn’t seem like many people want to pay for it directly. No one wants to subscribe to Twitter, and everyone seems to laugh at those who do.
Likewise, chatbots haven’t reached anything close to a “real subsumption” stage in which people’s everyday lives have become (forcibly) reoriented to presuppose them. A certain level of learned helplessness would have to be reached for people to feel obliged to pay a subscription fee to support their sense of cognitive dependency. It seems far more likely that chatbots will also pursue a “you are the product” approach that flatters users and extracts value from their engagement. That interaction can be made directly into “audience labor” by showing people ads within chatbot responses, or it can be turned into data to further train the models and more precisely target ads. In other words, AI won’t be a departure from the business models of “surveillance capitalism”; it will be an intensification of them.
Bruce Schneier and Barath Raghavan write about how LLMs will enable a “tidal wave” of scams:
There will also be a change in the sophistication of these attacks. This is due not only to AI advances, but to the business model of the internet—surveillance capitalism—which produces troves of data about all of us, available for purchase from data brokers ... Combine the digital dossiers that data brokers have on all of us with LLMs, and you have a tool tailor-made for personalized scams.
But there is no reason to think that only “scammers” will put these pieces together; all the existing clients of data brokers will certainly explore these avenues as well.
An industrial designer saw fit to re-create a horse’s blinders for humans and rebrand them as “visual-noise-cancelling eyeglasses.” It seems like this would be the opposite of the apparatus that Malcolm McDowell is strapped into in A Clockwork Orange, but it’s just an implementation of the same idea, that you need to be forcibly directed to focus on the information you are supposed to ingest with your eyeballs.
Aaron Benanav, who wrote Automation and the Future of Work, here critiques the recent OpenAI-sponsored paper that claimed that “between 47 and 56% of all tasks” could be automated using the company’s software. (That paper also includes the chart of “automation-proof” work above, which basically just lists jobs that require physical labor.) Benanav compares it to the infamous 2013 paper by Carl Benedikt Frey and Michael Osborne, which the OpenAI researchers draw from, that also claimed that 47% of jobs (quelle coïncidence!) would be automated away, a prediction that has not come close to being realized. Benanav writes:
Researchers at the OECD reran Frey and Osborne’s numbers in 2016 based on a more realistic account of the variety of forms that jobs take and found that less than 10 per cent of jobs in the US were likely to be automated, and even that figure has so far turned out to be an overestimate.
Instead of automating jobs away, Benanav argues, technological developments like ChatGPT change how jobs are performed in ways that are inflected by local labor conditions in different countries and industries. The piece ends with a bit of a shrug about how that might play out — “Only time will tell whether ChatGPT will finally solve the problem of information overload – or will make it worse, by increasing the speed with which information and disinformation proliferate” — and doesn’t really go into the arguments developed in his book, which claims that widely publicized fears of automation are a way of distracting everyone from capitalism’s stalled growth engine. There he argues:
Put on the reality-vision glasses of John Carpenter’s They Live, which allowed the protagonist of that film to see the truth in advertising, and it is easy to see a world not of shiny new automated factories and ping-pong-playing consumer robots, but of crumbling infrastructures, deindustrialized cities, harried nurses, and underpaid salespeople, as well as a massive stock of financialized capital with dwindling places to invest itself.
By the same logic, ChatGPT discourse would blind us to the fact that, as Benanav claims, the capitalist economy generates persistent underemployment. Rather than mount the social struggle necessary to secure better labor conditions (or even a “post-work” future), society may instead focus its worries impotently on a technology promoted as inevitable. (As an underemployed person who writes incessantly about “AI,” I feel implicated here.)
In Smart Machines and Service Work (2020), another book that challenges the recent mainstream automation discourse, Jason Smith brings up what economists have taken to calling the “productivity paradox”: the “curious fact” that “the proliferation of computing technology and digital networks” occurs “alongside increasingly sluggish productivity growth rates.” Like Benanav, Smith links this low productivity to weak capital investment and chronic underemployment, dismissing the idea that technology is eliminating jobs en masse. “The types of labor processes many automation theorists suggest are vulnerable to replacement by smart machines in fact require an intuitive, embodied, and socially mediated form of knowledge or skill that even the most advanced machine-learning programs cannot master,” he argues.
Naturally, generative AI enthusiasts would be quick to insist that things have changed on that front. They would argue that LLMs have internalized the requisite social mediations and that chatbots can now carry out various roles even in the service industry, where most newly created jobs are now to be found. But it seems as likely that chatbots will drive down the price of human judgment and human service without being particularly effective at replacing it. More of what once appeared as skilled labor (education, law, etc.) may be transformed into something that appears more like service work. One thing that a machine can never replace is the satisfaction that some people experience in dominating and commanding other people, regardless of how productive they are by conventional measures.