Datafication is always political
In The Nation, Ben Tarnoff reviewed How Data Happened: A History From the Age of Reason to the Age of Algorithms, a book about the history of datafication by Chris Wiggins and Matthew L. Jones. I thought this was a useful definition of what data is and does:
When we talk about data, we tend to be talking about optimization as well. Optimization is what data promises. By representing reality with numbers and then using statistical techniques to analyze them, we can make reality more efficient and thus profitable. Relationships are revealed that have an explanatory power: for example, which people are most likely to click on a particular ad, or which financial asset is most likely to increase in value. This capability is sometimes discussed as “artificial intelligence,” but more prosaically it’s inference, albeit on an industrial scale.
Most references to “data” take it as self-evident that data is an accurate measurement of some empirical phenomenon, but what is chosen for measurement and who is entrusted with devising and applying the measures are not in any way naturally given. Data is an abstraction from the world, undertaken to facilitate further statistical abstractions, inventing norms and reducing masses of particulars in the world to manageable approximations, so that the few can more persuasively govern the many. Data is a legitimation procedure grounded in a seemingly natural faith in the honesty of math.
Understanding the world in terms of data is an ideological matter. Data is, as Tarnoff highlights here, a selective representation of the world that has embedded within its very terms the ideal of optimization — it represents the world as something to be optimized, something that is always optimizable along basic quantitative lines. This view is obviously complicit with capitalism’s existential commitment to growth as the only meaningful value, but it is also central to organizing and reproducing the social categories, stigmas, and power asymmetries that capitalism requires.
As Tarnoff notes, “Datafication is as much a political project as a technical one, the authors remind us. Its current dystopian form is in large part the legacy of concrete political failures.” To put that more clearly, every technical project is political.
Datafication has always been a means of social control; computer technology has made it far more efficacious. Datafication has become synonymous with digitization and “information processing,” but first and foremost it is a kind of surveillance designed to impose classifications and norms on the surveilled while devaluing whatever ways they understand themselves.
The end of appetites
I felt like I didn’t really have a clear point to make about Ozempic in my post from Friday; I just found it interesting to speculate about a consumer product that would supposedly make somebody resistant to consumerism. When so much of our lives is about having our desires manipulated by advertising and our identities realized by conspicuous consumption, would we even want to be set free from it? What kind of life would we have if, for medical reasons, we had to retrain ourselves in how to want things in the face of the billions of dollars spent on messaging and infrastructure to keep us wanting in the same ways that had proved deeply unhealthy? How would that messaging and infrastructure be repurposed to maintain its profitability? Would it suddenly become “rational” when it could no longer exploit our appetites and compulsions, or would it just work harder to irrationalize the kinds of desires an anti-appetite drug doesn’t touch?
Anyway, that is a lot of hypothetical questions, and I’m not all that sure that the rise of Ozempic will ultimately pose them, let alone suggest how they will be answered. In this post, Josh Barro reiterates his earlier claim that “10 years from now, it’ll be obvious GLP-1 drugs were a way bigger deal than AI” and argues that “more than half the population ought to be on these drugs.” That would mean more than half of the people in our society have more or less been broken by our society and need to have their minds and metabolisms medically corrected on a one-by-one individual basis. This kind of proposition makes clear how something like Ozempic lets us conceive of widespread social problems in terms of individual solutions and responsibilities — a perpetuation of the root cause under the guise of superficial treatment, if you assume as I do that these drugs don’t “fix” intrinsically broken human appetites but rather allow people to better survive within a human-built environment that systematically warps human appetite.
Barro writes as though the obesity epidemic that Ozempic addresses has come from nowhere, as though it were some inexplicable curse that the drug can blessedly lift, along with magically making people more rational: “We may have stumbled upon a drug class that broadly improves people’s judgment and decision-making. Isn’t that amazing?” he asks. (This is not far from the hype about AI and how it will empower people to make more informed decisions and so on — a point Barro himself makes.) This will all be great for business, Barro suggests, because those workers whose rationality has been corrected by drugs will become more productive, and they will be trained to desire new things that don’t have the side effects of suppressing productivity. “You’ll get shifts away from specific products and services whose consumption is inhibited by the drugs (food, alcohol, gambling, certain medical treatments) which will then lead to a rise in demand for other products and services as consumers find they have more money available to spend on them.” Don’t tell Bataille, but we may have stumbled upon a drug that eliminates the accursed share.
In my post I assumed that since a powerful status-affecting drug will almost certainly never be distributed universally, its existence would be used to make sure more of the negative consequences of consumerism continue to fall on marginalized populations. Barro here suggests an alternative scenario where appetite suppression is made compulsory — a kind of reverse of Aldous Huxley’s “soma” from Brave New World. Instead of addicting people to euphoria to make them controllable, it would eliminate euphoria so that consumer desire can be a matter of fully rational administration.
Real images
I’ve been watching a lot of baseball playoff games, which means I’ve been seeing a lot of ads, including one for Google’s new phone that includes “AI” photo-editing capabilities. In the ad, a kid takes a picture of himself with his friends and then digitally edits smiles onto their faces. They all nod along with this, as if they didn’t really care what they were made to communicate in someone else’s photo, since they would make their friends into mannequins in their own photos too.
I am not really the target market for that kind of technology. Most of the pictures on my camera roll are screenshots of text; the last actual photos are of some price tags at Ikea, and one of where I parked the car in the lot there. I’m generally not trying to make meaningful images of my life to supplement my memories, and I am not typically using photos to communicate with anybody, including myself. But I certainly enjoy editing photos a lot more than just looking at them. When I look bad in a picture, I feel bad about it; but when I make myself look bad in a picture, I feel a sense of control.
In a Wired piece, Jason Parham examines the fear that easy-to-use phone-based photo editing comes at the expense of “authenticity.” Since the editing suite is “making it easier to tailor reality however you see fit,” he writes, some may conclude we have given over to AI “the substance of our lived realities.” But photos were never that substance: they have their own integrity that is best assessed rhetorically, in terms of what they are supposed to achieve, and not with respect to some fictitious fidelity to “lived realities.” In fact, one might argue that you take photos not to capture reality but to refine exactly what it is that can’t be captured — what actually is “live” and not available to mediation and mechanical reproduction. In other words, photos create an “aura” of reality precisely for something that is understood to exceed the photo, something the image points to but can’t contain.
As is often the case when “authenticity” is invoked only to be placed under threat, I find these claims unconvincing and these fears overstated: This kind of argument conflates “reality” with representation and documentation, much like the ideological use of the word data does. When you edit a photo, you edit a photo — not reality. It’s a category error to think photos can’t be “real”; they are real by virtue of being made. Or to put that another way, all images are “counterfeit” by the standard of lived experience. They are all selective re-edits.
Parham talks to photography professor Tom Ashe, who points out that “putting these tools into our phones does further democratize the ability for people to manufacture the image they want, instead of settling for what they were shown in the original exposure” — echoing an idea from Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction” and John Berger’s Ways of Seeing — and that more familiarity with how easy it is to edit photos will generate more “healthy skepticism to our idea of the photograph as a document of objective truth.” He also talks to another professor, Derrick Conrad Murray, who notes that “self-representation and social media enabled many women of color to challenge culture industries that prop up beauty standards that have traditionally ignored and demeaned them.” Recognizing that images are not documentary but rhetorical opens space for counternarratives.
The problem with “AI editing” would not be that it “falsifies reality” but that it is likely to default to stereotypes and import biases into images in its effort to automatically “correct” them to some statistical norm. It could serve as an avenue for having one’s images written over with what is supposed to be more desirable, so that the AI-edited images appear as an ideologically corrected account of one’s own experiences rather than a means for individuals to make more persuasive images that say what they want them to say. AI capabilities (like algorithmic feeds) tempt us with passivity; they offer to pre-edit material into something we are expected to find entertaining or meaningful — we get to be passively entertained while we have it confirmed for ourselves that we are going along with someone else’s idea of the proper flow. We aren’t settling for the “original exposure,” but we’re not taking an active stance toward interpreting the world either. With all sorts of AI tools, it will be increasingly important to figure out where “editing” stops and where autocorrection begins.