Whose fault was social media?
This review by Tarpley Hitt of Taylor Lorenz’s Extremely Online helped me focus my thinking about what I was trying to articulate the other day about “the end of social media” and how the teleology of commercial platforms was always some form of influencer culture. I think that the emergent purpose of platforms was to reshape the “social” to be more commercial and ultimately to equate those things (if they weren’t already) so that sociality seems to naturally appear as nothing but reciprocal sales pitches. But this claim can seem like retconning the origin story of social media to fit the eventual outcome, particularly if one really wanted to believe that internet technology intrinsically empowered users or that “connection” was good in and of itself.
The implicit framework for a lot of my arguments tends to assume that media platforms impose their logic, incentives, and rationality on users; the point of Lorenz’s book (as Hitt lays out clearly in the review) is that entrepreneurial users forced the platforms to accommodate their aim of selling their lives as content, as opposed to “sharing” it with “friends.” As Hitt notes, “self-commodification sits at the center of Lorenz’s new book,” as it should in any history of social media, but I tend to take it for granted, on the basis of sheer naïveté perhaps, that this self-commodification is at some ultimate level imposed on users, despite what they may say or do. I have always tended to argue that social media makes people be brands, whereas Lorenz’s book argues that people are brands first and they forced social media to become more and more about them. Not only does Extremely Online attribute the desire to be a personal brand to the users themselves; it builds a pantheon of self-commodification’s pioneers, lauding them specifically because they forced platforms to be more useful to influencers and marketers and less amenable to noncommercial sociality. Let us praise the mommy bloggers for turning their children into content! All hail Team 10! They showed us the way forward.
In what I was writing the other day, I was drawing on a dichotomy (untenable as they all are) between social media users who want to address "friends" and those who want to address "audiences," to argue that social media’s innovation was to convince us that there is no cost in collapsing the two. I wanted to claim that “real” or “ordinary” sociality is talking to friends and the social media deviation is the self-serving packaging of your “friends” for advertisers. And I wanted to blame the platforms for imposing that deviation on people, or if not the platforms, then the dominant economic rationality at work, the neoliberal imperative to connect, be flexible, self-brand, exploit networks, optimize, entrepreneurialize all aspects of life, and so on. Social media platforms provided an infrastructure that changed how people thought of their own social life; they reformatted it for neoliberalism. (I peddled this idea back in 2012 here and probably dozens of other posts — my own immanent critique of self-expression hustling.)
But that all looks different from inside the process, wherein individuals assume responsibility for the aspirations and horizons that are more or less dictated to them by historical circumstances. From that perspective, the inchoate deployment of social platforms can be seen as a kind of incompetence that enterprising users corrected, rather than itself an expression of neoliberal flexibility. Here is Hitt’s summary of Lorenz’s point on this, drawing on the trajectory of proto-influencer Julia Allison.
Lorenz uses Allison’s story as a template to explain the subsequent rise of online influencing: how regular people gamified social media clout and converted it into income, often despite the hindrances of the platforms they used. Allison initially struggled to make money from her fame. The infrastructure of Tumblr didn’t make it easy for users to monetize their blogs; the platform did not disclose follower counts, and so advertisers could not quantify anyone’s particular “influence.” Many of the first wave of social media platforms had similar limitations: Facebook, which tried to re-create the connections of the real world, at first capped the number of “friends” any given user could have. Instagram, where anyone could subscribe to public accounts, allowed for a more porous kind of interaction, though one in which self-branding and ad partnerships were initially discouraged …
To make a career out of posting, they had to monetize their personalities and turn their online celebrity into something for sale as well. But the logistical task of converting persona into product was not always straightforward. Its omnipresence today is the result of years of internecine negotiations over sponsorship strategies, regular battles between social media platforms and influencers, and a byzantine network of advertising models that emerged to profit from the new media’s stars.
Part of the claim here is that social media users wanted to make content about something separate from themselves, but the structure of the platforms forced them to fuse their personality with their content (or if you prefer, it inspired them to innovate new ways to present content parasocially). “In Extremely Online,” Hitt argues, “the question of what these creators actually create seems far less important … than their ability to find brand partners.” Ultimately that is just what commodification is; that is what it means when exchange value subsumes use value.
I think this is what platforms were always for — turning life experience into media and audience commodities — and the sites’ underdeveloped designs were meant to extract more effort from users, not just in making specific content but in making ways to make that content profitable. The Lorenz view is that platforms wanted users to make content with no hope of profiting, but the users were too resourceful and forced platforms to move toward being about influencing. “Power users” wanted the metrics, the gamification, the algorithmic incentivization, and so on, and platforms grudgingly obliged.
At any rate, these oppositions are too reductive. Hitt’s account highlights a kind of dialectic at work that Lorenz lays out (a bit one-sidedly) between users and platforms as they react to each other and serially recalibrate their expectations and their sense of opportunities — what I labeled “complicity” the other day. Each reshapes the other; neither unilaterally dictates to the other, but both are constrained by the overarching imperatives of capitalism, making it somewhat beside the point to arbitrate who innovated what in social media. No commercial platform could manifest or pursue an anticapitalist or anticonsumerist agenda; no user could thrive on platforms by subverting capitalist values. (It remains ludicrous to believe that influencers somehow ruined commercial social media rather than realized it.) But perhaps neither can be solely blamed, as much as I would prefer to blame tech companies and their strategists and not content creators “pursuing their dreams.”
I find it deeply depressing to think that people aspire to become advertisements, but it also makes perfect sense in a world that privileges ads and ad space over virtually everything else and values them precisely. If you can be an ad, then the whole world revolves around you. Your place in the world is assured and affirmed anytime you look at a screen or take a glance around in public space. But the question then is what has made advertising so central to everyday life such that people adopt its imperatives as their own? It doesn’t really clarify much to blame capitalism or consumerism; it just renames the condition. But it is also inadequate to naturalize the desire to succeed on our broken world’s terms, as if everyone has always been awaiting the opportunity to fully monetize their existence, seeking that supreme apotheosis of alienation where work and life finally become one and nothing else is thinkable. I don’t think it is our fault that capitalism sucks.
Datafication of everything
And here is an addendum to the other part of my last post, about algorithmic culture. After Chris Gilliard rightfully mocked this Wired piece about a tech CEO who believes that “life would be better if algorithms logged every spoken word so life events past can be lived and explored again” — a prime example of how the hegemony of algorithmic culture is built — Ali Alkhatib replied with a link to a paper of his (and an accompanying video) that puts the datafication of everything into better context.
Alkhatib draws on James Scott’s Seeing Like a State to make the point that, as he says in the video,
machine learning systems construct computational models of the world and then impose them on us, not just navigating the world with a shoddy map, but actively transforming the world with it. They're shaping the world by deciding whether an applicant gets a loan, or a job, or into a good school; who we get matched up with in dating apps; whether the content we put online immediately gets flagged for review or demonetized; whether we get released on bail or held on remand until our court date. And all their behaviors in turn shape ours, so that we appear more legibly to this incredibly limited system. These systems become more actively dangerous when they go from "making sense of the world" to "making the world make sense"; when we take all of this data and tell a machine learning system to produce a model that rationalizes all that data.
Adding more data won’t necessarily help correct the shoddy map once the map has become an engine. Instead, more of life will be exposed to the model’s distortions and simplifications, the brunt of which are borne by those without the power to dispute it.
Sometimes in accounts of how “AI” works, it’s implied that larger data sets make a model “smarter,” as though it somehow unlocks a higher mode of cognition. But the mode of processing doesn’t change; it just traces the same positive associations with more and more thoroughness. What is being proposed by these models is that there is no such thing as higher cognition, that there is only pattern matching, only associations of greater or lesser complexity, and there is no point in trying to understand why the patterns are what they are. They are arbitrary, brute facts.
When the Wired piece describes the fantasy of “capturing everything you hear into a master dataset where you can search and re-experience every conversation you’ve ever had,” the point of this would be to liberate the original experiences from your human framework of understanding them and regard them instead as meaningless patterns. By the same token, you can transform your life experience into data and render it equally meaningless, woven pointlessly into the entropy of the universe.
From the CEO’s point of view, this is great. His company, Wired reports, is
working on a feature called Avatar that would enable him to run a meeting without the bother of actually attending. It’s essentially a chatbot built around years of past data on his contributions in meetings. “I’m often double-booked, so for those meetings I can send my avatar, which can answer probably 90 percent of the questions people ask me,” he says.
Why not carry that forward even further and absent yourself from the entirety of your life? It is all just pointless patterns traced in the substrate anyway. Why would you want to pretend to exercise any agency under those conditions? You should aspire instead to run your life without the bother of actually living it.
thank you for writing this! I have just two thoughts I'd like to share -- first, when I read you saying "Why not carry that forward even further and absent yourself from the entirety of your life?" I immediately wanted to share one of my shorts with you: https://amyletter.substack.com/p/gretchagain
It's part of my "Electronic Girls" series of quick-hit AI pieces, all from the point of view of the AIs.
The other thing was just to share an observation -- I teach at a university and one of the courses I used to teach was "Creative Writing for New Media." The title sounds downright quaint these days, but when I invented the course in 2011, there were a lot of great things going on online in the general realm of "creating new things using these new media tools but generally in an off-label non-commercial way." One of my most keen-sighted students in one of those classes observed that the internet as we know it is the manifestation of government-funded research and commercial hegemony; by making art online for free by "mis-using" these commercial tools, we are implicitly making a political, counter-cultural, and counter-capitalist statement.
The last time I taught the course was 2019 and by then it already seemed passé. The last iteration of the course felt, to me, like a bi-weekly apology for the collapse of whatever counter-cultural and counter-capitalist movement might once have existed. All the avenues for creation were gone. All that were left were avenues for "participation." Yes, I could still have my students code projects in Processing or design a non-linear CYOA / text game using nothing but HTML, but we all knew, by then, that no one would see it if it weren't on YouTube or Facebook or Insta or Twitter, and that playing by the rules of those social marketplaces left very little room for real creativity.
The course is defunct -- I replaced it with a seminar in Writing in a Networked World -- a course that by design focuses more on those political, social, military, and commercial networks and the ways in which we find ourselves enmeshed. If you're interested, here is more about that: https://amyletter.substack.com/p/writing-in-a-networked-world
On the datafication of everything and in particular the Avatar startup described, after the events of this weekend, I can’t help wondering how it would feel if the company that had collected every experience of your entire life was suddenly under entirely new management.