The integrity systems
Earlier this week, Instagram announced a setting that lets users adjust how much "sensitive" content it recommends to them in its "Explore" tab. You can now, after navigating the usual complement of confusing menus put in place to discourage you, select "limit," "limit even more," or "allow," which actually means "limit less," because some of the company's moderation decisions are non-negotiable. The principles governing those decisions are laid out in its "Recommendation Guidelines," which is a somewhat weird name for a policy the company enforces on itself.
The company doesn't meaningfully detail how it decides which content is to be marked as "sensitive," explaining only that it may involve firearms, nearly nude people, or illicit substances and/or vaping. So it is not clear how users could use this option to make a meaningful decision about what gets recommended to them. You don't get to provide your own definition of sensitive; you just get to choose from among three levels, with no context. It's as though a restaurant asked how spicy you want a dish whose ingredients it won't disclose.
Instagram's "how spicy do you want it" option seems more like a way to distract critics and placate users with a "mechanical placebo" they can fiddle with while the sorting algorithm continues to heedlessly recommend whatever is most likely to get them to look at Instagram more. When the algorithm eventually does show you something offensive, the company can now blame you for not digging through the menus and tweaking this setting. You said you like spicy — here's a habanero in your eye.
In a thread about this announcement, Tarleton Gillespie (who wrote a useful book about content moderation called Custodians of the Internet) describes this as an example of a "reduction" policy, something platforms use to throttle the reach of edge-case content. This is sometimes called "content suppression" or "shadow banning," though that falsely implies there is some natural amount of reach for a post that the company is artificially thwarting.
There is no "organic reach" for any piece of content — something Elinor Carmi debunks here — there is only a calculation that platforms make for every piece of content for every individual user based on what's best for their business. It's not like TV of old, where a network broadcast the same show to anyone who tuned in, and ratings told the story of its "reach" to a collectivized audience after the fact. Instead, a platform's algorithms dictate reach through an ongoing evaluative process of whether and when to display a particular thing in a individual feed. And that aim of that process is to make you more receptive to ads, not to make you "happy" or healthy, or to mind your sensitivities.
The "explore" tab sounds like it is about curiosity, but it is about passivity: Click here and you will be shown things to distract you from mounting the effort to be interested in choosing for yourself. The "recommendations" that emerge from such systems will always have reactivity built into them, because they must do the thinking and feeling for the user, who is progressively desensitized to the experience of being told what to do.