Beyond fake
Over the past few months, Samantha Cole at Motherboard has reported on "deepfakes," video clips in which celebrities' faces are pasted over someone else's with the aid of machine-learning algorithms. This relatively accessible technology means that anybody can stick anyone's face on a porn performer's and create ersatz sex tapes that can be used to harass or embarrass people. The implications, though, extend beyond that to the bespoke creation of ostensibly credible visual evidence. "The combination of powerful, open-source neural network research, our rapidly eroding ability to discern truth from fake news, and the way we spread news through social media has set us up for serious consequences," Cole warns. Namely, malicious people will make clips that seem realistic enough to be believable, and these will spread rapidly through social media, where there is a hunger to see what we want to believe. Or ingenious marketers will create fake people to create real desire and sell real products, as with Lil Miquela, a computer-generated Instagram influencer (see above; described here and here).
A recent New York Times op-ed by Henry Farrell and Rick Perlstein takes up the concern with media manipulation, pointing not only to deepfakes-style face swapping but also software that can manipulate audio and video to make anyone look as though they've said things they haven't. As a result, recordings may no longer be seen as having "documented" reality at all: "fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court," Farrell and Perlstein suggest.
Plausible fakes mean that actual documents can be plausibly disputed. As Julian Sanchez noted in this Twitter thread, "even events caught on videotape become deniable. Remember Trump bizarrely trying to pretend he didn’t say what he clearly said on that Access Hollywood tape? Suddenly even that kind of brazen lie becomes believable to those with a will to believe." This suggests that it will be harder to "reveal" truths about people by capturing them "off-stage"; maybe it will also mean we will pay a different sort of attention to what people openly do and say on stage. (It's not like Trump's attitudes needed "revealing," and it's not like that recording changed many people's view of Trump or the media.)
Farrell and Perlstein, like Cole, presume that in the click-driven media environment, fake clips will be cynically distributed with no regard for their veracity: First, algorithms in social media will amplify their distribution; then mainstream media will feel obliged to report on the controversy, adding a veneer of legitimacy. And then democracy dies in darkness.
But is any of this so different from the way news clips and soundbites have always worked? These are not outright fabrications, but they distort what happened and mislead audiences in the name of sensationalism or ideological indoctrination. Howard Dean's "scream" offers a good example: a moment stripped of context used to reinforce a particular interpretation of a candidate's character.
All video clips are obviously edited to hold viewers' interest; this necessarily involves condensation, distortion. Even the rawest of footage is chosen over other footage, other points of view, other subjects of potential interest. The "news" is a manufactured product that reflects the economic and political interests of the people making it — just like these deepfakes would be. The deepfakes would just be more explicit, and possibly easier to discredit on the basis of the wishes they seem to fulfill. Things that seem too good to be true become harder to believe when evidence is easily fabricated. We may drift of our own accord to a posture of greater skepticism, requiring more than just a single document to credit something that seems a little too convenient — if we are interested in facts and not just entertainment. People consume information for different reasons; they may not necessarily want the truth but to feel in the know or to feel that they knew it all along. They may not want "media literacy" if that means surrendering their beliefs.
"Democracy assumes that its citizens share the same reality," Farrell and Perlstein write. What they are implicitly suggesting in the op-ed is that we could once generally trust mainstream media to create that "same reality" for all citizens. If that "reality" wasn't particularly reflective of lots of people's lived experience, that was the price that was to be paid for "democracy." Of course, that "shared reality" was only ever shared by a small group of the country's elites, and it was imposed on the rest of us, usually as a form of entertainment rather than a civics lesson. If the citizens of the future also want to watch entertaining fake clips of politicians seeming to say the outrageous things that people believe they stand for, will they really be any less informed?
"Shared reality" probably has more to do with who we talk to and what sort of encounters we have on an ordinary day than with media literacy. "Reality" is taken as we find it; it is not something we feel usually obliged to create for ourselves through any kind of systematic protocol of information gathering. It is affective, a matter of what feels true — what seems necessary to make it through the day and to understand what other people are talking about. Nothing can extinguish that sort of reality; it generates itself out of our perceptions and social affinities.
Farrell and Perlstein are pointing to something different, a theoretical construct of "public opinion" that derives from mass media's ability to induce conformity. When broadcasting capacity was limited, it made sense to organize publics into the biggest possible blocks. There were only a few channels, and broadcasters wanted as many people as possible to watch them, lest they not watch at all. This dictated a centrist tack that sought to include everyone within the "reality" they posited. That is the reality that we're now seeing nostalgia for, as Gavin Mueller argues here, as if those institutional gatekeepers guaranteed a more objective public sphere.
But that center doesn't hold and never did. Once broadcasting resources ceased to be scarce, the imposed centrist "consensus" fractured along many different axes. It became more expedient to address individuals rather than audiences — individuals are more vulnerable to consumerism the more they are isolated and made to feel that identity and social acceptance are contingent on what they buy. Now it is commonplace for small media producers to make content for micro-niches to reinforce those audiences' sense of distinction. Cable channels target increasingly small slivers of audience, and social media (including YouTube) are largely premised on the "audience of one" idea. There is enough content for everyone to have an entirely unique media diet.
In this media environment, no one needs to define themselves against the blah "shared reality." You don't have to take up an oppositional position at all; you don't even need to know what mainstream culture or the "consensus" view is. Instead, one can experiment with all sorts of "distinctive" content, playing with identity, trying different ideas on, indulging different fantasies, different ways of being distracted from oneself.
Similarly, social groups are organized not so much around a common position on the shared reality as around a shared set of reference points. It is much easier to find an idiosyncratic group of people online who are invested in a certain set of articles and memes on any given day and keep yourself preoccupied with participating in that local reality. That sense of belonging seems to align a lot more with the daily experience of "shared reality" than seeking out "objective" information to be well-informed. The social groups we participate in create their own localized shared reality, and it doesn't have that much to do with accurate information; it has a lot more to do with creating feelings that perpetuate the groups, that define their boundaries.
"Fake clips," to the extent they are distributed through these smaller networks, also serve this function: They reveal truths about the groups that embrace them or ridicule them or condemn them or whatever. They disclose actual facts about the people who make them, watch them, share them; about what they want, what they need to hear, what sustains their bonds with others. As Baudrillard might say, the fake clips are more real than real.
Rather than treat media products as documents, we should probably default to treating them as constructions. Or in other words, we should treat them as documents of the deliberate acts of manipulation someone performed to generate a "reality" they thought people wanted to see. They are evidence of intentionality, not happenstance.