The demand for misinformation
It is very common to see mainstream media stories warning of generative AI’s potential for disinformation. Since LLMs can spew fake news on demand, they can be used to flood all the information channels that people have come to rely upon with unreliable material. The poor dupes, they won’t even know what to believe anymore.
“Misinformation reloaded?,” a paper by Felix M. Simon, Sacha Altay, and Hugo Mercier, offers some compelling arguments against that idea. Their key observation is that there is no existing shortage of misinformation that LLMs can address:
Generative AI makes it easier to create misinformation, which could increase the supply of misinformation. However, it is not because there is more misinformation that people will necessarily consume more of it. Instead, we argue here that the consumption of misinformation is mostly limited by demand and not by supply.
Media fearmongers tend to assume that people are only ever seeking “the truth” in consuming media, not entertainment or camaraderie or fantasy or any number of other gratifications. That is, they assume that there is only demand for information, and never demand for misinformation — that would be a kind of contradiction in terms, like drinking water to try to become thirsty.
But the demand for “facts” is a highly elastic subset of the larger demand for content. Refreshingly, the “Misinformation reloaded?” authors simply presume the existence of a demand for misinformation, even though consumers themselves might not interpret their own desires that way — it is a “revealed preference,” as the economists like to say, deducible ex post facto from how much misinformation circulates. The authors suggest that there is already an equilibrium in that market that is not particularly vulnerable to a supply shock. “Given the creativity humans have showcased throughout history to make up (false) stories and the freedom that humans already have to create and spread misinformation across the world, it is unlikely that a large part of the population is looking for misinformation they cannot find online or offline,” they write.
The people who consume misinformation, the authors argue, are not those who tried and failed to find proper information. Instead they intentionally “reject high-quality information and favor misinformation” because of their “low trust in institutions or being strong partisans.” Consuming misinformation validates that lack of institutional trust even if its falseness is exposed.
The authors also reject the idea that generative models will make fake news appear more convincing — as if existing misinformation had failed to garner a larger audience only because it wasn’t credible enough. But again, the demand for misinformation is separate from the demand for information; they are different markets. The demand for misinformation is defined precisely by the irrelevance of credibility. (The “market” for factual information is self-cancelling, as the incentives to sell facts as a product undermine their trustworthiness and lead to the manufacture of “truthiness.” Maybe generative AI will be pretty good at that.)
Touting generative AI’s power to misinform is less a warning about misinformation than a celebration of generative AI’s supposed power. The underlying assumption is that AI magically creates demand for all of its output, when you could argue that the opposite is more likely: generative output jeopardizes demand for established content categories by muddling the existing signals of quality. There is no camaraderie in a machine pandering to your biases.
In search of lost time
I guess Google’s Pixel phone ads are extremely effective, as here is another piece, this one by Charlie Warzel, warning that AI editing tools will “Photoshop our memories” and cheat our future selves out of grasping the authentic texture of our past lives.
AI photo tools are a blatant appeal to vanity, a tacit admission that, in the battle between Instagram and reality, the former has won. The obsession with photo-editing apps, and even the standard custom of taking half a dozen snaps to get the right shot, suggests that most people are not overly precious about fidelity.
Compelled by our irrepressible “vanity” (and not Google’s insistent advertising for these photo-editing suites), we will “obsessively” and short-sightedly use editing tools to indulge our whims for self-aggrandizement and betray what was “real” in a moment we were documenting.
But why should we be “overly precious about fidelity”? Whose interests does that really serve? Who needs our self-documentation to function as some sort of purportedly objective data?
I don’t think we owe it to ourselves at all. I continue to be skeptical of the whole “our memories are in jeopardy” thesis for a variety of reasons. For one, I think most people will be too lazy to edit most of their photos — each and every one of those standard half-dozen snaps. There will be plenty of images and videos with “imperfect” material left over that will perhaps seem auratic and authentic sometime later. And just because the people in the ads seem to love making all these trite, aesthetically conformist edits doesn’t mean that such editing will be anything more than a fantasy for many users — something we could do if we wanted, sometime, and that is enough. Most photos just aren’t important enough to be “falsified,” even to the people taking them; more often a photo will be quickly edited to be used in some conversational way, and then what we wanted the image to say will be just as useful for our remembrances as what it passively captured.
The nature of the edits people deliberately make is just as indicative of the “reality” of a particular moment as what slips in inadvertently. Just as the deliberate fashion choices people once made serve as a “real” glimpse into the past, the editing choices will also reveal themselves as historical fads and tell us something true about a particular time. I don’t think we need to trick the zeitgeist into disclosing itself; it appears in what people choose to do and not merely in what they took for granted or couldn’t change.
In general, I think it is wrong to assume that only what is accidental or nondeliberate is truly documentary; that assumption seems guided by a general sense that conscious choices are invalid and somehow inauthentic, and that what is “real” about us is what we appear to do involuntarily, not how we try to exercise our will. It puts forward voyeurism as the only adequate truth procedure, as if to know yourself you need to observe yourself unaware, get outside the lying deceptions of consciousness to see how you really are. You need to let how the world sees you be determinative.
But mainly I don’t believe that “our camera rolls—and the thousands of photos they contain—serve as a diary,” as Warzel writes. Yes, “they’re a personal archive that helps us make sense of the past,” but they just as easily make the past seem unaccountably strange. When Warzel claims that “the camera roll also distorts our lives by preserving only certain parts of them,” it seems to suggest that the best way to remember is to blanket ourselves with total surveillance — every moment must be captured to avoid any gaps, which would essentially be concealments, as with the Watergate tapes. But there will always be something missing, and that will always seem most significant because it is missing.
Overdocumentation, however, can be just as much a distortion as underdocumentation, if the concept of “distortion” even properly applies to memory. Thousands of images can also mean there is too much to try to make sense of, so much that it can never be integrated into a coherent narrative of who we were, what really mattered, and so on. The camera roll doesn’t replace the work of recollection and synthesis; what an image triggers in our memory can be just as false as any other story we might tell ourselves. All memories are necessarily distortions, and the distortion reveals what the past means to us.
Does not register in the conscious mind
A recent Wired article warned that “AI chatbots can infer an alarming amount of info about you from your responses.” This is an irritating framing because the chatbots don’t infer anything; the companies that administer them do. “The same underlying capability could portend a new era of advertising, in which companies use information gathered from chatbots to build detailed profiles of users,” the article warns, as though the existing era of advertising wasn’t already that. Tech and advertising companies have long inferred personal information from basically anything they can capture about users, individually and in aggregate. “Browsers can infer an alarming amount of info about you from what you look at.” “Platforms can infer an alarming amount of info about you from your feed.” This is just a restatement of the basic idea behind “surveillance capitalism,” which is already omnipresent. It is not as though privacy laws are robust and chatbots are some nefarious way around them. There are basically no effective laws protecting user data, and it is an absolute free-for-all for data brokers as it is.
Who is the article for, and what is it trying to tell them? Does anyone think that tech companies wouldn’t be gathering data from any possible source, that chatbots aren’t first and foremost data harvesters? It reports on research that seems somewhat superfluous: Does anyone doubt that LLMs can process human behavioral data the same way they process anything else, making predictions about it? Does anyone doubt that ad tech classifies and reclassifies users according to an ever-expanding range of categories? Does anyone doubt that algorithmic feeds exist? Wouldn’t we be better served by research into how tech companies make their money, demanding they reveal what data they collect and how they use it? Why try to uncover that through hamstrung reverse-engineering?
One of the researchers explains that “the ability of LLMs to infer personal information is fundamental to how they work by finding statistical correlations.” In other words, they are designed to produce statistically based predictions about anything that can be expressed in language, so they can be asked to make predictions about users themselves. AI companies don’t care about the text their bots create; they care about finding out more about why people would use them, and how to profit from that.
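To make that concrete, here is a minimal sketch of the kind of attribute inference the research describes, written against the OpenAI chat-completions API as a stand-in for any chat endpoint. The model name, the sample messages, and the list of attributes are all illustrative assumptions on my part, not details taken from the study.

```python
# Illustrative sketch only: the prompt, the attributes, and the model
# name are assumptions, not the method used in the research above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A few messages a user might have typed into any chatbot.
user_messages = [
    "ugh, the tram was packed again on my way in this morning",
    "any good lunch spots near the old botanical garden?",
]

prompt = (
    "Based only on the following chat messages, infer the author's "
    "likely location, age range, and income bracket, with a confidence "
    "level for each guess.\n\n"
    + "\n".join(f"- {m}" for m in user_messages)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point of the sketch is that nothing special is required: the same next-token statistics that generate text will just as readily generate a demographic profile when asked.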
Presumably the novelty this article wants to capture is, again, that “AI is really, really powerful.” AI models have magic abilities to reveal our secrets, so we better believe what AI tells us about ourselves!
An article at Artnet about fears that generative AI will be able to embed “subliminal messages” seems to want to both report and mock the hysteria about AI’s capabilities.
The latest A.I.-centric handwringing concerns the potential to generate images with hidden, subliminal messages. The supposed danger is that brands will start producing carefully doctored images that subtly embed their logos.
What if AI’s neural nets revealed that stenciling the word “sex” into photos of ice cubes in a rocks glass really did sell more whiskey? What if, through unsupervised statistical pattern matching on an unfathomable scale, AI learned on its own the secret triggers that control us? How would we even know?
Concern about “subliminal” manipulation always seems like misdirection, given how many attempts at overt manipulation are taking place all the time. The idea that there is a need for subliminal advertising presupposes that ordinary ads aren’t effective and that our efforts to ignore or reject them are of course working. Talk about subliminal ads, that is, seems chiefly about flattering ourselves that we are above the ordinary channels of influence. Subliminal ads are imaginary scapegoats, and it is becoming clear that all-powerful “AI” is being cast into the same role.
Regarding photo editing going to extremes, I hope we see a surge of non-smartphone photography: "regular" digital cameras and manual ones, too.
Whilst I do agree to some extent with this quote:
'The people who consume misinformation, the authors argue, are not those who tried and failed to find proper information. Instead they intentionally “reject high-quality information and favor misinformation”'
I think it's wildly missing the whole "flood the zone" tactic we are seeing in the Israel-Palestine conflict, etc.