Mazes containing loops
At Real Life, we've run several essays about "Web3" over the past year, and during that time, we began to question whether it was even appropriate to use the term "Web3," or whether doing so lent support to its supposed inevitability no matter how critical the piece in which the term appeared. An ideological agenda is pretty much built into even mentioning it (even here and now in this newsletter). Is it enough to try to delegitimize it with scare quotes every time it comes up? Should it simply be replaced with "crypto" in all instances? Should the subject be avoided altogether?
Once an invented hype term crosses over into widespread use, refusing to use it can begin to seem like perverse pique. Such refusal can necessitate wordy, pedantic-seeming locutions in place of a trendy term's shorthand, phrasings that call more attention to themselves than to the critique they are attached to. This can make a publication seem more stubborn than principled, wasting readers' time with what appear to be periphrastic euphemisms. Ideally, media outlets in general would be skeptical of hype terms from the start, and such terms would only rarely get traction. But outlets are obviously incentivized in the other direction.
"Artificial intelligence" is one of these ideologically loaded shorthand terms that has long since become entrenched, a case where hype has become accepted through the concerted promotional effort of a variety of interested parties. Many critiques have been written about this; indeed it is an evergreen subject for the kinds of essays Real Life publishes. AI is really made of people. It's neither artificial nor intelligent. "Learning" is a bad metaphor for data sorting processes. And so on.
Emily Tucker, the executive director of Georgetown Law's Center on Privacy and Technology, recently announced an initiative to use more appropriate and accurate language than "artificial intelligence" to describe the activities of that particular sector of the tech industry. "That we are ignorant about, and deferential to, the technologies that increasingly comprise our whole social and political interface is not an accident," Tucker writes, suggesting that the threat stems not from the technologies themselves but from how they are being developed and implemented, discussed and popularized. "Whatever the merit of the scientific aspirations originally encompassed by the term 'artificial intelligence,' it's a phrase that now functions in the vernacular primarily to obfuscate, alienate, and glamorize," Tucker writes.
Accordingly, Tucker's institution plans to remove "artificial intelligence" and "machine learning" from its lexicon and adhere to these four principles that Tucker outlines instead:
(1) Be as specific as possible about what the technology in question is and how it works.
(2) Identify any obstacles to our own understanding of a technology that result from failures of corporate or government transparency.
(3) Name the corporations responsible for creating and spreading the technological product.
(4) Attribute agency to the human actors building and using the technology, never to the technology itself.
These seem like sound rules not just for replacing "machine learning" and "AI" but for countering all sorts of tech-company obfuscation and rodomontade. Media outlets will continue to be tempted to use terms that make them seem in the know about emerging trends or on the side of the "innovators" who are "creating the future"; for that same reason, refusal to play along will continue to be construed as ridiculous or cumbersome or cynical or somehow self-interested. But that is no reason to do tech companies' marketing for them.