Automatic outs
Earlier this week, the White House’s Office of Science and Technology Policy released what it calls a “Blueprint for an AI Bill of Rights.” Not only does the name mix policy-cliché metaphors (are we building something or amending the U.S. Constitution?) but it also embeds the obfuscating hype term “AI” rather than pointing to automated systems and the human designers behind them. The White House describes the document as a “framework” that offers “five principles that should guide the design, use, and deployment of automated systems,” and also as a “handbook for anyone seeking to incorporate these protections into policy and practice.” So ultimately it is a handbook for a blueprint for a framework.
No aspect of this “bill of rights” is actually enforceable. The legal disclaimer on page two couldn’t be more explicit: “The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument … These principles are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities.” Instead the report is meant more as a “message: I care” from the Biden administration to signal good intentions without doing anything to interfere with how automated systems are developed, marketed, and deployed by private companies, which often operate on an international scale in inaccessible jurisdictions.
Because it has no enforcement protocols behind it, the report is left to make ethical pronouncements — e.g., “You should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse” — that serve only to remind readers that such transgressions are already happening. You should be protected, but you’re not. Many of the report’s suggestions have been crafted in direct response to abusive practices that have already been observed, so in a sense, the document might be considered a blueprint for how the likes of Facebook, Palantir, and Google — among the many tech firms consulted — can keep up with their peers in violating the report’s cardinal principles in search of leverage and profit.
Companies haven’t adopted harmful forms of data collection and implementation out of carelessness; they do it because they perceive a competitive advantage in them. The companies would be derelict in their fiduciary duty if they discontinued these practices, given their legality. Government agencies — if you believe they are not out to reproduce the status quo and are capable of serving “the people” — could possibly abide by the report’s guidelines, that is, until they are supplanted by a more “efficient” private-sector alternative that exploits the conveniences and advantages of automation.
The principles detailed in the report for a more ethical approach to automation are all laudable enough. A Vox piece by Sigal Samuel sums them up efficiently:
AI should be safe and effective. It shouldn’t discriminate. It shouldn’t violate data privacy. We should know when AI is being used. And we should be able to opt out and talk to a human when we encounter a problem.
But if those self-evident principles of human dignity and decency were integral to American society, a document like the AI Bill of Rights wouldn’t be necessary.
The report spends little time explaining why these principles are so routinely and readily discarded, and it doesn’t examine why violating privacy, exercising discrimination, and cloaking it all in “trade secrecy” are good for business. It certainly has no ideas for how to disincentivize them: just a lot of statements of what system designers “should” do, provided they weren’t operating under the constraints and contradictions of capitalism. Many of the recommendations seem to overlook that automated systems depend on coercion and non-optionality to work; they are premised on the idea that it is better to collect data about people than to solicit their opinion, and that decisions that leave humans out of the loop are more lucrative than following due process.
In other words, many systems are automated precisely to violate the recommendations and expectations in the report; much like the corporate structure itself, they allow companies to disguise responsibility for decisions and outcomes and veil exploitation in false neutrality. Many automated systems are opaque and confusing by design to prevent those subjected to them from “gaming” them (i.e., escaping their coercion). They process people as data to avoid having to confront them as discrete human beings that can’t be abstracted into quantities or proxies. Depriving people of autonomy is often the main point with systems that work to, for example, deny people benefits or predictively police their neighborhoods or implement price discrimination or assign them to social categories against their will, so developing standards for determining whether such systems are “effective” is beside the point — they work by being harmful, and it would be much better to sabotage such systems than to benchmark them.
The report’s refusal to take an adversarial stance toward the violators, despite the extensive documentation of their abuses that it draws on, becomes increasingly exasperating, especially if you are already familiar with their prevalence and diversity. But if you aren’t aware of how pervasive automated systems are and haven’t been following the interrelated critiques of surveillance and invasive data collection; biased data sets; the recursive reinforcement of stereotypes, inequality, and other forms of social stigma and disadvantage; deceptive and opaque disclosures; coercive forms of algorithmic management and discrimination; and the numerous revelations of indifference and malfeasance toward the people who are subjected to automated systems, then this report at least offers a chance to get caught up.
This, and not policy, seems to be its main purpose, as Samuel points out. It takes the conventional journalistic form of tech critique and inverts it: Rather than conclude with platitudes and open-ended gestures toward a solution, these serve as a pretense to catalog what automated systems have already perpetrated and provide examples of how their threats have been carried out. You could skip past the principles and the litany of “shoulds” and read the articles linked in the footnotes, which chronicle outrage after outrage: “A patient was wrongly denied access to pain medication when the hospital’s software confused her medication history with that of her dog’s.” “A formal child welfare investigation is opened against a parent based on an algorithm and without the parent ever being notified that data was being collected and used as part of an algorithmic child maltreatment risk assessment.” “An algorithm designed to identify patients with high needs for healthcare systematically assigned lower scores (indicating that they were not as high need) to Black patients than to those of white patients.” And on and on.
Presumably the subjunctive mood of the report is also meant to suggest what demands people should make when they begin to protest automated systems in the future. It implicitly asks readers to supply the political will and logic that the document itself elides. But in continually pointing to the systems themselves rather than their designers and implementers, the AI Bill of Rights threatens to direct whatever political energy it inspires off-target. It’s critical not just to amend the systems that exist but also to understand why they were ever built and permitted in the first place.