Self-exclusion schemes
In New South Wales, Australia, facial recognition technology will now be used to allow people to “self-exclude” from gambling venues.
Cameras will scan patrons’ faces as they enter a venue and compare them to a database of problem gamblers who have consented to be part of the self-exclusion scheme. If a problem gambler is identified, an alert is sent to venue staff, who can then intervene and refer the person to support services.
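Mechanically, the matching step such a scheme implies is simple to sketch: reduce the face captured at the door to an embedding, compare it against the embeddings of enrolled self-excluders, and alert staff when the similarity clears some threshold. The Python sketch below is purely illustrative; the threshold, function names, and data structures are my assumptions, not the NSW system’s actual design.

```python
# Illustrative sketch only: how the matching step of a self-exclusion scheme could
# work in principle. The embedding source, threshold, and data structures are
# assumptions for this example, not the NSW system's actual design.
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # hypothetical cutoff for declaring a match

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_entrant(entrant_embedding: np.ndarray,
                  enrolled: dict[str, np.ndarray]) -> str | None:
    """Compare a face embedding captured at the door against the opt-in
    self-exclusion register; return the matched enrollee ID, if any."""
    best_id, best_score = None, 0.0
    for enrollee_id, embedding in enrolled.items():
        score = cosine_similarity(entrant_embedding, embedding)
        if score > best_score:
            best_id, best_score = enrollee_id, score
    if best_score >= SIMILARITY_THRESHOLD:
        return best_id  # staff are alerted and can intervene
    return None  # no match, but this entrant's face was still scanned and scored
```

The quiet premise is in that last line: every face that comes through the door gets captured and compared, whether or not its owner ever asked to be enrolled.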
This could seem like a benevolent enough program, helping prevent people with an addiction from harming themselves. But viewed another way, problem gamblers serve as the pretense for compiling a log of every person who enters a gambling venue — which would no doubt be of interest to financial institutions, employers, and insurance agencies — while producing a data set that can be used to refine facial recognition capabilities and abetting who knows what sort of phrenological efforts to correlate certain physical traits and behaviors with the propensity to gamble.
On Twitter, Evan Greer speculated that such technology could be used “to detect when someone at a slot machine is getting frustrated” and “give them a little payout to keep them gambling” — an extension of the already existing techniques that Natasha Dow Schüll described in Addiction by Design (2014). For example, the slot-club loyalty cards that double as unique identifiers:
Typically worn around the neck or attached to the wrist by colorful bungee cords, so-called loyalty cards connected gamblers to a central database that recorded the value of each bet they made, their wins and losses, the rate at which they pushed slot machine play buttons, when they took breaks, and what drinks and meals they purchased. Instead of earning points for the amount they wagered in one sitting, players now earned points for the amount they wagered over time. In effect, tracking technology brought low stakes “repeat players” into the scopes of casino managers, where formerly only high rollers had appeared.
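To make the shape of that record concrete, here is a minimal sketch of the per-player telemetry such a loyalty-card system might keep. The field names and the points formula are invented for illustration; no vendor’s actual schema is represented.

```python
# A guess at the shape of the per-player record described above; the field names
# and points formula are illustrative, not any casino vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class PlayerProfile:
    card_id: str                       # the "loyalty" card doubling as a unique identifier
    total_wagered: float = 0.0         # value of every bet, accumulated across visits
    net_win_loss: float = 0.0
    presses_per_minute: float = 0.0    # rate of play-button presses
    break_minutes: float = 0.0         # when and how long they stop playing
    purchases: list[str] = field(default_factory=list)  # drinks and meals
    points: float = 0.0

    def record_wager(self, amount: float, outcome: float) -> None:
        """Points accrue on lifetime wagering rather than a single sitting, which is
        what pulls low-stakes 'repeat players' into management's view."""
        self.total_wagered += amount
        self.net_win_loss += outcome
        self.points += amount * 0.01   # hypothetical 1 percent earn rate
```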
Of course, your face would be an even more efficient, inescapable, and unfalsifiable identifier, provided the recognition technology can be sufficiently improved. And it could work to track you anywhere you have a face. Indeed, as Schüll documented, cameras in casinos were already being used to surreptitiously enroll people into “loyalty programs” of a sort, as in, for example, “Bally’s method for tracking players regardless of their participation”:
The system incorporates biometric recognition into gambling machines via miniaturized cameras linked to a central database; when a player activates the machine without using a player card, the camera “captures the player’s image and stores it along with their game play,” creating a “John Doe” file. Although the casino does not know the patron’s actual name, it can track his behavior over time “for a total view of the customer’s worth.” “Invisible to the user,” the system ensures that the opportunity to cultivate a relationship with the uncarded patron will not be lost.
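In outline, that covert enrolment amounts to keying the same kind of profile on a face embedding rather than a card. A rough sketch, again with invented names and thresholds rather than Bally’s actual implementation:

```python
# Sketch of the covert "John Doe" enrolment described above: uncarded play is keyed
# to a face embedding instead of a loyalty card. Names, thresholds, and structures
# are assumptions for illustration, not Bally's actual implementation.
import uuid
import numpy as np

MATCH_THRESHOLD = 0.6                 # hypothetical similarity cutoff
john_doe_files: dict[str, dict] = {}  # profiles keyed by generated ID, no legal name attached

def attribute_session(face_embedding: np.ndarray, session_stats: dict) -> str:
    """Attach uncarded game play to an existing John Doe file, or open a new one."""
    for file_id, record in john_doe_files.items():
        stored = record["embedding"]
        similarity = float(np.dot(face_embedding, stored) /
                           (np.linalg.norm(face_embedding) * np.linalg.norm(stored)))
        if similarity >= MATCH_THRESHOLD:
            record["sessions"].append(session_stats)
            return file_id
    file_id = f"john-doe-{uuid.uuid4().hex[:8]}"
    john_doe_files[file_id] = {"embedding": face_embedding, "sessions": [session_stats]}
    return file_id  # a growing "total view of the customer's worth," name unknown
```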
Whether or not such camera systems already exist in Australia, the idea that they are necessary to protect gamblers from themselves becomes an alibi for how surveillance can be used by gaming companies to profitably identify and develop problem gamblers in the first place.
A similar logic comes into play with voice-recognition technologies. This recent Axios piece details how companies are developing software that purports to use “voice biomarkers” to identify conditions like depression from a recording of someone’s speech. It cites an April article in the New York Times that credulously reports on this emerging “AI” field, claiming that “the use of deep-learning algorithms can uncover additional patterns and characteristics, as captured in short voice recordings, that might not be evident even to trained experts.” If you pause a lot in speaking or sound “constricted,” this software may conclude you likely suffer from depression or schizophrenia, regardless of what a doctor thinks.
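In rough outline, what such products compute is something like the sketch below: crude acoustic statistics (a pause ratio, energy variability) fed into a classifier that emits a pseudo-probability. The features, weights, and thresholds are generic assumptions about how a “voice biomarker” pipeline tends to be assembled, not any vendor’s actual model.

```python
# Generic sketch of a "voice biomarker" pipeline: crude acoustic statistics fed to a
# classifier. Features, thresholds, and weights are invented for illustration; no
# vendor's actual model is represented here.
import numpy as np

def pause_ratio(waveform: np.ndarray, sample_rate: int,
                frame_ms: int = 25, silence_db: float = -40.0) -> float:
    """Fraction of frames whose energy falls below a silence threshold."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(waveform) // frame_len
    frames = waveform[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-10
    db = 20 * np.log10(rms / (np.max(rms) + 1e-10))
    return float(np.mean(db < silence_db))

def depression_score(waveform: np.ndarray, sample_rate: int) -> float:
    """A toy 'risk score': more pausing and flatter energy push the score up.
    This is exactly the kind of reductive proxy in question."""
    pr = pause_ratio(waveform, sample_rate)
    energy_var = float(np.var(waveform))
    logit = 3.0 * pr - 5.0 * energy_var - 1.0  # arbitrary illustrative weights
    return 1.0 / (1.0 + np.exp(-logit))        # sigmoid -> pseudo-probability
```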
It’s doubtful that such a system could “work,” particularly given the difficulties of rendering a mercurial and intersubjective condition like depression into a set of datafied benchmarks (let alone the usual automation problems of biased data sets, dubious methodologies, and ultimate indifference to causal explanation). As Os Keyes argues in this Real Life essay on diagnostic tech, diagnoses of psychiatric conditions are always political and always changing, but AI systems seek to cement definitions and mask the ideology involved in forming them. Given the damaging effects of false positives and negatives (“From as little as 20 seconds of free-form speech, we’re able to detect with 80% accuracy if somebody is struggling with depression or anxiety,” Kintsugi CEO Grace Chang tells Axios), it would be unethical to deploy such systems.
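The quoted “80% accuracy” is worth unpacking, because at realistic prevalence even a test that is 80 percent sensitive and 80 percent specific (a charitable reading of an ambiguous figure) mislabels far more people than it correctly flags. A back-of-the-envelope calculation, with a 10 percent prevalence assumed purely for illustration:

```python
# Back-of-the-envelope positive predictive value for the quoted "80% accuracy,"
# charitably read as 80% sensitivity and 80% specificity. The 10% prevalence is an
# assumption chosen only to illustrate the base-rate problem.
prevalence = 0.10
sensitivity = 0.80
specificity = 0.80

true_positives = sensitivity * prevalence                 # 0.08 of everyone screened
false_positives = (1 - specificity) * (1 - prevalence)    # 0.18 of everyone screened

ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'depressed' flags that are wrong: {1 - ppv:.0%}")  # about 69%
```

In that scenario, roughly two out of every three people the system labels as struggling are not.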
Mental health conditions are extremely susceptible to the sort of feedback loops machine learning systems enact; these systems would be capable of producing the depression (and the associated stigma) they seek merely to identify, and would likely produce it disproportionately in those who are already disadvantaged or marginalized in other ways — people who would be both misrepresented in the data sets and more frequently subjected to such evaluative systems, compounding the problem.
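The dynamic is easy to demonstrate in miniature: if one group starts out flagged at a slightly higher rate and the flags are recycled as “confirmed” labels for retraining, that group’s flag rate ratchets upward faster, with no underlying difference required. A toy simulation, with every number invented:

```python
# Toy simulation of a label feedback loop: flags are recycled as training labels,
# so a small initial disparity in flagging rates compounds over retraining rounds.
# All rates and constants are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical starting flag rates: group_b is slightly over-flagged at the outset.
flag_rate = {"group_a": 0.10, "group_b": 0.15}
FEEDBACK = 0.5  # how strongly recycled flags pull next round's rate upward

for round_ in range(1, 6):
    for group, rate in flag_rate.items():
        flags = rng.random(10_000) < rate  # this round's flags for 10,000 people
        # The flags are folded back in as "confirmed" positives, so the retrained
        # model's rate drifts upward in proportion to how often the group was flagged.
        flag_rate[group] = min(1.0, rate + FEEDBACK * flags.mean() * rate)
    print(round_, {g: round(r, 3) for g, r in flag_rate.items()})
```

After a handful of rounds the gap between the two groups has widened, purely as an artifact of the loop.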
Approaching mental illness by paying attention to a person’s speech, of course, has a long history. But to altogether ignore the content of what a person says and analyze its acoustics instead takes positivism to new heights, treating something experiential, emotional, and affective as something that can be empirically measured, bypassing conscious and unconscious alike. Being diagnosed with depression — in part, a disorder of the will — against your will, as though your own understanding were irrelevant to the condition, would seem to inevitably exacerbate it. Diagnostic technologies like “voice biomarkers” are self-exclusion schemes of a different sort.
Nonetheless, it is not hard to imagine these detection systems being rationalized as an aid to the afflicted and the vulnerable, necessary for their own good. But they will remain systems of control, capable of being installed anywhere sound can be recorded, ready to generate specious evidence that can be used against anyone at any time, without any kind of due process. It’s akin to social media platforms’ algorithmic systems meant to detect suicidal users, which Emma Bedor Hiland describes in this essay for The Hedgehog Review as “a form of techno-reliant medicalization: The algorithm is manufacturing medical data and information about its users based on the content of their posts.” That is, everyone on the platform is medicalized and subject to therapeutic ambush.
Like “emotion” detection systems, such diagnostic systems will classify anyone they come into contact with according to the labels for-profit companies choose to assign to particular data patterns, and their “accuracy” will be a matter of how effectively they can be instrumentalized and not how much their inferences correspond to people’s sense of their own states of mind. They are predicated on refusing people the capability of self-definition; instead they systematically categorize people against their will, not only in terms of already stigmatized traits and behaviors but in entirely opaque terms readable only by machines tasked with conducting even finer gradations of discrimination.
In a larger sense, this is part of the attempt by “AI” firms and their supporters to “use statistical methods to turn the diversity of lived experience into a single space of equivalence, ready for distant decision-making,” as Dan McQuillan notes in Resisting AI. “Whatever we think of specific AI applications, accepting AI means we are implicitly signing up for an environment of pervasive data surveillance and centralized control.” Whether or not an AI system “works” by its own standards is beside the point; such systems always work to ideologically authorize more surveillance.
The posited benefits of these kinds of automated systems are immediately cancelled out by the enormous potential for their abuse. “AI is sold as a solution to social problems, when what it is really doing is applying algorithmic morality judgements to target groups while obscuring the structural drivers of the very problems it is supposedly solving,” McQuillan writes. More than that, they function only by invalidating the subjectivity of the people targeted and subjecting them to inscrutable and unauditable classificatory regimes that have been devised primarily for economic exploitation. The more these systems are used, the more their simplistic definitions of complex, dynamic social phenomena will be reified and made operational.
The overlapping deployment of algorithmic prediction and decision-making systems renders individuals as statistical objects, making their consciousness a kind of useless remainder, an epiphenomenon with no relevance to how the world is administered. Self-reflection is superfluous under such conditions, easily construed as a discardable burden, an inconvenience, a barrier to the frictionless flow of information and activity that might carry us along, for better or for worse, depending on what that data foretells. This clarifies the point of “artificial intelligence”: It is not to produce machines that think but to spare humans the trouble of thought.