In the wake of protests over police brutality, senators Cory Booker (D-New Jersey) and Kamala Harris (D-California) and representatives Karen Bass (D-California) and Jerrold Nadler (D-New York) introduced a police reform bill in Congress that includes limits on the use of facial recognition software. But not everyone is pleased. ACLU senior legislative counsel Neema Guliani, for example, noted that facial recognition algorithms are typically less accurate on darker skin tones.
Wired reports that Amazon, IBM and Microsoft have stopped selling facial recognition tools to U.S. police and have asked Congress for regulations, while “several cities, including San Francisco, banned use of the technology by government agencies.” Advocates of regulating facial recognition pointed out that “a five-page summary of the bill’s main provisions doesn’t mention” the technology. Under the bill, applying facial recognition to police body camera footage would require both a judge’s warrant and the existence of “imminent threats or serious crimes.”
But Jameson Spivack, a policy associate at Georgetown’s Center on Privacy & Technology, said that “those restrictions wouldn’t affect many of the ways facial recognition is used by U.S. law enforcement,” because the technology is “more commonly applied to footage from sources other than body or dash cams, such as surveillance cameras, sometimes solicited from private citizens or businesses.”
If the legislation is passed, he added, facial recognition software “companies could go right back to selling to the police and not much will change.”
Surveillance Technology Oversight Project founder Albert Fox Cahn said that Americans of color are at higher risk of wrongful arrest than white Americans, a concern borne out by a National Institute of Standards and Technology report finding that “many commercial facial recognition algorithms reported more false positives for American Indian, black, and Asian people.”
Other academic studies found similar results for “services offered by IBM, Microsoft, and Amazon that try to identify a person’s gender from their face.”
MIT researcher Joy Buolamwini, who founded the Algorithmic Justice League, coauthored a paper “suggesting the creation of a new federal agency to regulate facial recognition, modeled on the Food and Drug Administration.”
Elsewhere, Wired reports that an OpenAI report “suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions.” Because “resolving social trade-offs requires that many different voices be heard,” Wired says, “citizens should have a say.”
For inspiration, it points to the ancient Athenians, who built citizen-run institutions, most notably the Council of Five Hundred, “a deliberative body in charge of all decision-making, from war to state finance to entertainment.” Simple organizational rules “facilitated broad participation, knowledge aggregation, and citizen learning.”
“A citizen council for algorithms modeled on the Athenian example would represent the entire American citizen population,” it says. “Athens’ democracy reminds us that we have been outsourcing governance for two and a half millennia, first to kings, then to experts, and now to machines. This is an opportunity to reverse the trend.”