One Bad Algorithm? Advocates Say Facial Recognition Reveals Systemic Racism in AI Technology
The controversy over police use of facial recognition technology has intensified after a Black man in Michigan revealed he was wrongfully arrested because of a false match by the software. Detroit police handcuffed Robert Williams in front of his wife and daughters after facial recognition software falsely identified him as a shoplifting suspect. Researchers have found that facial recognition software is up to 100 times more likely to misidentify people of color than white people. This week, Boston's city council voted to ban the technology's use by city government, and Democratic lawmakers introduced a bill to bar its use by federal law enforcement. "This is not an example of one bad algorithm. Just like instances of police brutality, it is a glimpse of how systemic racism can be embedded into AI systems like those that power facial recognition technologies," says Joy Buolamwini, founder of the Algorithmic Justice League.