Porcha Woodruff, a resident of Detroit, Michigan, was eight months pregnant when police arrested her on charges of carjacking and robbery. As she was getting her children ready for school, six police officers arrived at her door with an arrest warrant. Woodruff initially thought it was a prank, questioning the absurdity of being accused of carjacking while heavily pregnant. She sent her children upstairs to tell her fiancé that she was being taken to jail. After being detained and questioned for 11 hours, Woodruff was released on a $100,000 bond; she was then admitted to the hospital for dehydration brought on by the ordeal.
It later emerged that Woodruff had been falsely identified by facial recognition technology: her image was mistakenly matched to surveillance footage of a woman at the gas station where the carjacking occurred. When shown a photo lineup, the victim selected Woodruff’s picture as the person associated with the robbery. Notably, the investigator’s report failed to mention that the woman in the video footage was not visibly pregnant. A month later, the charges against Woodruff were dismissed for insufficient evidence.
Woodruff’s case is the third known false arrest stemming from faulty facial recognition technology within the Detroit police department, and the sixth in the United States overall. Disturbingly, all six people falsely arrested were Black. Privacy experts and advocates have long raised concerns about the technology’s inability to accurately identify people of color, and about the privacy violations and dangers inherent in a system that claims to identify individuals solely from their facial features. Despite these warnings, law enforcement agencies in the US and around the world continue to engage with facial recognition products and firms, including Amazon’s Rekognition and Clearview AI.
Countries such as France, Germany, China, and Italy have also employed similar technology. Chinese police, for instance, have utilized mobile data and facial recognition to track protestors. In France, legislators passed a bill earlier this year granting police the authority to use AI in public spaces ahead of the Paris 2024 Olympics, making it the first EU country to approve the use of AI surveillance (though real-time facial recognition remains prohibited). Wired reported on controversial proposals last year that would enable EU police forces to share photo databases containing individuals’ faces, potentially creating an extensive biometric surveillance infrastructure.
Woodruff’s lawsuit has reignited calls in the US for a complete ban on police and law enforcement use of facial recognition. In response, the Detroit police have placed new restrictions on the technology, such as prohibiting facial recognition images from being used in lineups and requiring that a detective not involved in the case present lineup photos to witnesses. Activists argue these measures are insufficient. Albert Fox Cahn of the nonprofit Surveillance Technology Oversight Project asserts that only a complete ban can prevent false arrests driven by facial recognition, and that many Americans have likely been wrongly accused without ever receiving justice, given how racist and error-prone these systems are.
As governments worldwide grapple with the implications of generative AI, the well-documented harms associated with existing AI applications, such as surveillance systems, are often overlooked or excluded from discussions. Even the EU AI Act, which proposes limitations on high-risk uses of AI like facial recognition, has been partially overshadowed by the hype surrounding generative AI. Sarah Chander, a senior policy adviser at the international advocacy organization European Digital Rights, points out that the emergence of generative AI, particularly ChatGPT, has complicated conversations about the actual harms caused by AI technologies.
Like other AI-based systems, facial recognition is only as accurate as the data it is trained on, and it often reflects and perpetuates the biases of its creators. Amnesty International has highlighted that facial recognition systems are trained predominantly on images of white faces, resulting in poorer accuracy when identifying people who are Black, female, or aged between 18 and 30. While false positives occur across the board, a study by the National Institute of Standards and Technology found that false positive rates were significantly higher for individuals from West and East Africa and East Asia, and lowest for those from Eastern Europe, with up to a hundredfold difference in false positive rates between countries.
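To see why that hundredfold gap matters in practice, consider a rough back-of-the-envelope sketch (the gallery size and baseline rate below are hypothetical, not figures from the NIST study). In a one-to-many search, a probe photo is compared against every face in a police database, so even a tiny per-comparison false positive rate gets multiplied by the size of the gallery, and a 100x disparity in that rate becomes a 100x disparity in expected wrongful matches:

```python
# Illustrative sketch with assumed numbers: how a 100x gap in
# per-comparison false positive rates (FPR) scales in a one-to-many
# database search. Neither value below is from the NIST study.

GALLERY_SIZE = 500_000          # faces in a hypothetical police database
FPR_LOW = 1e-6                  # assumed per-comparison FPR, best-case group
FPR_HIGH = FPR_LOW * 100        # NIST reported disparities of up to 100x

for label, fpr in [("lowest-FPR group", FPR_LOW),
                   ("highest-FPR group", FPR_HIGH)]:
    # Expected number of innocent people spuriously flagged in one search:
    expected_false_matches = GALLERY_SIZE * fpr
    # Probability that at least one innocent person is flagged:
    p_any_false_match = 1 - (1 - fpr) ** GALLERY_SIZE
    print(f"{label}: ~{expected_false_matches:.1f} expected false matches "
          f"per search; P(at least one) = {p_any_false_match:.1%}")
```

Under these assumed numbers, a single search is all but certain to flag at least one innocent person from the high-FPR group (about 50 expected false matches), while the low-FPR group produces a false match in fewer than half of searches, which is one way to make the disparity the NIST study describes concrete.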
Critics argue that even if facial recognition technology were perfectly accurate, it would still pose significant risks. Civil liberties groups assert that the technology could create an extensive, limitless surveillance network, eroding privacy in public spaces: individuals can be identified wherever they go, even while engaging in constitutionally protected activities such as attending protests or places of worship. In the aftermath of the US Supreme Court’s decision to overturn federal abortion protections, the technology is particularly dangerous for people seeking reproductive care. Furthermore, certain facial recognition systems, including Clearview AI’s, rely on images scraped from the internet without consent, meaning that social media pictures, professional headshots, and other publicly available photos can be used to train systems that are subsequently used to criminalize individuals. Several European countries, including Italy and…