ACLU test reveals Amazon Rekognition misidentified 28 Congress members as criminals, showing racial bias

| Importance: 8/10 | Status: confirmed

On July 26, 2018, the American Civil Liberties Union (ACLU) released the results of a test showing that Amazon's Rekognition facial recognition software incorrectly matched 28 members of Congress with mugshots from a database of arrest photos. The test, which cost only $12.33 to run, documented both significant accuracy problems and racial bias in Amazon's surveillance technology: nearly 40 percent of the false matches were of people of color, even though people of color made up only 20 percent of Congress. The findings provided concrete evidence of the dangers of deploying facial recognition technology in law enforcement and prompted immediate Congressional demands for a meeting with Amazon CEO Jeff Bezos.

Test Methodology

The ACLU built a face database from 25,000 publicly available arrest photos, then used Rekognition to compare public photos of every then-current member of the House and Senate against that database. Critically, the ACLU used the default match settings that Amazon ships with Rekognition, the same settings a law enforcement agency would get if it deployed the technology without the technical expertise to adjust confidence thresholds.
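The ACLU has not published its exact code, but the mechanics of such a test map directly onto Rekognition's public API. The sketch below, using Amazon's boto3 SDK, is a minimal illustration under assumed names: the collection ID, file paths, and region are placeholders, not details from the ACLU's experiment.

```python
import boto3

# Assumed region and resource names; purely illustrative.
rekognition = boto3.client("rekognition", region_name="us-east-1")

# 1. Create a collection to hold face vectors from the arrest photos.
rekognition.create_collection(CollectionId="arrest-photos")

# 2. Index each mugshot into the collection (one call per image).
with open("mugshot_0001.jpg", "rb") as f:
    rekognition.index_faces(
        CollectionId="arrest-photos",
        Image={"Bytes": f.read()},
        ExternalImageId="mugshot_0001",
    )

# 3. Search a legislator's public photo against the collection.
#    Omitting FaceMatchThreshold leaves the service default (80 percent),
#    the same out-of-the-box setting the ACLU test relied on.
with open("member_photo.jpg", "rb") as f:
    response = rekognition.search_faces_by_image(
        CollectionId="arrest-photos",
        Image={"Bytes": f.read()},
    )

for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], round(match["Similarity"], 1))
```

Any hit returned by the final call is a "match" at the default threshold, which is all the test needed to surface the 28 false positives.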

The test was straightforward, demonstrating that anyone with access to Rekognition and publicly available photos could build a surveillance system with documented accuracy problems. At a total cost of $12.33, it also highlighted how accessible and affordable Amazon had made powerful surveillance technology, regardless of its reliability.

Documented Racial Bias

The test revealed stark racial disparities in Rekognition’s error rates. Nearly 40 percent of the false matches were of people of color, even though they made up only 20 percent of Congress. Six members of the Congressional Black Caucus were among those falsely matched with mugshots, including civil rights legend Rep. John Lewis of Georgia.
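Taking the reported percentages at face value, the disparity is easy to quantify, as the quick check below shows. The count of 11 members of color is an inference from the published figures (roughly 40 percent of 28), not a number quoted from the study itself.

```python
# Back-of-the-envelope check of the disparity the ACLU reported.
false_matches_total = 28
false_matches_poc = 11        # inferred: ~40% of 28 false matches
congress_share_poc = 0.20     # people of color were ~20% of Congress

error_share_poc = false_matches_poc / false_matches_total
print(f"Share of false matches affecting members of color: {error_share_poc:.0%}")  # 39%
print(f"Overrepresentation vs. share of Congress: {error_share_poc / congress_share_poc:.1f}x")  # 2.0x
```

Members of color were misidentified at roughly twice the rate their share of Congress would predict.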

This racial bias was particularly troubling given Amazon’s aggressive marketing of Rekognition to law enforcement and immigration agencies. The disproportionate false positive rate for people of color meant that the technology, if widely deployed by police, would subject Black and Brown communities to higher rates of false accusations, wrongful stops, and potentially wrongful arrests.

The ACLU noted that this wasn’t a hypothetical concern—police departments were already using Rekognition, and Amazon was actively pitching it to agencies like ICE that were targeting immigrant communities of color. The test demonstrated that deploying this technology in real-world law enforcement would exacerbate existing patterns of discriminatory policing.

Congressional Response

Representatives Jimmy Gomez and John Lewis, along with other members who were falsely identified in the test, wrote to Amazon CEO Jeff Bezos demanding a meeting to discuss Amazon’s face surveillance product. The letter, signed by multiple members of Congress, expressed serious concerns about the technology’s accuracy and the implications of Amazon selling it to law enforcement.

The fact that members of Congress themselves had been falsely matched with criminal mugshots helped crystallize the dangers of the technology in a visceral way. If Rekognition couldn’t accurately identify well-documented public figures like members of Congress, how reliable would it be when used by police to identify ordinary citizens—particularly people of color from communities already subject to over-policing?

Senator Edward Markey and Representatives Luis Gutiérrez and Mark DeSaulnier also wrote to Bezos with detailed questions about Rekognition's accuracy and its sale to law enforcement. The 28 false matches included Democrats and Republicans, men and women, and legislators of all ages from across the country, underscoring that misidentification was not confined to any single group, even as the racial bias in error rates remained pronounced.

Amazon’s Response and Deflection

Amazon responded to the ACLU test by arguing that the ACLU should have used a higher confidence threshold than the default 80 percent setting, saying it recommended 99 percent for law enforcement applications. This response was widely criticized as deflection: 80 percent was the threshold Rekognition shipped with, and many law enforcement agencies would likely run the default settings without understanding the technical implications of confidence thresholds.
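In API terms, the dispute comes down to a single optional parameter. A hedged sketch, reusing the assumed collection and placeholder file from the methodology example above; the 99 percent value reflects the threshold Amazon said it recommends for law enforcement:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("member_photo.jpg", "rb") as f:
    image_bytes = f.read()

# Default behavior: FaceMatchThreshold falls back to 80 percent.
loose = rekognition.search_faces_by_image(
    CollectionId="arrest-photos",
    Image={"Bytes": image_bytes},
)

# Amazon's recommendation for law enforcement: 99 percent.
strict = rekognition.search_faces_by_image(
    CollectionId="arrest-photos",
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=99.0,
)

print(len(loose["FaceMatches"]), "matches at the default threshold")
print(len(strict["FaceMatches"]), "matches at 99 percent")
```

Nothing in the API requires the stricter value; an agency that never sets the parameter gets exactly the 80 percent behavior the ACLU tested.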

The ACLU countered that this amounted to an implicit acknowledgment that Rekognition shouldn't be used for law enforcement in its out-of-the-box configuration. If the technology required careful technical calibration to avoid unacceptable error rates, that raised fundamental questions about whether it should be sold to police departments at all.

Broader Implications for AI Surveillance

The ACLU’s test became a landmark demonstration of racial bias in commercial facial recognition systems. It provided concrete, easily understandable evidence of a problem that researchers had been documenting: facial recognition systems trained primarily on white faces performed significantly worse on people of color, particularly Black women.

The test also highlighted the dangers of allowing tech companies to set their own standards for surveillance technology accuracy and deployment. Amazon’s willingness to market Rekognition to law enforcement despite known accuracy problems and racial bias illustrated the need for external regulation and oversight of AI surveillance systems.

The revelation that such a significant test could be conducted for just $12.33 demonstrated the urgent need for independent testing and validation of commercial surveillance technologies before they were deployed by government agencies. It also showed that civil society organizations and researchers could play a critical role in exposing problems that tech companies had little incentive to publicize.

Impact on Facial Recognition Debate

The ACLU’s Rekognition test became one of the most cited examples in debates about facial recognition technology over the subsequent years. It provided concrete ammunition for advocates calling for moratoria on law enforcement use of facial recognition and helped shift the conversation from abstract concerns about surveillance to documented evidence of racial bias and misidentification.

The test contributed to growing pressure on Amazon that would eventually lead to the company announcing a one-year moratorium on police use of Rekognition in June 2020, following the George Floyd protests. It also influenced several cities to ban government use of facial recognition technology and prompted Congressional hearings on AI bias and surveillance.

By demonstrating that Amazon’s technology could falsely identify sitting members of Congress as criminals—with a disproportionate impact on members of color—the ACLU test made the dangers of facial recognition surveillance impossible for lawmakers and the public to ignore. It represented a crucial intervention in the debate about whether and how such powerful surveillance tools should be regulated before being deployed against vulnerable communities.
