ShotSpotter Accuracy Crisis - Chicago Inspector General Report and Williams Case

| Importance: 8/10

The reliability and accuracy of ShotSpotter’s gunshot detection technology face a major crisis in August 2021 as the Chicago Office of Inspector General releases a damning report on the system’s effectiveness, while the Michael Williams case exposes evidence that ShotSpotter analysts modified forensic reports at police request.

The Chicago OIG examines ShotSpotter alerts between January 1, 2020 and May 31, 2021, finding that only about 9% of alerts—roughly 1 out of every 11—led to evidence of a gun-related criminal offense. This contradicts ShotSpotter’s marketing claims of 97% accuracy and raises questions about the value of technology costing cities millions of dollars annually.

A separate analysis by the MacArthur Justice Center, covering April 15, 2021 through April 13, 2022, finds similarly dismal performance: 90.4% of ShotSpotter dispatches did not lead police to evidence of a gun-related crime on arrival, averaging 87 unfounded police deployments every day. In Houston, just over 2% of ShotSpotter alerts between December 2020 and September 2021 resulted in arrests, a total of 54 arrests from 2,330 alerts.

The Michael Williams case exposes still more troubling problems. Williams, a 63-year-old Chicago man, spends nearly a year in jail after being falsely accused of murder based on unreliable ShotSpotter evidence. Court documents reveal that Chicago prosecutors used audio picked up by ShotSpotter sensors as critical evidence in charging Williams with shooting a man inside a car in 2020.

Critically, documents from the Williams case and other trials suggest that ShotSpotter analysts frequently modify alerts at the request of police departments. While the company claims it “did not change the location of the gunfire by a mile,” the fact that analysts alter forensic reports based on law enforcement requests raises fundamental questions about the technology’s role as objective evidence. The company’s practice of revising initial automated alerts through human analyst review creates opportunities for confirmation bias and evidence manipulation.

When ShotSpotter learns of the prosecutor’s theory in the Williams case, the company reminds prosecutors that ShotSpotter’s evidence is only guaranteed to locate shots fired outdoors, not inside vehicles or buildings—directly contradicting the prosecution’s theory. Prosecutors ultimately withdraw all ShotSpotter evidence and drop charges against Williams due to insufficient evidence.

Williams later files a lawsuit alleging that investigating officers “put blind faith in ShotSpotter evidence they knew or should have known was unreliable.” The case becomes a symbol of the dangers of over-reliance on algorithmic technology in criminal investigations and prosecutions.

The simultaneous release of the Chicago OIG report and the exposure of evidence modification in the Williams case creates a credibility crisis for ShotSpotter. The company's claimed 97% accuracy rate turns out to rest not on independent verification but on voluntary customer feedback, essentially "just a tally of customer complaints" rather than a genuine measurement of accuracy.

These revelations contribute to a wave of cities reconsidering or terminating their ShotSpotter contracts, as officials question whether the technology’s costs—both financial and in terms of over-policing communities of color—justify its limited demonstrated benefits.

