A critical report on the ShotSpotter gunshot detection system issued today by the City of Chicago’s Inspector General (IG) is the latest indication of deep problems with the gunshot detection company and its technology, including its methodology, effectiveness, impact on communities of color, and relationship with law enforcement. The report questioned the “operational value” of the technology and found that it increases the incidence of stop and frisk tactics by police officers in some neighborhoods.
The IG’s report follows a similarly critical report and legal filing by the Northwestern School of Law’s MacArthur Justice Center and devastating investigative reporting by Vice News and the Associated Press. Last week, the AP profiled Michael Williams, a man who spent a year in jail on murder charges based on evidence from ShotSpotter before having his charges dismissed when prosecutors admitted they had insufficient evidence against him.
ShotSpotter installs 20 to 25 microphones per square mile in the cities it serves, and uses those microphones to try to identify and locate the sound of gunshots. In the past, we have scrutinized this company and its technology from a privacy perspective. Placing live microphones in public places raises significant privacy concerns. After looking at the details of ShotSpotter’s system, we didn’t think it posed an active threat to privacy, but we were concerned about the precedent it set (and others agreed).
But it turns out that aural privacy is not the main problem with ShotSpotter. The technology raises several other significant civil liberties problems.
First, as the MacArthur Justice Center details, ShotSpotter is deployed overwhelmingly in communities of color, which already disproportionately bear the brunt of a heavy police presence. The police say they pick neighborhoods for deployment based on where the most shootings are, but there are several problems with that:
- ShotSpotter false alarms send police on numerous fruitless trips (in Chicago, more than 60 times a day) into communities, on high alert and expecting to confront a potentially dangerous situation. Given the already tragic number of shootings of Black people by police, that is a recipe for trouble.
- Indeed, the Chicago IG’s analysis of Chicago police data found that the “perceived aggregate frequency of ShotSpotter alerts” in some neighborhoods leads officers to engage in more stops and pat downs.
- The placement of sensors in some neighborhoods but not others means that the police will detect more incidents (real or false) in places where the sensors are located. That can distort gunfire statistics and create a circular statistical justification for over-policing in communities of color.
Second, ShotSpotter’s methodology is used to provide evidence against defendants in criminal cases, but isn’t transparent and hasn’t been peer-reviewed or otherwise independently evaluated. That simply isn’t acceptable for data that is used in court.
The company’s sensors automatically send audio files to human analysts when those sensors detect gunshot-like sounds. Those analysts then decide whether the sounds are gunshots or other loud noises such as firecrackers, car backfires, or construction noises. They also triangulate the timing of when sounds reach different microphones to try to establish a location for the noise, and if it is believed to be the sound of a gunshot, they make an effort to figure out how many shots were fired and what kind of gun was involved (such as a pistol versus a fully automatic weapon).
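To make the triangulation step concrete: the general idea behind acoustic location is time difference of arrival (TDOA). A sound emitted at one point reaches nearer microphones before farther ones, and the pattern of arrival-time differences narrows down where it came from. The sketch below is a minimal, illustrative version of that general technique, not ShotSpotter’s actual algorithm; the sensor layout, grid search, and assumed speed of sound are all hypothetical.

```python
import math

C = 343.0  # assumed speed of sound in m/s (varies with temperature)

# Hypothetical sensor positions in meters -- not ShotSpotter's real geometry.
SENSORS = [(0.0, 0.0), (400.0, 0.0), (0.0, 400.0), (400.0, 400.0)]

def arrival_times(source, t0=0.0):
    """Time at which each sensor hears a sound emitted at `source` at t0."""
    return [t0 + math.dist(source, s) / C for s in SENSORS]

def locate(times, step=5.0, extent=500.0):
    """Grid-search for the point whose implied emission times agree best.

    For a candidate point p, each sensor implies an emission time
    t_i - dist(p, sensor_i)/C. At the true source these all coincide,
    so we pick the candidate that minimizes their spread.
    """
    best, best_spread = None, float("inf")
    x = 0.0
    while x <= extent:
        y = 0.0
        while y <= extent:
            implied = [t - math.dist((x, y), s) / C
                       for t, s in zip(times, SENSORS)]
            mean = sum(implied) / len(implied)
            spread = sum((e - mean) ** 2 for e in implied)
            if spread < best_spread:
                best, best_spread = (x, y), spread
            y += step
        x += step
    return best

# Simulate a bang at a known point and recover its location.
est = locate(arrival_times((120.0, 230.0)))
```

Note that the emission time itself is unknown to the system, which is why the sketch scores candidates by how consistently the sensors agree rather than by matching absolute times; a real deployment would also have to contend with echoes, wind, and misclassified sounds, which is where the human-analyst judgment calls described above come in.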
ShotSpotter portrays all of this as a straightforward and objective process, but it is anything but. Vice News and the AP note examples of the company’s analysts changing their judgments on all of the above types of results (which ShotSpotter disputes). In addition, the company uses AI algorithms to assist in the analysis — and as with all AI algorithms, that raises questions about reliability, transparency, and the reproducibility of results. The company turned down a request by the independent security technology research publication IPVM to carry out independent tests of its methodologies.
Further calling into question the appropriateness of ShotSpotter evidence for use in court is a third problem: the company’s apparent tight relationship with law enforcement. A ShotSpotter expert admitted in a 2016 trial, for example, that the company reclassified sounds from a helicopter to a bullet at the request of a police department customer, saying such changes occur “all the time” because “we trust our law enforcement customers to be really upfront and honest with us.” ShotSpotter also uses reports from police officers as “ground truth” in training its AI algorithm not to make errors. A close relationship between ShotSpotter and police isn’t surprising — police departments are the company’s customers and the company needs to keep them happy. But that isn’t compatible with the use of its tool as “objective data” used to convict people of crimes.
Finally, still up for debate is whether ShotSpotter’s technology is even effective. We can argue over a technology’s civil liberties implications until the end of time, but if it’s not effective there’s no reason to bother. A number of cities have stopped using the technology after deciding that ShotSpotter creates too many false positives (reporting gunshots where there were none) and false negatives (missing gunshots that did take place). The MacArthur Justice Center’s report found that in Chicago, initial police responses to 88.7 percent of ShotSpotter alerts found no incidents involving a gun. The company disputes that this means its technology is inaccurate, pointing out that someone can shoot a gun but leave no evidence behind. But a review of the accuracy debate by IPVM concluded that “while public data does not enable a definitive estimation of false alerts,” the problem “is likely significantly greater than what ShotSpotter insinuates” because the company “uses misleading assumptions and a misleading accuracy calculation” in its advertised accuracy rates.
Given all of these problems, communities and the police departments serving them should reject this technology, at least until these problems are addressed, including through full transparency into its operation and efficacy.