Predictive Policing: Gazing into Law Enforcement’s Crystal Ball



A deluge of military-grade weaponry and armored vehicles isn’t the only crossover from the battlefields of Iraq and Afghanistan to creep into domestic law enforcement. Increasingly, police departments across the country rely on an unproven practice called predictive policing, which combines elements of a Philip K. Dick novel with the algorithm Wal-Mart uses to forecast demand for strawberry Pop-Tarts ahead of a storm.

NYPD Commissioner Bill Bratton, the same man who helped pioneer the Broken Windows theory, and his acolytes now peddle predictive policing as the next generation of crime-fighting, even crime prevention. It works, according to its advocates, by analyzing data such as arrest records and open court cases to build models of the probability of future criminal activity. In other words, it predicts where crime is likely to take place and, sometimes, who is likely to commit it.
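
To make the mechanics concrete, here is a deliberately simplified sketch, in Python, of the kind of scoring model its advocates describe. Every feature name and weight below is invented for illustration; the real systems are proprietary black boxes.

from dataclasses import dataclass

@dataclass
class PersonRecord:
    prior_arrests: int     # arrest history
    open_cases: int        # pending court cases
    shooting_victim: bool  # previously a victim of gun violence

def risk_score(record: PersonRecord) -> float:
    """Return a made-up 'risk' value; higher scores get flagged sooner."""
    score = 2.0 * record.prior_arrests   # invented weight
    score += 1.5 * record.open_cases     # invented weight
    if record.shooting_victim:
        score += 3.0                     # invented weight
    return score

people = {
    "A": PersonRecord(prior_arrests=2, open_cases=1, shooting_victim=False),
    "B": PersonRecord(prior_arrests=0, open_cases=0, shooting_victim=True),
}
ranked = sorted(people, key=lambda name: risk_score(people[name]), reverse=True)
print(ranked)  # ['A', 'B'] -- a "heat list" ordering under these invented weights

Note that nothing in this sketch requires suspicion of a crime: the score rises simply for having been a victim or for having open cases.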

But predictive policing doesn’t end there. Using commercially available programs such as PredPol and HunchLab, law enforcement then determines where and how to allocate resources, and whom to monitor and interact with, based on each program’s proprietary results.

If that sounds like profiling, it’s because that’s exactly what it is. Whatever happened to investigating criminal activity based on actual suspicion of wrongdoing? Instead, police act on predictions that mostly target people already tied to the justice system, such as convicted criminals and victims of crimes, and the people in their social circles.

This means that criminal activity in heavily policed areas, such as densely populated urban centers, is more likely to be uncovered than similar offenses elsewhere, creating a self-fulfilling cycle that funnels more cops and surveillance into certain neighborhoods. This can distort the perception of where crimes occur, further biasing future policing. Given that blacks are almost four times more likely to be arrested for marijuana offenses than whites, despite similar rates of usage, the troubling implications of this approach are obvious.
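
A toy simulation makes the feedback loop easy to see. Suppose two neighborhoods have identical true offense rates, offenses are only recorded when a patrol is present, and patrols are allocated in proportion to last period’s recorded crime. All of the numbers below are invented:

# Two areas with the SAME true offense rate; only patrolled offenses
# get recorded, and patrols follow recorded crime. All values invented.
TRUE_OFFENSES = 100          # actual offenses per period, in both areas
TOTAL_PATROLS = 20
DETECTION_PER_PATROL = 0.02  # share of offenses each patrol uncovers

recorded = {"downtown": 12.0, "suburb": 8.0}  # small initial bias

for period in range(5):
    total = sum(recorded.values())
    for area in recorded:
        patrols = TOTAL_PATROLS * recorded[area] / total
        recorded[area] += TRUE_OFFENSES * DETECTION_PER_PATROL * patrols
    share = recorded["downtown"] / sum(recorded.values())
    print(f"period {period}: downtown share of recorded crime = {share:.0%}")

Every period prints a 60 percent downtown share even though true offending is split 50/50: because the model’s output controls where evidence is gathered, the initial bias is reproduced indefinitely rather than corrected.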

Being on the “Heat-List”

The popularity of predictive policing with police departments in major cities like Los Angeles, New York, and Chicago follows the current trend of going where Big Data leads. Predictably, it leads (and keeps) law enforcement’s attention focused narrowly on underserved communities.

In February 2014, for example, the Chicago PD applied a predictive computing model to generate a “heat list” of approximately 400 people deemed most likely to be involved in criminal activity. The selected individuals supposedly exhibited factors implying they were among the city’s residents most likely to be either a victim or perpetrator of violence. Some on the list were children; others had no criminal history.

“Are people ending up on this list simply because they live in a crappy part of town and know people who have been troublemakers? … If so, are we just closing ourselves off to this small subset of people?” asks Hanni Fakhoury, a staff attorney at the Electronic Frontier Foundation who has written about the CPD’s use of predictive policing.

One man with no arrest record expressed surprise after police visited him and told him he was on the list, according to the Chicago Tribune. A 17-year-old girl was likewise surprised to learn she was included, having done nothing wrong. But if you ask the police, they tell a different story. A Chicago police officer is quoted as saying, “If you end up on that list, there’s a reason you’re there.”

But that might not be the case. Many of the principles underlying Chicago’s predictive policing are based on controversial academic research and flimsy theories that seem closer to Six Degrees of Kevin Bacon than actual detective work.

Andrew Papachristos, an associate professor of sociology at Yale University, studied homicide statistics in a handful of Chicago neighborhoods and drew sweeping conclusions not only about the perpetrators of the shootings but also about victims and bystanders. “If you hang around people who are getting shot, even if you’re not actively doing anything, then you become exposed,” Papachristos told the Chicago Tribune. “… It’s just like sharing needles. It puts you at risk because of the behaviors of your friends and your associates.”

As specious as those claims sound, what about the Chicago police department’s use of what it calls “two degrees of association”? Under this framework, police rely on a predictive model to identify which of the city’s estimated 100,000 gang members, and the people they know, are most likely to commit a crime or be a victim of one. In 2012, the algorithm identified 14,000 people who fell into this ambiguous category.
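
For a rough sense of how a two-degree sweep might work, here is a sketch using breadth-first search over an invented contact graph. Nothing below reflects the CPD’s actual implementation, which has never been made public:

from collections import deque

# Invented contact graph; an edge means two people know each other.
contacts = {
    "Alice": ["Bob"],
    "Bob": ["Alice", "Carol"],
    "Carol": ["Bob", "Dan"],
    "Dan": ["Carol"],
}

def within_two_hops(graph, seeds):
    """Collect everyone reachable from the seed set in at most two hops."""
    seen = set(seeds)
    frontier = deque((person, 0) for person in seeds)
    while frontier:
        person, depth = frontier.popleft()
        if depth == 2:
            continue  # stop expanding at the second degree
        for neighbor in graph.get(person, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

print(within_two_hops(contacts, {"Alice"}))
# Contains Alice, Bob, and Carol: Carol is swept in solely for knowing
# Bob, who is swept in solely for knowing Alice.

Scaled to a seed list of 100,000 alleged gang members, a sweep this shallow can easily pull in people with no record of their own.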

What Data?

Few outside of law enforcement or the companies that create these technologies know what data is entered into, or prioritized by, predictive programs. Because most of these programs are proprietary, there is no comprehensive explanation of the algorithms’ inputs. A FOIA request to learn more about the Chicago program was denied on the grounds that the information could “endanger the life or physical safety of law enforcement personnel or [some] other person.”

The Christian Science Monitor reports that various types of data are fed into the programs, including “arrest records, parole status, warrants, acquaintances’ records, having been a victim of a shooting, prison records, open court cases, and victims’ social networks.” But each program is different, and each has varying levels of cooperation with local police departments.
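
Going only by the data types the Monitor lists, a subject’s input record might look something like the following. Every field name here is a guess; the vendors’ real schemas are secret.

# Illustrative input record assembled only from the data types the
# Christian Science Monitor lists; all field names are guesses.
subject_record = {
    "arrest_count": 3,            # arrest records
    "parole_status": "active",    # parole status
    "outstanding_warrants": 0,    # warrants
    "acquaintance_arrests": 5,    # acquaintances' records
    "shooting_victim": True,      # having been a victim of a shooting
    "prison_record": False,       # prison records
    "open_court_cases": 1,        # open court cases
    "victim_network_size": 40,    # victims' social networks
}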

In San Francisco, SF Weekly reported that the SFPD “quietly handed over troves of crime data to PredPol and allowed the company to integrate this software with the city’s new police information technology systems.” PredPol, which has its origins as part of an application to track insurgents and predict casualties in Iraq, is also being used by police departments in Seattle and smaller departments across the country.

This happened despite the fact that there is scant evidence the program works. While supporters say predictive policing reduces crimes like theft and burglary, no independent analysis has shown that predictive policing is effective or a good use of police resources. A 2014 RAND study analyzed the use of a predictive model by police in Shreveport, Louisiana, and concluded that the program “did not generate a statistically significant reduction” in crime.

Responding to growing criticism, cities and activists are starting to push back. In Oakland, the Oakland Privacy Working Group sent an open letter asking the City Council to stop paying for and using PredPol because it is ineffective and raises serious civil liberties concerns:

“The act of sending police to a designated small area to watch for suspicious activity will inevitably lead to those police sent there being more suspicious than usual of everyone they encounter. This will lead to more “reasonable suspicion” stops which are in fact not reasonable, leading to civil rights violations, all the more problematic because hotspots are so likely to be in minority neighborhoods.”

The use of HunchLab by St. Louis County police departments drew scrutiny in the wake of the shooting of the teenager Michael Brown. Along with crime data, reports indicate that this particular program considers “dozens of other factors like population density; census data; the locations of bars, churches, schools, and transportation hubs; schedules for home games — even moon phases” in its analytics. If true, it’s alarming.

While St. Louis-area police say they use the program to go after serious crimes, residents there and across the country complain that predictive policing is another step away from community policing and accountability. This concern is supported by the DOJ’s investigation, which found that Ferguson’s police targeted black residents to raise revenue and called the tactics “profoundly and fundamentally unconstitutional.”

Looking Forward

The ability to anticipate or predict crime represents a disturbing paradigm shift in law enforcement. Not only do the computer models infringe on civil liberties with little accountability, especially when the companies that create them keep their methods secret, but they also perpetuate racial profiling. Misuse and overuse of data can amplify biases. And the impact on the individuals and communities caught in the middle of these predictions can be staggering, from more surveillance of young people to a spike in non-serious arrests that can result in jail time and disenfranchisement. Police must not rely on these untested programs at the expense of developing better community relations. If they do, it won’t take a fancy algorithm to predict what will happen.


