8 May 2025
In a busy city, officers begin their shift by receiving a list of areas flagged as high-risk for crime that day. The list is based on patterns in historical crime data, such as where certain crimes have occurred in the past, and other factors like economic trends and weather. The officers are then instructed to patrol these areas proactively, anticipating potential criminal activity. Driving through one of these flagged neighborhoods, they spot a group of teenagers loitering on a corner. The officers decide to stop and question them without any evidence of wrongdoing, simply because the area is considered a “hotspot.” The teens feel confused and frustrated; there is nothing suspicious about them. Still, following the data, the officers remain on alert, waiting for a crime the system has predicted is likely to occur, based on past patterns.
This algorithmic approach to crime prediction echoes the dystopian vision of the film Minority Report. While perhaps not as extreme as the system portrayed in the movie, predictive policing similarly relies on foreseeing potential crimes and intervening before they occur, raising concerns about preemptive justice and individual rights. As Forbes explains, “Using data from past criminal activity, economic conditions, and even weather patterns, ML algorithms can detect trends and forecast where crimes are most likely to occur… AI simply allows for faster and more accurate identification of these areas, reducing the need for guesswork and potentially lowering crime rates through proactive policing” [1]. While it may seem like a logical way to prevent crime, predictive policing raises significant ethical concerns about bias, over-surveillance, and whether it truly leads to safer communities or just reinforces existing inequalities.
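To make the quoted mechanism concrete, here is a minimal, hypothetical sketch in Python of the kind of scoring a hotspot system performs: city grid cells are ranked by their historical incident counts, with recent weeks weighted more heavily, and the top-ranked cells are flagged for patrol. The cell names, counts, and decay weight are all invented for illustration; real deployments fold in many more features (economic indicators, weather) and far more complex models.

```python
from collections import defaultdict

# (cell_id, weeks_ago) -> recorded incidents; purely hypothetical numbers
historical_incidents = {
    ("cell_A", 1): 4, ("cell_A", 2): 5, ("cell_A", 3): 3,
    ("cell_B", 1): 1, ("cell_B", 2): 0, ("cell_B", 3): 2,
    ("cell_C", 1): 0, ("cell_C", 2): 1, ("cell_C", 3): 0,
}

def hotspot_scores(incidents, decay=0.7):
    """Score each cell by exponentially decayed incident counts:
    recent incidents count more than older ones."""
    scores = defaultdict(float)
    for (cell, weeks_ago), count in incidents.items():
        scores[cell] += count * (decay ** weeks_ago)
    return dict(scores)

def flag_hotspots(scores, top_k=1):
    """Flag the k highest-scoring cells for proactive patrol."""
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

scores = hotspot_scores(historical_incidents)
print(flag_hotspots(scores))  # ['cell_A'] -- yesterday's records drive today's patrols
```

The point of the sketch is that the “prediction” is just an extrapolation of where incidents were recorded before, which is why what counts as data matters so much for everything that follows.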
Despite these ethical concerns, predictive policing continues to gain traction worldwide, with various countries adopting AI-driven crime prevention strategies. In the U.S., several states, including California and New York, have integrated predictive tools like “Patternizr” to identify crime patterns, hoping to improve law enforcement efficiency [2]. Although the machine learning algorithm was reportedly designed to be blind to factors like race and gender, the data it was trained on is inherently biased: the algorithm was built from historical crime records that disproportionately overrepresent crimes where people of color were identified as suspects, so the model ends up reinforcing those racial biases.
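The feedback loop this creates can be shown with a deliberately simple, hypothetical simulation: two neighborhoods have the same true offence rate, but one starts with more recorded incidents because it was patrolled more heavily in the past. A predictor that simply chases the densest records keeps sending patrols back to the same place, and the record gap only widens. The neighborhood names and counts below are invented; no real system is this crude, but the dynamic is the same.

```python
# Identical true offence rates in both neighborhoods; the only difference
# is the historical record, which is skewed by where patrols used to go.
recorded_incidents = {"north": 5, "south": 20}

for week in range(20):
    # The "algorithm": patrol wherever recorded counts are highest.
    patrolled = max(recorded_incidents, key=recorded_incidents.get)
    # One offence occurs in each neighborhood every week, but only the
    # patrolled neighborhood's offence is observed and becomes a record.
    recorded_incidents[patrolled] += 1

print(recorded_incidents)  # {'north': 5, 'south': 40}: the data gap grows on its own
```

Even though race never appears as a feature, the historical record functions as a proxy for past enforcement decisions, which is precisely the criticism leveled at tools trained on arrest data.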
Many European countries have explored predictive policing systems. In the Netherlands, the city of Roermond implemented the “Sensing Project,” which utilized cameras and sensors to monitor vehicles and assign risk scores based on factors such as license plate origin and travel patterns. Critics, including organizations like Amnesty International, highlighted that the system disproportionately targeted individuals from Eastern European countries, leading to concerns about ethnic profiling and mass surveillance [3].
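The reporting on the Sensing Project describes a risk score built from features such as plate origin and travel patterns. The rule-based sketch below is hypothetical (the features, weights, and threshold are invented, not Roermond’s actual criteria), but it shows why critics objected: once plate origin enters the score, nationality itself becomes a risk factor.

```python
def vehicle_risk_score(plate_country: str, passes_per_day: int, night_travel: bool) -> int:
    """Toy risk score; all rules and weights are invented for illustration."""
    score = 0
    if plate_country != "NL":   # a foreign plate is itself treated as a signal
        score += 3
    if passes_per_day > 4:      # repeated passes by the same sensors
        score += 2
    if night_travel:
        score += 1
    return score

FLAG_THRESHOLD = 3
print(vehicle_risk_score("PL", 1, False) >= FLAG_THRESHOLD)  # True: flagged on plate origin alone
print(vehicle_risk_score("NL", 1, False) >= FLAG_THRESHOLD)  # False: same behavior, local plate
```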
The promise of algorithmic justice, then, is that it will create a more “objective” system of law enforcement and allow the state and corporations to exercise more efficient and unobtrusive control over the population. In this sense, algorithms are not neutral tools but mechanisms of power that seek to shape and control society under the guise of efficiency and fairness.
Despite its promise of fairness and objectivity, algorithmic justice is ultimately an illusion. The idea that algorithms can deliver unbiased, neutral decisions rests on a fundamental misunderstanding of both the technology and the social systems it operates within. It overlooks the reality that algorithms are not autonomous entities but rather a reflection of the biases embedded in the data they are trained on, which is often shaped by historical inequalities. For instance, in 2012, the Chicago Police Department implemented a predictive policing program known as the “Strategic Subject List” or “Heat List,” which aimed to identify individuals at higher risk of being involved in gun violence. However, this initiative disproportionately targeted young Black and Latino men, intensifying surveillance and police contact in predominantly Black and Latino communities. Notably, 56% of Black men aged 20 to 29 in Chicago were assigned a risk score, subjecting them to increased scrutiny regardless of actual criminal activity. The program was criticized for its lack of transparency and its ineffectiveness, and it was ultimately shut down in 2019 after significant public backlash [4].
Predictive policing systems treat people as data points to be monitored and analyzed, reducing complex human behaviors to numerical inputs. The algorithmic system doesn’t see or understand people as unique beings with their own agency and context. Instead, it processes them as variables in a larger, deterministic equation of crime and social behavior. This reductionism is a fundamental problem: it strips away the possibility of justice being a dynamic, human-centered process, replacing it with a mechanized, data-driven one.
Works Cited
[1] Ly, David. “Predictive Policing: Myth Busting And What We Can Expect Of AI-Powered Law Enforcement.” Forbes, October 25, 2024. Accessed February 27, 2025.
[2] Griffard, Molly. “A Bias-Free Predictive Policing Tool?: An Evaluation of the NYPD’s Patternizr.” Fordham Urban Law Journal, vol. 47, no. 1, 2019. Accessed March 11, 2025.
[3] Amnesty International. “Netherlands: End dangerous mass surveillance policing experiments.” September 29, 2020. Accessed February 12, 2025.
[4] Kunichoff, Yana, and Patrick Sier. “The Contradictions of Chicago Police’s Secretive List.” Chicago Magazine, August 21, 2017. Accessed April 1, 2025.