Predictive policing is not the answer to the racist police

Casper Metselaar, Eelke Vlieger, Linde Reus, and Savina van Rielova

Image by Patrick Behn from Pixabay

Predictive policing, a relatively new technique meant to simplify the work of policemen, should not be put into practice yet. The current protests against racially biased police brutality, following the death of George Floyd, might lead people to consider and advocate methods that remove ‘the human flaw’ from police work. Predictive policing is one of these methods. This technique, however, would be counterproductive: it can in fact increase racism within the police force, because it leads policemen to base their decisions on opaque, biased software that cannot be revised.

Predictive policing is a method designed to predict crimes, their place and time, possible offenders, and the identity of their victims, based on large datasets. The types of data used to make these predictions include, for example, the race of the offender, the location of the crime scene, the number of prior arrests, and the weather forecast. From the assembled data, models are built with predictive policing software.
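
As a rough illustration, here is a sketch in Python of what a single training record for such a system might look like. The field names are invented for this article; they simply mirror the kinds of data mentioned above.

```python
# Hypothetical shape of one training record for a predictive-policing model.
# Field names are invented for illustration; real feature sets are far larger.
from dataclasses import dataclass

@dataclass
class CrimeRecord:
    timestamp: str       # when the incident was recorded
    grid_cell: str       # map cell where it took place
    crime_type: str      # e.g. "burglary", "assault"
    suspect_race: str    # demographic data sometimes present in such datasets
    prior_arrests: int   # arrests previously recorded for the suspect
    weather: str         # weather conditions at the time

# Many thousands of records like this make up the dataset on which the
# prediction model is trained.
```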


Deep learning, an innovative technique within the field of artificial intelligence (AI), is used to train software programs to find correlations in large datasets. It is important to note that it is currently nearly impossible to retrace exactly which data these programs ultimately base their conclusions on, as they process enormous amounts of data. Some might argue that predictive policing helps overcome the personal biases of policemen, but this can only be the case if both the dataset and the software are neutral and the software’s decisions are transparent. If, however, the dataset is not neutral due to, for example, racially biased police work, the software could end up reinforcing inequality.


Research into predictive policing systems has shown that parts of the data used to train the software are inherently flawed. This causes some predictive policing systems to be biased against African Americans. The bias finds its origin in the fact that a disproportionate number of incarcerated people in the US are black. Using existing data to train predictive policing AI could therefore reinforce disproportionate police action against African Americans. One predictive policing system was found to correctly predict violent crimes in only 20 percent of cases, which illustrates the disastrous effects inherent biases can have on these systems. This is only one example out of many in which predictive policing programs display some form of bias.
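
A toy simulation can make this feedback loop concrete. The sketch below, in plain Python with NumPy, uses entirely made-up numbers and does not describe any real predictive policing product: two neighbourhoods offend at exactly the same rate, but one is patrolled more heavily, so it generates more arrest records, and a simple logistic-regression ‘risk model’ trained on those records duly rates it as riskier.

```python
# Toy illustration only -- synthetic data, invented numbers, no real system.
# Two neighbourhoods offend at the same underlying rate, but neighbourhood 1
# is patrolled more heavily, so more of its offences end up as arrest records.
# A simple logistic-regression "risk model" trained on those records then
# reports a higher predicted risk for the over-policed neighbourhood.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Feature: neighbourhood (0 = lightly patrolled, 1 = heavily patrolled).
neighbourhood = rng.integers(0, 2, size=n)

# Underlying offending behaviour is identical everywhere (5 percent)...
offence = rng.random(n) < 0.05
# ...but an offence only becomes an arrest record if police happen to be
# present: 80 percent detection in the heavily patrolled area, 40 percent
# in the other one.
detection = np.where(neighbourhood == 1, 0.8, 0.4)
arrested = offence & (rng.random(n) < detection)

# Fit a one-feature logistic regression by plain gradient descent.
X = neighbourhood.astype(float)
y = arrested.astype(float)
w, b, lr = 0.0, 0.0, 2.0
for _ in range(5_000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= lr * np.mean((p - y) * X)
    b -= lr * np.mean(p - y)

for hood in (0, 1):
    risk = 1.0 / (1.0 + np.exp(-(w * hood + b)))
    print(f"neighbourhood {hood}: predicted risk {risk:.3f}")
# Prints a clearly higher "risk" for the over-policed neighbourhood
# (roughly double), even though actual offending was identical by construction.
```

The exact numbers do not matter; the point is the mechanism. The model cannot distinguish ‘more crime’ from ‘more policing’, so a skew in the records becomes a skew in its predictions.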


Then there is another, perhaps even more dangerous problem with predictive policing software: we do not know exactly how it comes to its conclusions regarding, for example, who will commit a crime and where. This decision-making process is a black box; it cannot be opened. Predictive policing makes use of deep learning, a method that can quickly find associations in large amounts of data, which makes it very suitable for predictive policing software. However, the technique behind deep learning is so complex that it is often not possible to follow the reasoning of the system. This could lead the system to make decisions based on irrelevant factors in the data, which in turn could lead to wrongful arrests. Moreover, if the police were to find out that their software was not working properly, the complexity of the software would make it incredibly difficult to correct the mistake.
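
To give a feel for why such systems are hard to audit, the sketch below builds a miniature neural network with two hidden layers and scores one hypothetical case. The weights here are random stand-ins for parameters a real system would learn from data, but the lesson carries over: the output is just the end of a chain of matrix multiplications, and nothing in the individual numbers explains why the score came out the way it did. Production deep learning models contain millions of such parameters.

```python
# Minimal sketch of why a deep model is a "black box". The weights here are
# random stand-ins for learned parameters; a real system would have millions.
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical case described by four numeric features
# (e.g. time of day, map cell, prior arrests, local weather code).
case = np.array([0.3, 0.7, 0.2, 0.9])

# Two hidden layers and an output layer: 4 -> 8 -> 8 -> 1.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
W3, b3 = rng.normal(size=(1, 8)), rng.normal(size=1)

def relu(v):
    return np.maximum(v, 0.0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

h1 = relu(W1 @ case + b1)          # 8 intermediate numbers, no human meaning
h2 = relu(W2 @ h1 + b2)            # 8 more
score = sigmoid(W3 @ h2 + b3)[0]   # the "risk score" the police would see

print(f"predicted risk score: {score:.2f}")
print("first-layer weights:")
print(W1.round(2))                 # just numbers -- no readable rationale
```

In a trained model the weights are tuned on historical data rather than random, but they are no more readable, which is exactly what makes a faulty system so hard to diagnose and correct.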


Using predictive policing systems in our current society is not desirable. The George Floyd incident shows that we live in a racially biased society. We do not want opaque, uncontrollable predictive policing software to perpetuate this bias.