Artificial intelligence has quietly entered UK policing — not in the form of robot officers, but through predictive algorithms designed to forecast crime, identify “high-risk” individuals, and guide police resources. These systems promise efficiency and data-driven decision-making, but a growing body of evidence suggests they may be amplifying discrimination rather than reducing crime.

A major report from Amnesty International UK found that predictive policing tools used by almost three-quarters of UK police forces are “supercharging racism” by profiling communities based on historic police data (source).

What predictive policing actually is

Predictive policing uses algorithms to analyse past crime data and forecast where crimes are likely to occur or who might be involved. In the UK, police forces use two main types (a simplified sketch of the location-based approach follows the list below):

  • Location-based prediction — forecasting crime hotspots in specific neighbourhoods.
  • Individual risk prediction — assessing the likelihood that a person will commit or be involved in crime.
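
To make the location-based approach concrete, here is a deliberately minimal sketch in Python. It is not any force's actual system (those models are proprietary and more elaborate); it simply ranks neighbourhoods by the number of incidents police have previously recorded there, which is the core idea behind hotspot forecasting. The neighbourhood names and incidents are invented for illustration.

```python
from collections import Counter

# Hypothetical recorded incidents, invented for illustration: each entry is
# an offence logged by police, tagged with the neighbourhood it was logged in.
recorded_incidents = [
    {"neighbourhood": "A", "offence": "theft"},
    {"neighbourhood": "A", "offence": "assault"},
    {"neighbourhood": "B", "offence": "theft"},
    {"neighbourhood": "A", "offence": "criminal damage"},
    {"neighbourhood": "C", "offence": "theft"},
]

def rank_hotspots(incidents, top_n=2):
    """Rank neighbourhoods by the number of incidents police recorded there.

    Note that this scores *recorded* incidents, not underlying offending,
    so historic over-policing carries straight through to the output.
    """
    counts = Counter(i["neighbourhood"] for i in incidents)
    return counts.most_common(top_n)

print(rank_hotspots(recorded_incidents))  # [('A', 3), ('B', 1)]
```

Even in this toy version, the ranking is driven entirely by where offences were recorded, not by where they necessarily happened.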

According to Amnesty International, at least 33 police forces have used these systems, including the Metropolitan Police, West Midlands Police, and Greater Manchester Police (source).

Why these systems are so controversial

Predictive policing relies on historic police data — and that data is far from neutral. If certain communities have been over-policed in the past, the algorithm interprets that as evidence of higher crime, creating a feedback loop that reinforces bias.
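
The feedback loop is easiest to see in a toy simulation. The sketch below is purely illustrative and uses no real data or vendor model: two areas have identical underlying offence levels, but the area that starts with more patrols generates more recorded crime, and the "predictive" reallocation then sends it even more patrols.

```python
import random

random.seed(0)

# Two areas with the SAME underlying number of offences per day.
TRUE_DAILY_OFFENCES = {"Area A": 10, "Area B": 10}
TOTAL_PATROLS = 20

# Historic over-policing: Area A starts with more than its share of patrols.
patrols = {"Area A": 14, "Area B": 6}

for week in range(1, 6):
    # Recorded crime depends on who is there to record it: an offence is only
    # logged if a patrol detects it, so the chance of detection scales with
    # the share of patrols in that area.
    recorded = {
        area: sum(
            random.random() < patrols[area] / TOTAL_PATROLS
            for _ in range(TRUE_DAILY_OFFENCES[area] * 7)
        )
        for area in patrols
    }
    # The "predictive" step: next week's patrols are allocated in proportion
    # to recorded crime -- that is, in proportion to past police activity.
    total_recorded = sum(recorded.values()) or 1
    patrols = {
        area: round(TOTAL_PATROLS * recorded[area] / total_recorded)
        for area in recorded
    }
    print(f"Week {week}: recorded={recorded} -> patrols next week={patrols}")
```

Run over a few weeks, Area A's recorded figures stay well above Area B's even though nothing about actual offending differs: the system is only ever fitted to the force's own deployment history.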

Queen Mary University of London researchers warn that these systems are built on correlation, not causation, and risk giving a “scientific gloss” to racial profiling (source).

The key concerns include:

  • Racial bias — algorithms disproportionately target Black, Asian, and marginalised communities.
  • Opaque decision-making — individuals often don’t know they’ve been flagged by an algorithm.
  • Feedback loops — more policing in an area generates more data, which justifies even more policing.
  • Human rights risks — Amnesty argues these systems breach UK and international human rights law.

How predictive policing affects real people

Statewatch reports that algorithmic systems across Europe — including the UK — are influencing decisions at every stage of the criminal justice process, from stop-and-search to sentencing (source).

These tools can lead to:

  • Increased stop-and-search in already over-policed neighbourhoods
  • People being labelled “high risk” without explanation
  • Greater surveillance of migrants and low-income communities
  • Decisions about bail, probation, and sentencing influenced by algorithmic scores

In some cases, individuals have been subjected to questioning or home visits based on algorithmic predictions alone.

Do these systems actually reduce crime?

Despite the hype, there is little evidence that predictive policing reduces crime. Amnesty International states that the technology “does not keep us safe” and instead “treats entire communities as potential criminals” (source).

Researchers argue that the systems often confuse policing patterns with crime patterns — meaning they predict where police have historically patrolled, not where crime actually occurs.

Lack of transparency and accountability

One of the biggest problems is that predictive policing tools operate with very little public oversight. Many people do not know:

  • that they are being profiled by an algorithm
  • what data is being used
  • how risk scores are calculated (a hypothetical illustration follows this list)
  • how to challenge an algorithmic decision
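
To see why that matters, here is a hypothetical risk-score calculation. Every feature name and weight below is invented for illustration; the concern is precisely that the genuine equivalents are not disclosed, so a person cannot check whether, say, their postcode or a previous stop pushed them over a threshold.

```python
# Purely hypothetical feature names and weights, invented to illustrate the
# problem. None of this reflects any real force's model -- which is the point:
# the actual inputs and weights are not published.
WEIGHTS = {
    "stopped_before": 0.40,      # prior police contact, not convictions
    "high_risk_postcode": 0.35,  # area band that can proxy for race and income
    "age_under_25": 0.15,
    "flagged_associates": 0.10,
}

def risk_score(person: dict) -> float:
    """Weighted sum of yes/no indicators, scaled to 0-100."""
    return 100 * sum(WEIGHTS[f] * person.get(f, 0) for f in WEIGHTS)

# Someone with no convictions at all can still come out as "high risk"
# purely from where they live and how often they have been stopped before.
person = {"stopped_before": 1, "high_risk_postcode": 1, "age_under_25": 1}
print(risk_score(person))  # 90.0
```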

Statewatch warns that these systems can lead to people being denied jobs, services, or fair treatment based on opaque digital profiles (source).

Calls for a ban

Amnesty International UK has called for an outright ban on predictive policing tools, arguing that they violate fundamental rights and entrench discrimination. They say the systems are “built with discriminatory data” and “serve only to supercharge racism.”

Legal experts and academics echo this, warning that algorithmic policing risks creating a future where technology decides who is suspicious — not evidence.

In conclusion

Predictive policing promises efficiency, but the reality is far more troubling. By relying on biased data and opaque algorithms, these systems risk turning inequality into code and discrimination into automated decision-making.

Until the UK introduces clear laws, transparency, and independent oversight, predictive policing will remain one of the most controversial — and potentially harmful — uses of AI in public life.
