The Feedback Loop of Bias: How Predictive Policing Amplifies Racial Disparities

New research reveals that AI-powered predictive policing systems can amplify existing biases, even when trained on data that has been corrected for known disparities, leading to significantly unequal outcomes across communities.
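The core dynamic is a feedback loop: districts with more recorded crime receive more patrols, and crime can only be recorded where patrols are present, so early imbalances compound. The toy simulation below is a minimal sketch of that mechanism (an urn-style model often used to illustrate runaway feedback; it is not the method from the papers discussed here, and all names and parameters are illustrative assumptions):

```python
import random

def simulate_feedback(rounds=1000, true_rate_a=0.1, true_rate_b=0.1, seed=0):
    """Toy model of the predictive-policing feedback loop.

    Two districts have *identical* true crime rates, but district A starts
    with one more recorded incident. Each round a single patrol is sent to
    the district with more recorded crime, and crime is only recorded where
    the patrol is -- so the initial imbalance compounds indefinitely.
    """
    rng = random.Random(seed)
    recorded = {"A": 10, "B": 9}          # small initial imbalance
    rates = {"A": true_rate_a, "B": true_rate_b}
    for _ in range(rounds):
        # the "predictive" model sends the patrol where recorded crime is higher
        target = "A" if recorded["A"] >= recorded["B"] else "B"
        if rng.random() < rates[target]:
            recorded[target] += 1         # crime observed only where police are
    return recorded

result = simulate_feedback()
print(result)  # district A accumulates all new records; B never changes
```

Despite identical underlying rates, district B's count stays frozen at its starting value, because the allocation rule never sends a patrol there again. This is the amplification the article describes: the model's predictions manufacture the very data that confirms them.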


![A feedforward neural network predicts time series data from lagged inputs [latex]x_{t-1},\dots,x_{t-p}[/latex]; an inverse transformation [latex]t^{-1}(\cdot)[/latex] maps the network weights to autoregressive coefficients, ensuring stationarity of the predicted series.](https://arxiv.org/html/2603.19041v1/NN.png)
![Each model's share of anticipatory outliers (instances flagged as problematic before they manifest) among all detected outliers [latex]\mathcal{TOA}[/latex] and general outliers [latex]\mathcal{TO}, \mathcal{O}[/latex].](https://arxiv.org/html/2603.18358v1/images/seed_to_topic_barplot-v2.png)


