In many countries, police officers and thinkers interested in the field of policing work on developing the most effective policing practices possible. But, as with the history of science in general, this is often a string of failures. That is not a negative judgment but an observation about how progress happens: development requires relentless searching and experimentation.
The use of artificial intelligence in organizing police work has steadily gained popularity in recent decades, but it has also drawn considerable criticism. The piece referenced here is an open-access article that examines the question of fairness raised by the adoption of algorithm-based policing practices.
In 2011, the Los Angeles Police Department began an experiment in crime fighting when they implemented a computer program called 'PredPol' to anticipate the timing and location of crime. PredPol is an algorithmic system that takes data about the type, location, and time of crimes as inputs and produces predictions about when and where future crimes will occur. […] While the LAPD discontinued its PredPol program in the spring of 2020, PredPol has become one of the most widely used pieces of predictive policing software in the United States (Miller 2020).
In this paper I evaluate this widespread criticism, argue that it is inconclusive, and explore a new way forward in the debate about the fairness of predictive policing. I propose that predictive policing can be unfair even if it is unbiased.
What is predictive policing?
While there is no uniform definition of predictive policing, I will follow a definition offered by Albert Meijer and Martijn Wessels: 'Predictive policing is the collection and analysis of data about previous crimes for identification and statistical prediction of individuals or geospatial areas with an increased probability of criminal activity to help developing policing intervention and prevention strategies and tactics' (Meijer and Wessels 2019).
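The core of that definition — feeding data about previous crimes into a statistical procedure that flags geospatial areas with elevated probability of criminal activity — can be illustrated with a deliberately naive sketch. The grid cells, incident data, and frequency-counting approach below are invented for illustration and are not the actual PredPol model, which is far more sophisticated:

```python
from collections import Counter

# Hypothetical historical crime records: the (grid_x, grid_y) cell
# in which each past incident occurred.
past_crimes = [(2, 3), (2, 3), (2, 3), (0, 1), (4, 4), (2, 3), (0, 1)]

def predict_hotspots(crimes, top_k=2):
    """Rank grid cells by historical incident count — a crude stand-in
    for the statistical models real predictive policing systems use."""
    counts = Counter(crimes)
    return [cell for cell, _ in counts.most_common(top_k)]

print(predict_hotspots(past_crimes))  # → [(2, 3), (0, 1)]
```

Even this toy version makes the fairness worry visible: the cells flagged tomorrow are entirely a function of where incidents were recorded yesterday, so any bias in the historical records is carried forward into the predictions.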
Algorithms as a guide for police work:
Still, policing by algorithm has enjoyed some successes. A recent data-driven policing project in Atlantic City found a significant decrease in crime rates. The project employed 'Risk Terrain Modelling' (RTM), 'a method of spatial risk analysis used to assess spatial patterns of crime and diagnose how features of a landscape interact and overlap to create unique crime settings' (Caplan, Kennedy, and Drawve 2017: 1).
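The quoted description — landscape features that 'interact and overlap to create unique crime settings' — suggests an overlay of per-feature risk layers. The sketch below captures only that overlay idea; the feature names, grid, and uniform weights are illustrative assumptions, not the actual RTM methodology:

```python
# Illustrative risk-terrain-style overlay: each landscape feature
# contributes a risk layer over the same grid, and risk compounds
# where layers overlap. Feature names and weights are invented here.
risk_layers = {
    "near_bar":      {(1, 1): 1, (1, 2): 1},
    "vacant_lot":    {(1, 1): 1, (3, 0): 1},
    "poor_lighting": {(1, 1): 1, (1, 2): 1, (3, 0): 1},
}

def combined_risk(layers):
    """Sum the feature layers cell by cell to find where risks overlap."""
    total = {}
    for layer in layers.values():
        for cell, risk in layer.items():
            total[cell] = total.get(cell, 0) + risk
    return total

scores = combined_risk(risk_layers)
print(max(scores, key=scores.get))  # → (1, 1), where all three features overlap
```

Unlike crime-count hotspotting, this style of analysis ties risk to features of place rather than to recorded incidents alone, which is part of why its proponents consider it less vulnerable to feedback from biased arrest data.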
Therefore, racial profiling is unjust even when it is more effective than other policing measures at reducing crime.
Algorithms need human assistance:
One obstacle to community support is that most predictive policing algorithms are inscrutable or 'opaque' to both the police officers who employ them in their work and to the citizens they affect. […] And yet there remains an important sense in which the decision-making of human crime analysts is more accountable than decision-making assisted by algorithmic predictive policing systems: when risk assessments are made by a human analyst, one can sensibly demand a justification from the analyst for the methods employed.
Familiarity with policing strategies and models … would be beneficial
Achieving fairness in predictive policing will therefore often require either securing a greater degree of consent from affected communities or reducing the burdens that predictive policing imposes on those communities. For these reasons, community-led policing and problem-oriented policing are likely to be key components of any fair approach to predictive policing.
PURVES, D. (2022). Fairness in Algorithmic Policing. Journal of the American Philosophical Association, 1–21.