Artificial intelligence is here, and among other things it is being used in the field of security. The openly available text referenced here could interest many readers, including those who have little to do with policing or security: ethics and the design logics of artificial intelligence likely concern many of us, if not all. By way of introduction, it is worth noting that although the text deals with questions of bias, the extra emphasis now being placed on AI design and on the quality of the data being analysed is itself likely to become one of the next potentially biasing factors, for example in organisational design and in shaping future developments.

On the context:

A fast-growing multidisciplinary literature is increasingly detailing the multifaceted biases associated with the data-driven artificial intelligence (AI) systems that now inform decision making in many sectors including high stakes settings such as criminal justice systems [1–7].

Objectivity … is naivety:

All digital technologies are guided by specific ideologies, preferences, and other logics that infuse their design and tasks with meaning, giving rise to particular outputs and specific social implications. They are not created in an ideological vacuum: there is always a ‘human in the loop’ influenced by and influencing the social world in which digital technologies are designed and deployed, even if a digital model eventually appears to be fully automated.

The security domain is not bulletproof:

Criminal justice algorithms are not immune, with the extant literature demonstrating that algorithms deployed in justice systems can disadvantage racialised and low-income groups historically vulnerable to criminal justice intervention [1–7]. Key technologies in this context include risk assessment algorithms [7]; facial recognition systems [10]; and PPAs [6].

Techno-determinism:

Benjamin [13] defines techno-determinism as, ‘The mistaken view that society is affected by but does not affect technological development’. Raji and Smart et al. [8] take this further by noting in their analysis of global ethical issues and guidelines that, ‘artificial intelligence systems are not independent of their developers or of the larger sociotechnical system’.

A rather pertinent clarification about policing strategy:

predictive policing is, ‘not about prevention in the sense of transforming the conditions that contribute to theft or fighting; it is about being in the right place to stop an imminent act before it takes place’.
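
The quoted strategy presupposes a model that ranks locations by short-term risk. As a purely illustrative sketch (none of this code comes from the paper; the grid, the incidents, and the decay scales are hypothetical), here is how a near-repeat-style score might rank grid cells and send patrols to "the right place":

```python
# Illustrative only: a toy "near repeat" risk score, not any vendor's model.
# Near repeat theory holds that a crime raises the short-term risk of similar
# crimes close by in space and time; PPAs built on it rank grid cells by that risk.
from dataclasses import dataclass
from math import exp, hypot

@dataclass
class Incident:
    x: float        # grid coordinates of a past crime (hypothetical units)
    y: float
    days_ago: float

def near_repeat_risk(cell_x, cell_y, incidents,
                     space_scale=2.0, time_scale=7.0):
    """Risk decays exponentially with distance and with elapsed time."""
    return sum(
        exp(-hypot(cell_x - i.x, cell_y - i.y) / space_scale)
        * exp(-i.days_ago / time_scale)
        for i in incidents
    )

history = [Incident(1, 1, days_ago=1), Incident(2, 1, days_ago=3),
           Incident(8, 9, days_ago=30)]

# Patrols go to the highest-risk cells: "being in the right place".
cells = [(x, y) for x in range(10) for y in range(10)]
hotspots = sorted(cells, key=lambda c: near_repeat_risk(*c, history),
                  reverse=True)
print(hotspots[:3])  # cells near the recent cluster dominate the ranking
```

Note that such a score says nothing about why the cluster exists; as the quote stresses, it optimises presence at the predicted moment rather than transforming the conditions that produce the offending.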

To spark interest in reading further, a hint from the conclusion:

Using the case example of PPAs, and drawing on recent studies, the paper demonstrates that it is important to consider during audits that, (1) algorithmic feedback loops leading to the labelling and over policing of historically marginalised communities, and (2) the problem of crime displacement, are potential outcomes that can arise when a PPA is rooted in near repeat theory.
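
To make the first audit concern concrete, here is a toy simulation (my own illustration, not the paper's; the districts, rates, and patrol counts are invented) of how a feedback loop can emerge when patrol allocation follows recorded crime, which is itself a product of where patrols were previously sent:

```python
# Illustrative only: a toy simulation of the feedback loop the paper warns about.
# Assume two districts with EQUAL true crime rates, but district A starts with
# more recorded crime because it was historically over-policed.
import random

random.seed(0)
TRUE_RATE = 0.3                   # identical underlying rate in both districts
recorded = {"A": 30, "B": 10}     # biased historical record, not ground truth
PATROLS_PER_DAY = 10

for day in range(200):
    total = sum(recorded.values())
    for district in recorded:
        # The "algorithm": allocate patrols in proportion to recorded crime.
        patrols = round(PATROLS_PER_DAY * recorded[district] / total)
        # Crime is only recorded where police are present to observe it,
        # so extra patrols in A convert the same true rate into more records.
        recorded[district] += sum(random.random() < TRUE_RATE
                                  for _ in range(patrols))

print(recorded)  # the initial gap widens even though true rates are equal
```

Because the record grows fastest wherever patrols already are, district A's initial over-representation compounds; this is the labelling and over-policing dynamic that the proposed audits are meant to surface.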

Ugwudike, P. (2021). AI audits for assessing design logics and building ethical systems: the case of predictive policing algorithms. AI and Ethics, 1–10.