Pages: 31–54
Author: Gravett, W.
Published date: 6 July 2021
DOI: https://doi.org/10.47348/SACJ/v34/i1a2
Sentenced by an algorithm — Bias and lack of accuracy in risk-assessment software in the United States criminal justice system
WILLEM GRAVETT*
ABSTRACT
Developments in arti cial intelligence and machi ne learning have caused
governments to star t outsourcing authority i n performing public f unctions
to machines. Indeed, algor ithmic decision-ma king is becoming ubiquitou s,
from assigning credit scor es to people, to identifyi ng the best candidates
for an employment position, to ranki ng applicants for admission to
university. Apart from the broade r social, ethical a nd legal considerations,
controversies have arisen regardi ng the inaccuracy of A I systems and their
bias against vu lnerable populations. The g rowing use of automated risk-
assessment softwa re in crimi nal sentencing is a cause for both opt imism
and scepticism. Whi le these tools could potentia lly increase sentencing
accuracy and reduce the ri sk of human error and bias by provid ing
evidence-based reason s in place of ‘ad-hoc’ decisions by human b eings
beset with cognitive and i mplicit biases, they also have the pote ntial to
reinforce and exacerbate exis ting biases, and to underm ine certain o f the
basic constitutional g uarantees embedde d in the justice system. A 2 016
decision in the United States, S v L oomis, exemplies the thr eat that the
unchecked and unrestra ined outsourcing of public power to AI s ystems
might undermine hum an rights and the r ule of law.
What happens in . . . risk assessment algorithms may perhaps be less obvious than in food processing, but their by-products may be no less toxic.1
1 Introduction
To an ever-increasing degree, Artificial Intelligence (AI) systems and the algorithms that power them are tasked with making crucial decisions that used to be made by humans.
* BLC LLB (UP) LLM (Notre Dame) LLD (UP), Associate Professor in the Department of Procedural Law, University of Pretoria, Member of the New York State Bar, https://orcid.org/0000-0001-7400-0036.
1 M Ackerman ‘Safety checklist for sociotechnical design’ Data & Society, 27 October 2016, available at https://points.datasociety.net/safety-checklists-for-sociotechnical-design-2cb9192e9e3b, accessed on 23 June 2020.
Algorithmic decision-making based on big data has become an essential tool and is pervasive in all aspects of our daily lives: the news articles we read, the movies we watch, the people we spend time with, whether we get searched in an airport security line, whether more police officers are deployed in our neighbourhoods, and whether we are eligible for credit, healthcare, housing, education and employment opportunities, among a litany of other commercial and government decisions.2
Some view this as a cause for celebration. We have come to inhabit a world in which the only sustainable way to make sense of the sheer volume, complexity and variety of data that are produced daily, is to apply AI.3 We cede our decision-making to algorithms, not only because of the gains in power, speed and efficiency that they afford, but also because of the aura of impartiality4 and infallibility that human culture ascribes to them – we believe that algorithms do not have many of the flaws and shortcomings that we ‘fallible, arbitrary, ill-informed, and biased’ humans have.5 In fact, the opposite is true. Automated decision-making systems based on algorithms and AI are just as prone to mistakes, biases and arbitrariness as their human counterparts.6 As Julia Angwin, an investigative journalist who examines the socio-legal impact of algorithms, states: ‘In ways big and small, algorithms make judgments that, under the guise of “cold, hard, data,” directly affects people’s lives – for better, often, but sometimes for worse.’7
We are mistaken if we adopt the naive view that algorithms are truly neutral forces. The White House Report on Big Data in 2016 states: ‘[I]t is a mistake to assume that [AI systems] are objective simply because they are data-driven’.8 Technologies always operate within
2 O Osoba & W Welser IV ‘An intelligence in our image: The risks of bias and errors in artificial intelligence’ (2017) Rand Corporation 1 at 1, available at https://doi.org/10.7249/RR1744; AE Waldman ‘Power, process, and automated decision-making’ (2019) 88 Fordham L Rev 613 at 632.
3 Osoba & Welser IV op cit (n2) 6.
4 M Garcia ‘Racist in the machine: The disturbing implications of algorithmic bias’ (2016) 33 World Policy J 111 at 113.
5 D Danks & AJ London ‘Algorithmic bias in autonomous systems’ (2017) Proceedings of the Twenty-Sixth Joint Conference on Artificial Intelligence 4691 at 4691; Osoba & Welser IV op cit (n2) 61; Waldman op cit (n2) 613–614.
6 Waldman op cit (n2) 614.
7 As quoted in M Garber ‘When algorithms take the stand’ The Atlantic, 1 July 2016, available at https://www.theatlantic.com/technology/archive/2016/06/when-algorithms-take-the-stand/489566/, accessed on 23 June 2020. While many pursue the potential benefits of algorithms and big data, automated systems can have equal potential for harm. H-W Liu, C-F Lin & Y-J Chen ‘Beyond State v Loomis: Artificial intelligence, government algorithmization and accountability’ (2019) 27 Int’l J L & Information Tech 122 at 123.
8 Executive Ofce of the P resident ‘Big Data: A repor t on algorithmic s ystems,
opportunit y, and civil rights’ (2016) 6, available at https://obamawhite house.archives.
gov/sites/default/les/microsites/ostp/ 2016_0504_ data_discrimination.pdf
