Hallevy, Gabriel
The Basic Models of Criminal Liability of AI Systems and Outer Circles Proceedings Article
In: Vicente, Dário Moura; Pereira, Rui Soares; Leal, Ana Alves (Eds.): Legal Aspects of Autonomous Systems, pp. 69–82, Springer International Publishing, Cham, 2024, ISBN: 978-3-031-47946-5.
@inproceedings{hallevy_basic_2024,
title = {The Basic Models of Criminal Liability of AI Systems and Outer Circles},
author = {Gabriel Hallevy},
editor = {Dário Moura Vicente and Rui Soares Pereira and Ana Alves Leal},
doi = {10.1007/978-3-031-47946-5_5},
isbn = {978-3-031-47946-5},
year = {2024},
date = {2024-01-01},
booktitle = {Legal Aspects of Autonomous Systems},
pages = {69–82},
publisher = {Springer International Publishing},
address = {Cham},
abstract = {The way humans cope with breaches of legal order is through criminal law operated by the criminal justice system. Accordingly, human societies define criminal offenses and operate social mechanisms to apply them. This is how criminal law works. Originally, this way has been designed by humans and for humans. However, as technology has developed, criminal offenses are committed not only by humans. The major development in this issue has occurred in the seventeenth century. In the twenty-first century criminal law is required to supply adequate solutions for commission of criminal offenses through artificial intelligent (AI) systems. Basically, there are three basic models to cope with this phenomenon within the current definitions of criminal law. These models are:
(1) The Perpetration-by-Another Liability Model;
(2) The Natural Probable Consequence Liability Model; and
(3) The Direct Liability Model.
This paper was presented at the “International Conference on Autonomous Systems and the Law”, organized by CIDP, (Centro de Investigação de Direito Privado), University of Lisbon. I thank the organizers for inviting me to the conference and to the participants for their questions and interest in this issue. The Models are based on previous researches published around the world, including two of the author’s books: Hallevy (2013, 2015a, b).},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Završnik, Aleš
Criminal justice, artificial intelligence systems, and human rights Journal Article
In: ERA Forum, vol. 20, no. 4, pp. 567–583, 2020, ISSN: 1863-9038.
@article{zavrsnik_criminal_2020,
title = {Criminal justice, artificial intelligence systems, and human rights},
author = {Aleš Završnik},
url = {https://doi.org/10.1007/s12027-020-00602-0},
doi = {10.1007/s12027-020-00602-0},
issn = {1863-9038},
year = {2020},
date = {2020-03-01},
urldate = {2024-10-21},
journal = {ERA Forum},
volume = {20},
number = {4},
pages = {567–583},
abstract = {The automation brought about by big data analytics, machine learning and artificial intelligence systems challenges us to reconsider fundamental questions of criminal justice. The article outlines the automation which has taken place in the criminal justice domain and answers the question of what is being automated and who is being replaced thereby. It then analyses encounters between artificial intelligence systems and the law, by considering case law and by analysing some of the human rights affected. The article concludes by offering some thoughts on proposed solutions for remedying the risks posed by artificial intelligence systems in the criminal justice domain.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Barabas, Chelsea
Beyond Bias: Re-Imagining the Terms of "Ethical AI" in Criminal Law Journal Article
In: Georgetown Journal of Law & Modern Critical Race Perspectives, vol. 12, no. 2, pp. 83–112, 2020.
@article{barabas_beyond_2020,
title = {Beyond Bias: Re-Imagining the Terms of "Ethical AI" in Criminal Law},
author = {Chelsea Barabas},
url = {https://heinonline.org/HOL/P?h=hein.journals/gjmodco12&i=90},
year = {2020},
date = {2020-01-01},
urldate = {2024-10-21},
journal = {Georgetown Journal of Law & Modern Critical Race Perspectives},
volume = {12},
number = {2},
pages = {83–112},
abstract = {Data-driven decision-making regimes, often branded as “artificial intelligence,” are rapidly proliferating across the US criminal justice system as a means of predicting and managing the risk of crime and addressing accusations of discriminatory practices. These data regimes have come under increased scrutiny, as critics point out the myriad ways that they can reproduce or even amplify pre-existing biases in the criminal justice system. This essay examines contemporary debates regarding the use of “artificial intelligence” as a vehicle for criminal justice reform, by closely examining two general approaches to, what has been widely branded as, “algorithmic fairness” in criminal law: 1) the development of formal fairness criteria and accuracy measures that illustrate the trade-offs of different algorithmic interventions and 2) the development of “best practices” and managerialist standards for maintaining a baseline of accuracy, transparency and validity in these systems. The essay argues that attempts to render AI-branded tools more accurate by addressing narrow notions of “bias,” miss the deeper methodological and epistemological issues regarding the fairness of these tools. The key question is whether predictive tools reflect and reinforce punitive practices that drive disparate outcomes, and how data regimes interact with the penal ideology to naturalize these practices. The article concludes by calling for an abolitionist understanding of the role and function of the carceral state, in order to fundamentally reformulate the questions we ask, the way we characterize existing data, and how we identify and fill gaps in existing data regimes of the carceral state.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
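The first approach Barabas surveys, formal fairness criteria that make the trade-offs of algorithmic interventions explicit, can be illustrated with a short sketch. The example below is not drawn from the article; the group names, confusion-matrix counts, and choice of metrics are hypothetical, and it only shows how two commonly contrasted criteria (equal rates of being flagged versus equal false positive rates) are computed and compared.

# Illustrative sketch only; the groups, counts, and metrics are hypothetical
# and are not taken from Barabas (2020).
from collections import namedtuple

Counts = namedtuple("Counts", "tp fp tn fn")   # confusion-matrix cells per group

groups = {
    "group_a": Counts(tp=30, fp=20, tn=40, fn=10),
    "group_b": Counts(tp=15, fp=5, tn=70, fn=10),
}

for name, c in groups.items():
    total = c.tp + c.fp + c.tn + c.fn
    flagged_rate = (c.tp + c.fp) / total       # compared under demographic parity
    fpr = c.fp / (c.fp + c.tn)                 # compared under error-rate balance
    accuracy = (c.tp + c.tn) / total
    print(f"{name}: flagged {flagged_rate:.2f}, FPR {fpr:.2f}, accuracy {accuracy:.2f}")

With counts like these the two criteria generally cannot be satisfied at once when base rates differ, which is the kind of trade-off the formal-fairness literature maps out.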
Okidegbe, Ngozi
When They Hear Us: Race, Algorithms and the Practice of Criminal Law Journal Article
In: Kansas Journal of Law & Public Policy, vol. 29, no. 3, pp. 329–338, 2019.
@article{okidegbe_when_2019,
title = {When They Hear Us: Race, Algorithms and the Practice of Criminal Law},
author = {Ngozi Okidegbe},
url = {https://heinonline.org/HOL/P?h=hein.journals/kjpp29&i=352},
year = {2019},
date = {2019-01-01},
urldate = {2024-10-21},
journal = {Kansas Journal of Law & Public Policy},
volume = {29},
number = {3},
pages = {329–338},
abstract = {We are in the midst of a fraught debate in criminal justice reform circles about the merits of using algorithms. Proponents claim that these algorithms offer an objective path towards substantially lowering high rates of incarceration and racial and socioeconomic disparities without endangering community safety. On the other hand, racial justice scholars argue that these algorithms threaten to entrench racial inequity within the system because they utilize risk factors that correlate with historic racial inequities, and in so doing, reproduce the same racial status quo, but under the guise of scientific objectivity.
This symposium keynote address discusses the challenge that the continued proliferation of algorithms poses to the pursuit of racial justice in the criminal justice system. I start from the viewpoint that racial justice scholars are correct about currently employed algorithms. However, I advocate that as long as we have algorithms, we should consider whether they could be redesigned and repurposed to counteract racial inequity in the criminal law process. One way that algorithms might counteract inequity is if they were designed by most impacted racially marginalized communities. Then, these algorithms might counterintuitively benefit these communities by endowing them with a democratic mechanism to contest the harms that the criminal justice system’s operation enacts on them.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Eaglin, Jessica M.
Technologically Distorted Conceptions of Punishment Journal Article
In: Washington University Law Review, vol. 97, no. 2, pp. 483–544, 2019.
@article{eaglin_technologically_2019,
title = {Technologically Distorted Conceptions of Punishment},
author = {Jessica M. Eaglin},
url = {https://heinonline.org/HOL/P?h=hein.journals/walq97&i=497},
year = {2019},
date = {2019-01-01},
urldate = {2024-10-21},
journal = {Washington University Law Review},
volume = {97},
number = {2},
pages = {483–544},
abstract = {Much recent work in academic literature and policy discussions suggests that the proliferation of actuarial — meaning statistical — assessments of a defendant’s recidivism risk in state sentencing structures is problematic. Yet scholars and policymakers focus on changes in technology over time while ignoring the effects of these tools on society. This Article shifts the focus away from technology to society in order to reframe debates. It asserts that sentencing technologies subtly change key social concepts that shape punishment and society. These same conceptual transformations preserve problematic features of the sociohistorical phenomenon of mass incarceration. By connecting technological interventions and conceptual transformations, this Article exposes an obscured threat posed by the proliferation of risk tools as sentencing reform. As sentencing technologies transform sentencing outcomes, the tools also alter society’s language and concerns about punishment. Thus, actuarial risk tools as technological sentencing reform not only excise society’s deeper issues of race, class, and power from debates. The tools also strip society of a language to resist the status quo by changing notions of justice along the way.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Richardson, Rashida; Schultz, Jason M.; Crawford, Kate
Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice Journal Article
In: New York University Law Review Online, vol. 94, pp. 15–55, 2019.
@article{richardson_dirty_2019,
title = {Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice},
author = {Rashida Richardson and Jason M. Schultz and Kate Crawford},
url = {https://heinonline.org/HOL/P?h=hein.journals/nyulro94&i=15},
year = {2019},
date = {2019-01-01},
urldate = {2024-10-21},
journal = {New York University Law Review Online},
volume = {94},
pages = {15–55},
abstract = {Law enforcement agencies are increasingly using predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced during documented periods of flawed, racially biased, and sometimes unlawful practices and policies (“dirty policing”). These policing practices and policies shape the environment and the methodology by which data is created, which raises the risk of creating inaccurate, skewed, or systemically biased data (“dirty data”). If predictive policing systems are informed by such data, they cannot escape the legacies of the unlawful or biased policing practices that they are built on. Nor do current claims by predictive policing vendors provide sufficient assurances that their systems adequately mitigate or segregate this data.
In our research, we analyze thirteen jurisdictions that have used or developed predictive policing tools while under government commission investigations or federal court monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, we examine the link between unlawful and biased police practices and the data available to train or implement these systems. We highlight three case studies: (1) Chicago, an example of where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices and recent litigation suggests an extremely high risk that dirty data was or could be used in predictive policing; and (3) Maricopa County, where despite extensive evidence of dirty policing practices, a lack of public transparency about the details of various predictive policing systems restricts a proper assessment of the risks. The implications of these findings have widespread ramifications for predictive policing writ large. Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed or unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system. The use of predictive policing must be treated with high levels of caution and mechanisms for the public to know, assess, and reject such systems are imperative.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
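The feedback-loop risk Richardson, Schultz, and Crawford describe can be sketched with a toy simulation. The numbers and the allocation rule below are hypothetical and deliberately simplified; this is not a reproduction of the article's analysis, only an illustration of how a skewed historical record ("dirty data") can keep steering enforcement, and therefore future data collection, toward the same places.

# Toy sketch of the feedback loop described in the abstract; the numbers and
# the allocation rule are hypothetical, not taken from Richardson et al. (2019).
# Both districts produce the same number of true incidents each period, but
# district 0 starts with a larger recorded history because of past enforcement.

TRUE_INCIDENTS = 100            # identical in both districts every period
recorded = [300, 200]           # skewed historical record ("dirty data")

for period in range(10):
    # Predictive allocation: most patrols go where the record says crime is.
    hot = 0 if recorded[0] >= recorded[1] else 1
    patrol_share = {hot: 0.7, 1 - hot: 0.3}
    for d in (0, 1):
        # Only incidents occurring where police are present enter the record.
        recorded[d] += int(TRUE_INCIDENTS * patrol_share[d])

print(recorded)                 # the initial gap widens despite equal true rates

Both districts generate the same number of true incidents each period, yet the recorded gap grows because the record itself decides where observation happens.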
Crootof, Rebecca
"Cyborg Justice" and the Risk of Technological-Legal Lock-in Human/AI Teaming Journal Article
In: Columbia Law Review Online, vol. 119, pp. 233–251, 2019.
@article{crootof_cyborg_2019,
title = {"Cyborg Justice" and the Risk of Technological-Legal Lock-in Human/AI Teaming},
author = {Rebecca Crootof},
url = {https://heinonline.org/HOL/P?h=hein.journals/sidbarc119&i=233},
year = {2019},
date = {2019-01-01},
urldate = {2024-10-21},
journal = {Columbia Law Review Online},
volume = {119},
pages = {233–251},
abstract = {Although Artificial Intelligence (AI) is already of use to litigants and legal practitioners, we must be cautious and deliberate in incorporating AI into the common law judicial process. Human beings and machine systems process information and reach conclusions in fundamentally different ways, with AI being particularly ill-suited for the rule application and value balancing required of human judges. Nor will “cyborg justice”—hybrid human/AI judicial systems that attempt to marry the best of human and machine decisionmaking and minimize the drawbacks of both—be a panacea. While such systems would ideally maximize the strengths of human and machine intelligence, they might also magnify the drawbacks of both. They also raise distinct teaming risks associated with overtrust, undertrust, and interface design errors, as well as second-order structural side effects.
One such side effect is “technological–legal lock-in.” Translating rules and decisionmaking procedures into algorithms grants them a new kind of permanency, which creates an additional barrier to legal evolution. In augmenting the common law’s extant conservative bent, hybrid human/AI judicial systems risk fostering legal stagnation and an attendant loss of judicial legitimacy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mayson, Sandra G.
Bias in, Bias out Journal Article
In: Yale Law Journal, vol. 128, no. 8, pp. 2218–2301, 2018.
@article{mayson_bias_2018,
title = {Bias in, Bias out},
author = {Sandra G. Mayson},
url = {https://www.yalelawjournal.org/article/bias-in-bias-out},
year = {2018},
date = {2018-01-01},
urldate = {2024-10-08},
journal = {Yale Law Journal},
volume = {128},
number = {8},
pages = {2218–2301},
abstract = {Police, prosecutors, judges, and other criminal justice actors increasingly use algorithmic risk assessment to estimate the likelihood that a person will commit future crime. As many scholars have noted, these algorithms tend to have disparate racial impact. In response, critics advocate three strategies of resistance: (1) the exclusion of input factors that correlate closely with race, (2) adjustments to algorithmic design to equalize predictions across racial lines, and (3) rejection of algorithmic methods altogether.
This Article’s central claim is that these strategies are at best superficial and at worst counterproductive, because the source of racial inequality in risk assessment lies neither in the input data, nor in a particular algorithm, nor in algorithmic methodology. The deep problem is the nature of prediction itself. All prediction looks to the past to make guesses about future events. In a racially stratified world, any method of prediction will project the inequalities of the past into the future. This is as true of the subjective prediction that has long pervaded criminal justice as of the algorithmic tools now replacing it. What algorithmic risk assessment has done is reveal the inequality inherent in all prediction, forcing us to confront a much larger problem than the challenges of a new technology. Algorithms shed new light on an old problem.
Ultimately, the Article contends, redressing racial disparity in prediction will require more fundamental changes in the way the criminal justice system conceives of and responds to risk. The Article argues that criminal law and policy should, first, more clearly delineate the risks that matter, and, second, acknowledge that some kinds of risk may be beyond our ability to measure without racial distortion—in which case they cannot justify state coercion. To the extent that we can reliably assess risk, on the other hand, criminal system actors should strive to respond to risk with support rather than restraint whenever possible. Counterintuitively, algorithmic risk assessment could be a valuable tool in a system that targets the risky for support.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
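Mayson's "bias in, bias out" argument can be made concrete with a small numerical sketch. The base rates, score bins, and threshold below are hypothetical and do not come from the article; the point is only the arithmetic: a risk score that is calibrated within each group still produces a higher false positive rate for the group with the higher historical base rate.

# Numerical sketch only; base rates, score bins, and the threshold are
# hypothetical and are not taken from Mayson (2018).
# Within each group the score is calibrated: people with score s reoffend at rate s.

def false_positive_rate(bins, threshold=0.5):
    """bins: list of (calibrated_score, number_of_people) pairs for one group."""
    fp = tn = 0.0
    for score, n in bins:
        non_reoffenders = n * (1 - score)   # expected count, by calibration
        if score >= threshold:
            fp += non_reoffenders           # flagged, but would not reoffend
        else:
            tn += non_reoffenders
    return fp / (fp + tn)

# Same calibrated score bins, but more of group A sits in the high-score bin,
# i.e. group A has the higher historical base rate of recorded rearrest.
group_a = [(0.2, 40), (0.6, 60)]            # base rate 0.44
group_b = [(0.2, 70), (0.6, 30)]            # base rate 0.32

print("FPR A:", round(false_positive_rate(group_a), 2))   # about 0.43
print("FPR B:", round(false_positive_rate(group_b), 2))   # about 0.18

The disparity here comes from the unequal base rates encoded in the historical data rather than from any particular modelling choice, which is the sense in which any predictive method inherits it.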
Rizer, Arthur; Watney, Caleb
Artificial Intelligence Can Make Our Jail System More Efficient, Equitable, and Just Journal Article
In: Texas Review of Law and Politics, vol. 23, no. 1, pp. 181–228, 2018.
@article{rizer_artificial_2018,
title = {Artificial Intelligence Can Make Our Jail System More Efficient, Equitable, and Just},
author = {Arthur Rizer and Caleb Watney},
url = {https://heinonline.org/HOL/P?h=hein.journals/trlp23&i=193},
year = {2018},
date = {2018-01-01},
urldate = {2024-10-21},
journal = {Texas Review of Law and Politics},
volume = {23},
number = {1},
pages = {181–228},
abstract = {Artificial intelligence (AI), and algorithms more broadly, hold great promise for making our criminal justice system more efficient, equitable, and just. Many of these systems are already in place today, assisting with tasks such as risk assessment and case management. In the popular media, these tools have been compared to dystopian science-fiction scenarios run awry. But while these comparisons may succeed in luring readers, the reality of how AI is used in the criminal justice context-at least in its current form-is a bit more mundane. The courts are not at the precipice of replacing jurists with black-robed robots or arresting people before they commit a crime. However, there are real concerns about how effectively and transparently these systems operate, or how they might subtly distort outcomes, without adequate scrutiny.
This article contends that AI can play a critical role in achieving fairer and more efficient pretrial and jail systems, in particular through risk assessment software. Unlike other applications of risk assessment AI, such as for sentencing or parole, pretrial applications have relatively simple goals, involve fewer complex legal questions, and have outcomes that are quicker and easier to measure. Thus, it is likely that the pretrial and jail stages will be the testbed for broader deployment of AI technology in the justice system.
Of course, AI will not (and should not) supplant human judgment any time soon. A machine cannot yet read a defendant's demeanor or assess the full context of facts the way an experienced judge can. But AI can counter certain human biases and, if deployed in a transparent manner, can help advise judges in ways that will produce better outcomes-such as reduced crime rates and lower jail populations.
This article will differentiate between the various types of algorithms and explain current capabilities, as well as give an overview of current pretrial and jail system trends. Next, we give a brief overview of the history of risk assessment tools, their current uses in the pretrial and jail systems, and the potential for further reform using more advanced algorithms. In addition, the article will discuss the relevant legal framework as well as governance capabilities across state, municipal, and federal jurisdictions. We then will attempt to consider the most prominent critiques of algorithms in the jail system, especially in risk assessment. Finally, the article will look at potential policy and legal solutions for the effective stewardship and deployment of algorithms in the pretrial and jail systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}