The Journal of Things We Like (Lots)

Many nation states have grappled with the questions raised by the use of artificial intelligence (AI) in administrative decision-making, law enforcement and criminal prosecution. National courts have addressed the use of data analytics for criminal sentencing. National legislatures have debated regulations limiting the use of machine learning for surveillance and profiling. But what role does international law play in the governance of existing and emerging artificial intelligence technologies? As of this writing, there are no international treaties providing guidance or imposing obligations on signatories in shaping the regulation of artificial intelligence. National law is the sole locus for containing artificial intelligence-based technologies.

Two essays published in the American Journal of International Law Unbound contribute to the neglected area of international law and artificial intelligence. Both look to international human rights law as the source of protections for liberty and equality against the encroaching technologies of machine learning, data analytics, and other software-aided tools in the domains of law. Each, however, takes a different approach to integrating technology with traditional legal methods to rein in unchecked uses of artificial intelligence. One author is skeptical of human rights law and its potentially luddite tendencies. The other advocates for democratic values, as embodied in international human rights law, as the check on the deployment of new technologies. Because these two essays fill a longstanding gap in the scholarly literature on artificial intelligence and international law through contrasting yet complementary approaches, they are important works that I like lots.

Professor Malcolm Langford, University of Oslo Law Faculty, examines how two rights recognized under international human rights law can moderate the use of automation in administrative decision making: the right to social security and the right to a fair trial. These two rights, he points out, are often the basis for criticisms of administrative implementations of artificial intelligence for making decisions about the denial of benefits or alteration of rights. While Langford acknowledges the criticisms, he emphasizes that “they should be evidence-based, informed by an understanding of ‘technological systems,’ and cognizant of the trade-offs between human and machine failure.” (P. 141).

Langford’s account of AI begins with the parallels between the logic of AI and the logic of legal process recognized in the 1970s. This recognition led to the introduction of automated technology in the administration of benefits, justified in part by the need for efficiency in allocating and distributing benefits and in part by the need to promote ease of access. As automation expanded, the democratic goals of transparency, participation, and accountability became compromised. At the same time, the advantages of technology in providing benefits underscored the need for digital literacy among the recipients of government benefits. Furthermore, digitization of appeals transformed administrative hearings into formal adjudication. That transformation reveals the need to recognize a right to a fair trial within the digital rule of law. This right is necessary to combat arbitrariness and discrimination created by digital algorithms. In addition, the right can counter concerns about legal accuracy and lack of transparency in algorithm-based methods. Finally, a right to a fair trial can address inequities between parties arising from unequal resources and access to counsel.

Substantive rights may cure the threats from digital technologies, but they also serve to critique the use of artificial intelligence in legal process more broadly. Langford identifies several ways in which critical empiricism can guide the critique. The primary question is how critics frame the technology. Langford emphasizes that technology is not an artifact or a thing but rather “complex assemblages of components and know-how that make up ‘technological systems.’” (P. 145). Analogously, the right to a fair trial “is based on a complex combination of actors, processes, and rules.” (Id.). This systems-based approach to artificial intelligence and legal rights highlights the use of systems as tools for control as well as tools for change. “Public administration,” Langford concludes, “should be understood as a form of technology—a complex and hierarchical amalgam of rules, algorithms, institutions and spaces—that can both liberate and repress.” (Id.)

This complexity points to the need for evidence to document the ills of artificial intelligence. The design of automated administration should minimize these dangers, whether they arise from machine or human error. Evidence should inform regulation of artificial intelligence so as to preserve the values of democracy, such as accountability and transparency of process. Regulation would go together with new digital tools that can hold the government accountable for how technologies are implemented and used for decision-making.

Langford envisions a role for digital technology to combat digital technology: “Public and private investment in digital accountability will be crucial therefore in ensuring that automation advances rather than retards international human rights.” (P. 146). Langford identifies “public interest technologies” that can mobilize citizens to counter AI and government overreach, to lodge complaints that identify AI failures, and to propose regulations to cure them. One example is JustBot, an application to help individuals in Europe “apply more readily to the European Court of Human Rights and potentially avoid customary summary rejection.” (Id.)

In contrast with Langford, Daragh Murray of the University of Essex advocates for a critical regulatory perspective on artificial intelligence in administrative decision-making. His argument is that international human rights law obligates states to ask two questions before deploying artificial intelligence: (1) why a deployment is necessary and (2) what alternatives are available. These two questions are assessed against democratic principles that protect individuals against arbitrary state interferences violative of human rights. What Murray proffers is a framework for legislatures to follow in assessing policy justifications for adopting AI technology and for courts to use in reviewing legislative decision-making.

Murray frames this inquiry as an ex ante obligation on states to address these questions before deploying AI technologies. But he also recognizes that courts may apply this inquiry ex post to review AI technologies that states have already implemented. He uses the example of live facial recognition technology in the United Kingdom, where, as of the article’s publication, an appeals court was reviewing the South Wales Police’s adoption of facial recognition technology.

Whether an AI deployment is necessary turns on an assessment of “the potential utility and the potential harm of any deployment, in light of the constraints of democratic society.” (P. 160). This assessment of utility and harm would include fidelity to democratic principles, the solution to a pressing need, and a proportionality analysis of means and ends. This third factor points to the consideration of alternatives to artificial intelligence in light of the objectives underpinning the deployment, the necessity of the objective, and the precise manner in which the technology is implemented. Furthermore, before deploying any AI technology, the state should consider whether it “could use other, less invasive, approaches to achieve the same—or sufficiently similar—objectives.” (P. 161). For example, alternatives to facial recognition technology for identifying individuals subject to an arrest warrant would include contacting the person’s family or household and visiting places the person is known to frequent.

This two-question approach leads Murray to an inquiry that would distinguish between AI technologies that represent “a continuation of preexisting police capability by other means, or…a step-change in capability.” (P. 162). For AI technologies that serve as a continuation, Murray proposes that resource efficiencies are relevant for decision-making, given the potentially positive impact they would have on the state’s effectiveness. If, however, a particular AI technology represents a step-change in police capability, resource efficiencies should not be relevant for consideration because of the expansion in state powers and the potentially adverse effects on human rights. In short, Murray offers a human rights-centered methodology to assess and control state deployment of artificial intelligence.

These two articles together fill a gap in understanding artificial intelligence within international law. While filling that gap, the articles also expand the debates over artificial intelligence. International human rights law can aid human judgment as it confronts the challenges posed by artificial intelligence. These two contributions set forth the place of legal process and the role of AI in the future of human rights. Together they provide a preliminary roadmap for reforms. Separately, each article supports one of the two sides of the ongoing debates: those who favor traditional law and those who favor novel technologies. I like these two articles a lot because they are important seeds for future scholarship on law, policy, and technology.

Shubha Ghosh, Artificial Intelligence, Human Rights, & Legal Judgment, JOTWELL (April 4, 2023) (reviewing Malcolm Langford, Taming the Digital Leviathan: Automated Decision-Making and International Human Rights, 114 Am. J. of Int'l. L. Unbound 141 (2020); Daragh Murray, Using Human Rights Law to Inform States’ Decisions to Deploy AI, 114 Am. J. of Int'l. L. Unbound 158 (2020)), https://intl.jotwell.com/?p=4651&preview=true.