Keynote Lectures

Richard Chbeir, Université de Pau et des Pays de l'Adour (UPPA), France
Brief Bio Available soon.
Abstract Available soon.

An Overview of AI & Law Models of Case-Based Reasoning, With Applications in Decision Analysis and Explainable AI
Henry Prakken, Utrecht University, Netherlands
Brief Bio Henry Prakken is a professor of Artificial Intelligence and Law in the Responsible AI group of the Department of Information and Computing Sciences at Utrecht University. He holds master's degrees in law (1985) and philosophy (1988) from the University of Groningen. In 1993 he obtained his PhD degree (cum laude) at the Free University Amsterdam with a thesis titled Logical Tools for Modelling Legal Argument. Prakken's main research interests concern artificial intelligence & law and computational models of argumentation. He is a past president of the International Association for AI & Law (IAAIL), of the JURIX Foundation for Legal Knowledge-Based Systems, and of the steering committee of the COMMA conferences on Computational Models of Argument. He serves on the editorial board of several journals, including Artificial Intelligence and Law. From 2017 to 2022 he was an associate editor of Artificial Intelligence.
Abstract In this talk I will give an overview of recent research by my students and myself on formal and computational models of legal case-based reasoning. Legal case-based reasoning is the process of arguing for or against decisions in new cases by drawing analogies to, or stressing differences with, precedent cases. In the field of AI & law, seminal work on case-based reasoning was done by Ashley and Rissland on the HYPO system, followed by Aleven's work on the CATO system. This work primarily focussed on generating case-based debates. More recently, John Horty initiated the formal study of so-called precedential constraint, which addresses a question of a more logical nature, namely, to what extent a decision in a new case is constrained by a body of precedents. In our recent work we have built on this line of research and, among other things, developed gradual consistency measures for collections of case-based decisions. We have also studied the application of models of legal case-based reasoning in explainable AI, by exploiting an analogy between the legal notion of precedent and the machine-learning concept of training data.