JURIX 2022: 35TH INTERNATIONAL CONFERENCE ON LEGAL KNOWLEDGE AND INFORMATION SYSTEMS
PROGRAM FOR FRIDAY, DECEMBER 16TH

09:00-10:30 Session 9: Legal Knowledge Extraction II
09:00
Toward Automatically Identifying Legally Relevant Facts

ABSTRACT. In making legal decisions, courts apply relevant law to facts. While the law typically changes slowly over time, facts vary from case to case. Nevertheless, underlying patterns of fact may emerge. This research focuses on underlying fact patterns commonly present in cases where motorists are stopped for a traffic violation and subsequently detained while a police officer conducts a canine sniff of the vehicle for drugs. We present a set of underlying patterns of fact, that is, factors of suspicion, that police and courts apply in determining reasonable suspicion. We demonstrate how these fact patterns can be identified and annotated in legal cases and how these annotations can be employed to train a transformer model to identify the factors in previously unseen legal opinions.

09:20
Conditional Abstractive Summarization of Court Decisions for Laymen and Insights from Human Evaluation

ABSTRACT. Legal text summarization is generally formalized as an extractive text summarization task applied to court decisions, from which the most relevant sentences are identified and returned as a gist meant to be read by legal experts. However, such summaries are not suitable for laymen seeking intelligible legal information. In the scope of the JusticeBot, a question-answering system in French that provides information about housing law, we intend to generate summaries of court decisions that are, on the one hand, conditioned on a question-answer-decision triplet and, on the other hand, intelligible to ordinary citizens not familiar with legal documents. So far, our best model, a further pre-trained BARThez, achieves an average ROUGE-1 score of 37.7, and a deeper manual evaluation of the summaries reveals that there is still room for improvement.
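The ROUGE-1 score reported above can be illustrated with a minimal sketch. This is not the authors' evaluation code; it assumes simple whitespace tokenization and toy English strings, whereas reported scores are typically computed with a reference implementation that handles stemming and proper tokenization.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 3 of the 5 candidate unigrams also occur in the 3-word reference:
# precision 0.6, recall 1.0, so F1 = 0.75.
print(rouge1_f1("the tenant won the case", "the tenant won"))  # 0.75
```

A corpus-level ROUGE-1 figure such as the 37.7 above would then be the average of per-summary scores (scaled by 100) over a test set.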

09:40
Predicting outcomes of Italian VAT decisions

ABSTRACT. This study aims at predicting the outcomes of legal cases based on the textual content of judicial decisions. We present a new corpus of Italian documents, consisting of 226 annotated decisions on Value Added Tax by Regional Tax law commissions. We address a binary classification task on the basis of whether a request is upheld or rejected in the final decision. We employ traditional classifiers and NLP methods to assess which parts of the decision are more informative for the task. Our results are encouraging, indicating feasibility.

09:55
Legal Text Summarization using Argumentative Structures

ABSTRACT. Legal text summarization focuses on the automated creation of summaries for legal texts. Using judgments of the German Federal Court of Justice, we show that the argumentative structure of judgments can improve the selection of guiding principles, a specific kind of summary, as measured by the ROUGE metric. We evaluate our first results and put them in the context of our ongoing work.

10:10
Autosuggestion of relevant cases and statutes

ABSTRACT. A precedent or statute that has been cited frequently to resolve many similar or identical legal issues is considered highly relevant when a similar issue arises. With this paper, we aim to create an autosuggestion tool that predicts the most relevant cases and statutes for a specified legal issue/query. Our approach treats the cited cases and statutes as single tokens with unique IDs, and finds the relevant tokens based on the words describing the legal issues around them. We observed that context-based representations outperformed lexical-based and distributional representations. Moreover, we observed that the method works better for statute law retrieval than for case law retrieval.
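As a toy illustration of the citation-as-token idea, the sketch below builds a bag-of-context-words profile for each citation ID and ranks citations against a query by cosine similarity. The mini-corpus, the `CASE_`/`STAT_` naming scheme, and the count-based similarity are illustrative assumptions, not the paper's actual representations, which include learned contextual embeddings.

```python
import math
from collections import Counter, defaultdict

# Hypothetical mini-corpus: each line mixes issue words with citation
# tokens whose unique IDs (CASE_12, STAT_7, ...) stand in for cited
# cases and statutes.
corpus = [
    "unlawful eviction notice CASE_12 STAT_7",
    "eviction deposit dispute CASE_12",
    "tax assessment appeal CASE_40 STAT_9",
]

def is_citation(tok: str) -> bool:
    return tok.startswith(("CASE_", "STAT_"))

# Bag-of-context-words profile for every citation token.
profiles = defaultdict(Counter)
for sent in corpus:
    toks = sent.split()
    words = [t for t in toks if not is_citation(t)]
    for t in toks:
        if is_citation(t):
            profiles[t].update(words)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(query: str) -> list:
    q = Counter(query.split())
    return sorted(profiles, key=lambda t: cosine(profiles[t], q), reverse=True)

# The eviction-related citations rank ahead of the tax-related ones.
print(suggest("eviction notice"))
```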

10:25
TBA
10:30-11:00 Coffee Break
11:00-12:15 Session 10: Legal Knowledge Modeling and Machine Learning
11:00
Semantic Querying of Knowledge Rich Legal Digital Libraries using Prism

ABSTRACT. Contemporary legal digital libraries such as LexisNexis and Westlaw allow searching for case law using tools of varying sophistication. At the core, various forms of keyword search and indexing are used to find documents of interest. While newer digital library search engines leveraging semantic technologies such as knowledge bases, natural language processing, and knowledge graphs are becoming available, legal databases have yet to take full advantage of them. In this paper, we introduce an experimental legal document search engine, called Prism, that is capable of supporting legal-premise-based search in support of legal theories.

11:20
Reasoning with Legal Cases: A Hybrid ADF-ML Approach

ABSTRACT. Reasoning with legal cases has long been modelled using symbolic methods. In recent years, the increased availability of legal data together with improved machine learning techniques has led to an explosion of interest in data-driven methods applied to the problem of predicting the outcomes of legal cases. Although encouraging results have been reported, these methods are unable to justify the outcomes they produce in satisfactory legal terms and do not exploit the structure inherent in legal domains, in particular with respect to the issues and factors relevant to the decision. In this paper, we present the technical foundations of a novel hybrid approach to reasoning with legal cases, using Abstract Dialectical Frameworks (ADFs) in conjunction with hierarchical BERT. ADFs are used to represent the legal knowledge of a domain in a structured way, to enable justifications and improve performance. The machine learning is targeted at the task of factor ascription; once the factors present in a case are ascribed, the outcome follows from reasoning over the ADF. To realise this hybrid approach, we present a new hybrid system to enable factor ascription, envisioned for use in legal domains such as the European Convention on Human Rights, which is frequently used in modelling experiments.
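A minimal sketch of the hybrid pipeline's control flow, under loud assumptions: the factor names and acceptance conditions below are hypothetical, not the paper's ADF, and the ML ascriptions are passed in as plain booleans rather than produced by hierarchical BERT. The point is only the division of labour: learning ascribes base-level factors, and the outcome then follows by evaluating acceptance conditions from leaves to root.

```python
# Acceptance condition of each ADF node, as a function of the values of
# its parents. "ml_*" inputs stand for factors ascribed by the ML stage.
acceptance = {
    "interference": lambda v: v["ml_interference"],
    "justified": lambda v: v["ml_legitimate_aim"] and v["ml_proportionate"],
    "violation": lambda v: v["interference"] and not v["justified"],
}
order = ["interference", "justified", "violation"]  # leaves-to-root

def decide(ascriptions: dict) -> dict:
    """Evaluate the ADF bottom-up over the ML-ascribed base factors."""
    v = dict(ascriptions)
    for node in order:
        v[node] = acceptance[node](v)
    return v

verdict = decide({"ml_interference": True,
                  "ml_legitimate_aim": True,
                  "ml_proportionate": False})
print(verdict["violation"])  # True: interference present and not justified
```

Because the outcome is derived symbolically, every decision comes with a justification: the chain of satisfied acceptance conditions can be read off the evaluated node values.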

11:40
Why Do Tenants Sue their Landlords? Answers from a Topic Model

ABSTRACT. Topic modeling is widely used across domains to extract the latent topics underlying large corpora, including judicial texts. In the latter, topics tend to be made by and for domain experts but remain unintelligible to laymen. Working with French housing-law court decisions, which mix abstract legal terminology with real-life situations described in everyday language, and similarly to [1], we aim to identify the different situations that can lead a tenant to take their landlord to court by applying topic models. Under quantitative evaluation, LDA and BERTopic deliver the best results, but a closer manual analysis reveals that the latter, embedding-based approach is much better at producing, and even uncovering, topics that describe a tenant's real-life issues and situations.

12:00
On Capturing Legal Knowledge in Ontology and Process Models Combined: The Case of an Appeal Process

ABSTRACT. In this paper, we explore conceptual modeling as a means to improve the explicit representation of key aspects of a legal procedure. We employ in tandem an ontology-based structural conceptual model and a behavioral process model as complementary views on a legal subject matter. We examine as a case a specific type of appeal in the Brazilian legal system and establish a correspondence between elements in the models and fragments of the specific norms on which they are grounded. These correspondences are expressed with identifiers using the Brazilian LexML identification scheme.

12:15-13:00 Session 11: Keynote Speech
12:15
A Typology of Legal Techs: A method to map/compare/assess

ABSTRACT. Legal Informatics and AI & Law have been a niche domain within both law and information & computing science for decades, producing carefully crafted scientific papers and prototypes based on the formalisation of legal rules and legal reasoning, argumentation theory and so much more. The community has been aware of pitfalls, limitations and the potential of various levels of automation in the legal domain, and has produced an acquis that many lawyers and computer scientists are not even aware of. In the meantime, they have been overtaken by providers of so-called ‘legal techs’ that promise the stars while delivering oftentimes underwhelming software systems. During this keynote, I will explain how and why lawyers need to come to terms with the potential reconfiguration of their domain, proposing a method and a mindset to map, compare and assess technologies that claim to support, replace or enhance legal research and legal practice: https://publications.cohubicol.com/typology/

13:00-14:30 Lunch Break
14:30-16:10 Session 12: Deontic logic, defeasible reasoning
14:30
Stable Normative Explanations

ABSTRACT. Modelling the concept of explanation is a central issue in AI systems, as it provides methods for the development of eXplainable AI (XAI). When explanation applies to normative reasoning, XAI aims at promoting normative trust in the decisions of AI systems: such trust depends, in fact, on understanding whether a system's predictions correspond to legally compliant scenarios. This paper extends to normative reasoning the work by Governatori et al. (2022) on the notion of stable explanations in a non-monotonic setting: when an explanation is stable, it can be used to infer the same normative conclusion independently of any other facts that are found afterwards.

14:50
Precedential constraint derived from inconsistent case bases

ABSTRACT. I explore a factor-based model of precedential constraint that, unlike existing models, does not rely on the assumption that the background set of precedent cases is consistent. The model I consider is a generalization of the reason model of precedential constraint that was suggested by Horty. I show that, within this framework, inconsistent case bases behave in a sensible and interesting way, both from a logical and a more practical perspective.
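As a toy rendering of factor-based precedential constraint (closer to the simpler result model than to the reason model the abstract generalizes), the sketch below checks whether a precedent forces an outcome a fortiori in a new case. The factor labels are hypothetical.

```python
# A case is (pro_plaintiff_factors, pro_defendant_factors, outcome).
# A precedent decided for side s forces s in any new case whose position
# for s is at least as strong: the new case has every pro-s factor of the
# precedent, and no pro-opponent factor the precedent lacked.
def forces(precedent: tuple, pi: frozenset, delta: frozenset) -> bool:
    p, d, outcome = precedent
    if outcome == "plaintiff":
        return p <= pi and delta <= d
    return d <= delta and pi <= p

# Toy case base: plaintiff won despite factor d1 favouring the defendant.
cb = [(frozenset({"p1", "p2"}), frozenset({"d1"}), "plaintiff")]

# New case: the same plaintiff factors plus p3, and no defendant factors,
# so the plaintiff's position is a fortiori stronger and the outcome is forced.
print(forces(cb[0], frozenset({"p1", "p2", "p3"}), frozenset()))  # True
```

On an inconsistent case base, as studied in the paper, two precedents may each force opposite outcomes for the same new case; a model like the one above then has to say which constraints survive, which is exactly the question the generalized reason model addresses.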

15:10
An Automata-Based Formalism for Normative Documents with Real-Time

ABSTRACT. Deontic logics have long been the tool of choice for the formal analysis of normative texts. While various such logics have been proposed, many deal with time only in a qualitative sense, i.e., they reason about the ordering but not the timing of events, and it is only in the past few years that real-time deontic logics have been developed to reason about time quantitatively. In this paper we present timed contract automata, an automata-based deontic modelling approach that complements these logics with a more operational view of such normative clauses and provides a computational model more amenable to automated analysis and monitoring.

15:25
A compression and simulation-based approach to fraud discovery

ABSTRACT. With the uptake of digital services in the public and private sectors, the formalization of laws is attracting increasing attention. Yet non-compliant, fraudulent behaviours (money laundering, tax evasion, etc.), that is, practical realizations of violations of the law, remain very difficult to formalize, as one does not know the exact formal rules that define such violations. The present work introduces a methodological framework that aims to discover non-compliance through compressed representations of behaviour, considering a fraudulent agent that explores via simulation the space of possible non-compliant behaviours in a given social domain. The framework is founded on a combination of utility maximization and active learning. We illustrate its application on a simple social domain. The results are promising and seemingly narrow the gap on fundamental questions in AI and Law, although this comes at the cost of developing complex models of the simulation environment and sophisticated reasoning models of the fraudulent agent.

15:40
Can a military autonomous device follow International Humanitarian Law?

ABSTRACT. The paper presents a formal model and an experimental verification of a system that checks compliance with International Humanitarian Law for an autonomous military device.

15:55
Fundamental Revisions on Constraint Hierarchies for Ethical Norms

ABSTRACT. This paper studies constraint hierarchies for ethical norms, which are unwritten and may be relaxed if they conflict with stronger norms. Since such ethical norms are unwritten, initial representations of them may contain errors. To correct those errors, this paper examines fundamental revisions of constraint hierarchies for ethical norms. Although some revisions of representations of ethical norms have been suggested, revisions of constraint hierarchies for ethical norms have not been fully investigated. In this paper, we categorize two fundamental types of revisions of such constraint hierarchies. The first type is preference revision, which changes only the strengths of the constraints, not their contents. The second type is content revision, which changes only the contents of the constraints, not their strengths. We also compare the effects of those revisions against the criteria of syntactic and semantic change, which are common criteria for revisions of legal theories. From the comparison, we find that preference revision tends to make fewer syntactic changes; however, its computation is intractable and incomplete, and it potentially makes a large number of semantic changes. On the other hand, we show that content revision on constraint hierarchies has at least two computations, i.e. DF-contraction and DF-expansion, that are complete and make a small number of semantic changes; however, these computations tend to produce a large number of syntactic changes. This comparison points to the possibility of optimizing between preference revision and content revision, which we consider interesting future work.

16:10-16:30 Coffee Break
16:30-17:35 Session 13: Semantic annotation and legal reasoning
16:30
Judgment Tagging and Recommendation Using Pre-trained Language Models and Legal Taxonomy

ABSTRACT. We study the problem of machine comprehension of court judgments and the generation of descriptive tags for judgments. Our approach makes use of a legal taxonomy D, which serves as a dictionary of canonicalized legal concepts. Given a court judgment J, our method identifies the key contents of J and then applies Word2Vec and BERT-based models to select a short list T_J of terms/phrases from the taxonomy D as descriptive tags of J. The tag set T_J suggests concepts that are relevant to or associated with J and provides a simple mechanism for readers of J to compose associative queries for effective judgment recommendation. Our prototype system, implemented on a court judgment search platform, shows that our method provides a highly effective tool that assists users in exploring a judgment corpus and in obtaining relevant judgment recommendations.

16:45
WhenTheFact: Extracting events from European legal decisions

ABSTRACT. This paper presents WhenTheFact, a tool that identifies relevant events in European judgments. It is able to extract the structure of the document, as well as when each event happened and who carried it out. WhenTheFact then builds a timeline that allows the user to navigate through the annotations in the document.

17:00
Extracting References from German Legal Texts using Named Entity Recognition

ABSTRACT. Semantic knowledge extraction is fundamental to the exploitation of data. Information extraction tasks are particularly challenging in specialized contexts such as the legal domain. In this paper, Named Entity Recognition is used to make legal texts more accessible to domain experts and laymen. The paper focuses on extracting law references and citations of court decisions, which occur in various syntactic formats. To investigate this task, a reference data set is constructed from a large collection of German court decisions and different NER techniques are compared. Pattern matching, probabilistic sequence labeling (CRF), deep learning (BiLSTM), and transfer learning using a pretrained language model (BERT) are applied to extract references to laws and court decisions. The results show that the BERT-based approach significantly outperforms the methods used in prior work.
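A pattern-matching baseline of the kind compared in the paper can be sketched with a single regular expression. The pattern below is a hypothetical, deliberately narrow example for statute references such as "§ 433 Abs. 1 BGB"; the sheer variety of real-world formats is precisely why the learned approaches outperform such rules.

```python
import re

# Narrow illustrative pattern for German statute references.
LAW_REF = re.compile(
    r"§§?\s*\d+[a-z]?"              # section number, e.g. "§ 433" or "§ 90a"
    r"(?:\s*Abs\.\s*\d+)?"          # optional paragraph ("Absatz")
    r"(?:\s*S(?:atz)?\.?\s*\d+)?"   # optional sentence ("Satz")
    r"\s+[A-ZÄÖÜ][A-Za-zÄÖÜäöü]*"   # statute abbreviation, e.g. BGB, StGB
)

text = "Der Anspruch folgt aus § 433 Abs. 1 BGB in Verbindung mit § 311 BGB."
print(LAW_REF.findall(text))  # ['§ 433 Abs. 1 BGB', '§ 311 BGB']
```

Such a rule misses, for instance, enumerations ("§§ 823, 826 BGB") and references spread across clause boundaries, cases that sequence-labeling models handle by learning from annotated context.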

17:15
Toward an Integrated Annotation and Inference Platform for Enhancing Justifications for Algorithmically Generated Legal Recommendations and Decisions

ABSTRACT. Convincing justifications strengthen the usability of the legal recommendations and decisions that are produced by algorithmic computations. A legal informatics system may offer similar cases for preparing cross-examinations in court, and may even recommend sentences for defendants. Inference systems constructed with machine learning (ML) approaches typically rely on training data to learn to select the recommendations and decisions. An ML-based inference procedure that offers satisfactory recommendations and decisions would be more useful if we could associate its outputs with appropriate supporting evidence.

We believe that producing such supporting evidence requires at least some high-quality annotated data for training. Given a collection of original judgment documents, we use existing tools for lexical, syntactic, semantic, and even pragmatic analysis to mark up the texts. Human experts can verify and correct the raw annotations. Our system also allows the annotators to read, find, and mark statements expressing high-level legal factors for specific categories of lawsuits. The annotated data will be used to train a new generation of tools, hopefully improving the quality of future annotations.

Currently, we use the open judgment documents of the courts in Taiwan in our system. We believe that the system architecture can be adopted by legal informatics systems in other languages. If the proposal is accepted, we hope to demonstrate the current operation of the annotation system both on site and online.

17:25
Scribe: A Specialized Collaborative Tool for Legal Judgment Annotation

ABSTRACT. Scribe is a legal decision annotation platform that facilitates the generation of learning datasets for Legaltech teams, in which legal experts and developers interact with the aim of accelerating annotation. The platform enables legal experts to control task characteristics such as categories of claims and classes of named entities. Legal NLP model developers can compose (and select) custom datasets for specific tasks from a large database. Multiple legal experts can annotate a dataset, while developers are kept up to date on its progress. The platform is organized into modules, is maintainable and extensible as well as flexible, and produces unified output in JSON format. See our demo at https://youtu.be/ZZRUgWkyGIk