Lex Rosetta: Transfer of Predictive Models Across Languages, Jurisdictions, and Legal Domains
by
Jaromir Savelka, Hannes Westermann, Karim Benyekhlef, Charlotte S. Alexander, Jayla C. Grant, David Restrepo Amariles, Rajaa El Hamdani, Sébastien Meeùs, Michał Araszkiewicz, Kevin D. Ashley, Alexandra Ashley, Karl Branting (+6 others)
2021
Abstract
In this paper, we examine the use of multi-lingual sentence embeddings to
transfer predictive models for functional segmentation of adjudicatory
decisions across jurisdictions, legal systems (common and civil law),
languages, and domains (i.e., contexts). Mechanisms for utilizing linguistic
resources outside of their original context have significant potential benefits
in AI & Law because differences between legal systems, languages, or traditions
often block wider adoption of research outcomes. We analyze the use of
Language-Agnostic SEntence Representations (LASER) in sequence-labeling models
based on Gated Recurrent Units (GRUs), which makes the models transferable
across languages. To investigate transfer between different contexts, we
developed an annotation scheme for functional segmentation of adjudicatory
decisions. We found that
models generalize beyond the contexts on which they were trained (e.g., a model
trained on administrative decisions from the US can be applied to criminal law
decisions from Italy). Further, we found that training the models on multiple
contexts increases robustness and improves overall performance when evaluating
on previously unseen contexts. Finally, we found that pooling the training data
from all the contexts enhances the models' in-context performance.
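
The abstract describes the architecture only at a high level; the following is a minimal sketch of that pipeline, assuming the laserembeddings Python package and PyTorch. The label set (LABELS), the SentenceLabeler class, and all layer sizes are hypothetical stand-ins, not the authors' implementation:

    # Minimal sketch of the pipeline described above: per-sentence LASER
    # embeddings feed a bidirectional GRU sequence labeler. The label set,
    # layer sizes, and library choices are illustrative assumptions, not
    # the authors' exact implementation.
    import torch
    import torch.nn as nn
    from laserembeddings import Laser  # pip install laserembeddings

    # Hypothetical functional types for sentences of an adjudicatory decision.
    LABELS = ["OUT_OF_SCOPE", "HEADING", "BACKGROUND", "ANALYSIS", "OUTCOME"]

    class SentenceLabeler(nn.Module):
        """Labels each sentence of a decision from its 1024-dim LASER embedding."""
        def __init__(self, embed_dim=1024, hidden_dim=256, num_labels=len(LABELS)):
            super().__init__()
            self.gru = nn.GRU(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * hidden_dim, num_labels)

        def forward(self, sentence_embeddings):
            # sentence_embeddings: (batch, num_sentences, embed_dim)
            hidden, _ = self.gru(sentence_embeddings)
            return self.classifier(hidden)  # per-sentence label logits

    # LASER maps all supported languages into one embedding space, so the
    # same labeler can score decisions in a language it was never trained on.
    laser = Laser()
    sentences = ["The appellant was convicted on two counts.",  # English
                 "La corte rigetta il ricorso."]                # Italian
    embeddings = laser.embed_sentences(sentences, lang=["en", "it"])  # (2, 1024)

    model = SentenceLabeler()
    logits = model(torch.tensor(embeddings, dtype=torch.float32).unsqueeze(0))
    predicted = [LABELS[i] for i in logits.argmax(dim=-1).squeeze(0).tolist()]

In a setup like this, pooling the annotated decisions from several contexts simply means concatenating their sentence sequences into one training set for the same labeler, which is the multi-context training the abstract reports as improving robustness and in-context performance.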
Archived Files and Locations
application/pdf (1.0 MB), arXiv:2112.07882v1, available from arxiv.org and mirrored at web.archive.org