An Evaluation Dataset for Legal Word Embedding: A Case Study On Chinese Codex

by Chun-Hsien Lin, Pu-Jen Cheng

Released as an article.

2022  

Abstract

Word embedding is a modern distributed word representation approach widely used in many natural language processing tasks. Converting the vocabulary in a legal document into a word embedding model facilitates subjecting legal documents to machine learning, deep learning, and other algorithms, and subsequently performing downstream natural language processing tasks such as document classification, contract review, and machine translation. The most common and practical way to evaluate the accuracy of a word embedding model is to use a benchmark set built from linguistic rules or relationships between words to perform analogical reasoning via algebraic calculation. This paper proposes establishing a Legal Analogical Reasoning Questions Set (LARQS) of 1,134 questions from the 2,388 Chinese Codex corpus using five kinds of legal relations, which is then used to evaluate the accuracy of Chinese word embedding models. Moreover, we discovered that legal relations might be ubiquitous in the word embedding model.
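The evaluation the abstract describes follows the standard word-analogy protocol: for a question "a is to b as c is to d", the model is asked to recover d as the nearest neighbour of the vector b - a + c. The following is a minimal sketch of that protocol, assuming gensim and a pre-trained Chinese word-vector file in word2vec text format; the file name and the placeholder terms are illustrative and are not taken from the LARQS dataset.

    # Minimal sketch of analogy-based accuracy evaluation (not the paper's code).
    # Assumes a word2vec-format vector file; names below are placeholders.
    from gensim.models import KeyedVectors

    model = KeyedVectors.load_word2vec_format("chinese_legal_vectors.txt", binary=False)

    # Each question is (a, b, c, expected_d); real LARQS entries would be
    # Chinese legal terms drawn from the codex corpus.
    questions = [
        ("a_term", "b_term", "c_term", "d_term"),
    ]

    hits = 0
    for a, b, c, expected in questions:
        # Predict d as the word closest to vector(b) - vector(a) + vector(c).
        predicted, _score = model.most_similar(positive=[b, c], negative=[a], topn=1)[0]
        if predicted == expected:
            hits += 1

    accuracy = hits / len(questions)
    print(f"Analogy accuracy: {accuracy:.2%}")

A question counts as correct only when the top-ranked candidate matches the expected term, which is the usual scoring rule for analogy benchmarks of this kind.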

Archived Files and Locations

application/pdf  791.7 kB
file_orarsn4fengpdis673hnz6diie
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2022-03-29
Version   v1
Language   en
arXiv  2203.15173v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: a10548a0-2efe-4634-8749-e9f876820c60