Meta Learning for Natural Language Processing: A Survey (release_lpyrqh7njfdtnccbxp7kghpeti)

by Hung-yi Lee, Shang-Wen Li, Ngoc Thang Vu

Released as an article.

2022  

Abstract

Deep learning has been the mainstream technique in the natural language processing (NLP) area. However, these techniques require large amounts of labeled data and generalize poorly across domains. Meta-learning is an emerging field in machine learning that studies approaches to learning better learning algorithms. These approaches aim to improve algorithms in various aspects, including data efficiency and generalizability. The efficacy of such approaches has been demonstrated on many NLP tasks, but there is no systematic survey of these approaches in NLP, which hinders more researchers from joining the field. Our goal with this survey paper is to offer researchers pointers to relevant meta-learning works in NLP and to attract more attention from the NLP community to drive future innovation. This paper first introduces the general concepts of meta-learning and the common approaches. We then summarize task construction settings and applications of meta-learning to various NLP problems, and review the development of meta-learning in the NLP community.
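To make the "learning better learning algorithms" idea concrete, below is a minimal sketch of one well-known meta-learning approach, MAML-style optimization of an initialization, on toy linear-regression tasks. The tasks, model, and learning rates here are illustrative assumptions for this sketch, not details from the paper: each task is a line y = a*x with a different slope, and meta-training finds an initial weight from which one inner gradient step adapts well to any task.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss_and_grad(w, a, x):
    """Squared-error loss and gradient for a linear model y = w*x
    on a task whose true mapping is y = a*x."""
    err = w * x - a * x
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

def maml_step(w, tasks, x, inner_lr=0.05, outer_lr=0.1):
    """One MAML meta-update: adapt to each task with a single inner
    gradient step, then update w with the post-adaptation gradients."""
    c = 2.0 * np.mean(x ** 2)            # curvature of the quadratic loss
    meta_grad = 0.0
    for a in tasks:
        _, g = task_loss_and_grad(w, a, x)
        w_adapted = w - inner_lr * g     # inner-loop adaptation
        _, g_adapted = task_loss_and_grad(w_adapted, a, x)
        # chain rule through the inner step: d(w_adapted)/dw = 1 - inner_lr * c
        meta_grad += g_adapted * (1.0 - inner_lr * c)
    return w - outer_lr * meta_grad / len(tasks)

x = rng.uniform(-1.0, 1.0, size=32)      # shared support inputs
tasks = [0.5, 1.0, 1.5, 2.0]             # each task = slope of the target line
w = 5.0                                  # deliberately poor initialization
for _ in range(200):
    w = maml_step(w, tasks, x)
# w now sits at an initialization from which a single inner step
# adapts quickly to any task drawn from this family
```

Because every task loss here is quadratic, the meta-optimal initialization is the mean of the task slopes; the sketch converges there, illustrating how the outer loop optimizes for post-adaptation (not pre-adaptation) performance.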

Archived Files and Locations

application/pdf  480.6 kB
file_oixwz5joffhchamizglfupkwce
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2022-05-03
Version: v1
Language: en
arXiv: 2205.01500v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 8c51e65b-ccc8-4ebb-80d0-508f7fcef092