Meta Learning for Natural Language Processing: A Survey
by
Hung-yi Lee, Shang-Wen Li, Ngoc Thang Vu
2022
Abstract
Deep learning has been the mainstream technique in the natural language
processing (NLP) area. However, these techniques require large amounts of
labeled data and are less generalizable across domains. Meta-learning is an
emerging field in machine learning that studies approaches to learning better
learning algorithms. These approaches aim to improve algorithms in various
aspects, including data efficiency and generalizability. The efficacy of such
approaches has been demonstrated in many NLP tasks, but there is no systematic
survey of these approaches in NLP, which hinders more researchers from joining
the field. Our goal with this survey paper is to offer researchers pointers to
relevant meta-learning works in NLP and to attract more attention from the NLP
community to drive future innovation. This paper first introduces the general
concepts of meta-learning and the common approaches. Then we summarize task
construction settings and applications of meta-learning for various NLP
problems and review the development of meta-learning in the NLP community.
arXiv: 2205.01500v1