Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-tuning

by Mozhdeh Gheini, Xuezhe Ma, Jonathan May

Released as an article.

2022  

Abstract

A recent family of techniques, dubbed lightweight fine-tuning methods, facilitates parameter-efficient transfer learning by updating only a small set of additional parameters while keeping the parameters of the pretrained language model frozen. While these techniques have proven effective, there are no existing studies of whether and how such knowledge of the downstream fine-tuning approach should affect the pretraining stage. In this work, we show that taking the ultimate choice of fine-tuning method into consideration boosts the performance of parameter-efficient fine-tuning. By relying on optimization-based meta-learning using MAML, with certain modifications for our distinct purpose, we prime the pretrained model specifically for parameter-efficient fine-tuning, resulting in gains of up to 1.7 points on cross-lingual NER fine-tuning. Our ablation settings and analyses further reveal that the tweaks we introduce in MAML are crucial for the attained gains.
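
The core idea in the abstract, meta-training the backbone so that subsequent parameter-efficient fine-tuning works better, can be illustrated with a minimal first-order MAML-style sketch. Everything below is an illustrative assumption (toy model, additive adapter vector as the stand-in for the lightweight parameters, hyperparameters), not the paper's actual implementation: the inner loop adapts only the small adapter, while the outer loop updates the otherwise-frozen backbone so it is primed for exactly that fine-tuning procedure.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    DIM = 16  # toy hidden size (assumption)

    class Model(nn.Module):
        # Toy "pretrained" backbone plus a task head; downstream, only a
        # small additive adapter vector is tuned (a stand-in for
        # prefix/adapter-style lightweight parameters).
        def __init__(self, dim=DIM):
            super().__init__()
            self.backbone = nn.Linear(dim, dim)  # frozen at fine-tuning time
            self.head = nn.Linear(dim, 2)

        def forward(self, x, adapter):
            # Parameter-efficient tweak: shift hidden states by the adapter.
            return self.head(torch.relu(self.backbone(x) + adapter))

    def inner_adapt(model, x_sup, y_sup, steps=3, lr=0.1):
        # Inner loop = the downstream fine-tuning method: update ONLY the
        # lightweight adapter on the support set; the backbone stays frozen.
        adapter = torch.zeros(x_sup.size(1), requires_grad=True)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            loss = loss_fn(model(x_sup, adapter), y_sup)
            (g,) = torch.autograd.grad(loss, adapter)
            # Detaching makes this a first-order approximation.
            adapter = (adapter - lr * g).detach().requires_grad_(True)
        return adapter

    model = Model()
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):  # meta-training over synthetic "tasks"
        x_sup, y_sup = torch.randn(8, DIM), torch.randint(0, 2, (8,))
        x_qry, y_qry = torch.randn(8, DIM), torch.randint(0, 2, (8,))
        adapter = inner_adapt(model, x_sup, y_sup)        # simulate PEFT
        meta_loss = loss_fn(model(x_qry, adapter), y_qry)  # post-PEFT loss
        meta_opt.zero_grad()
        meta_loss.backward()  # updates backbone/head so PEFT transfers well
        meta_opt.step()

Detaching the adapter after each inner step keeps the sketch first-order and cheap; the specific modifications the paper makes to MAML differ and are detailed in the full text.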

Archived Files and Locations

application/pdf  261.5 kB
file_bbnsvmfgbvgadp3mk6ib6lqkiy
arxiv.org (repository)
web.archive.org (webarchive)
Type      article
Stage     submitted
Date      2022-05-25
Version   v1
Language  en
arXiv     2205.12453v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: a6b8039b-89a7-41d7-a277-1bcec1d8df6d
API URL: JSON