An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models

by Lifu Tu, Garima Lalwani, Spandana Gella, He He

Released as an article.

2020  

Abstract

Recent work has shown that pre-trained language models such as BERT improve robustness to spurious correlations in the dataset. Intrigued by these results, we find that the key to their success is generalization from a small amount of counterexamples where the spurious correlations do not hold. When such minority examples are scarce, pre-trained models perform as poorly as models trained from scratch. In the case of extreme minority, we propose to use multi-task learning (MTL) to improve generalization. Our experiments on natural language inference and paraphrase identification show that MTL with the right auxiliary tasks significantly improves performance on challenging examples without hurting the in-distribution performance. Further, we show that the gain from MTL mainly comes from improved generalization from the minority examples. Our results highlight the importance of data diversity for overcoming spurious correlations.
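The abstract describes multi-task learning with a shared pre-trained encoder and auxiliary tasks. Below is a minimal sketch of that general setup; the encoder name, task heads, and loss weighting are illustrative assumptions, not the exact configuration from the paper.

```python
# Minimal multi-task learning sketch: one shared pre-trained encoder,
# separate heads for a main task (e.g. NLI) and an auxiliary task
# (e.g. paraphrase identification). Hypothetical configuration.
import torch.nn as nn
from transformers import AutoModel

class MultiTaskModel(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 main_num_labels=3, aux_num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Task-specific classification heads on top of the shared encoder.
        self.main_head = nn.Linear(hidden, main_num_labels)
        self.aux_head = nn.Linear(hidden, aux_num_labels)

    def forward(self, input_ids, attention_mask, task="main"):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = outputs.last_hidden_state[:, 0]  # [CLS] representation
        return self.main_head(cls) if task == "main" else self.aux_head(cls)

def mtl_step(model, optimizer, main_batch, aux_batch, aux_weight=1.0):
    """One training step on a weighted sum of main- and auxiliary-task losses."""
    loss_fn = nn.CrossEntropyLoss()
    main_logits = model(main_batch["input_ids"], main_batch["attention_mask"], "main")
    aux_logits = model(aux_batch["input_ids"], aux_batch["attention_mask"], "aux")
    loss = loss_fn(main_logits, main_batch["labels"]) \
         + aux_weight * loss_fn(aux_logits, aux_batch["labels"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```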

Archived Files and Locations

application/pdf  282.3 kB
file_tg7wcy6hwnffbo6nanuwb7diyy
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2020-08-10
Version: v2
Language: en
arXiv: 2007.06778v2
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 8429bed1-2d09-4b44-a240-84a843c0b965