Benchmarking Approximate Inference Methods for Neural Structured Prediction

by Lifu Tu, Kevin Gimpel

Released as an article.

2019  

Abstract

Exact structured inference with neural network scoring functions is computationally challenging but several methods have been proposed for approximating inference. One approach is to perform gradient descent with respect to the output structure directly (Belanger and McCallum, 2016). Another approach, proposed recently, is to train a neural network (an "inference network") to perform inference (Tu and Gimpel, 2018). In this paper, we compare these two families of inference methods on three sequence labeling datasets. We choose sequence labeling because it permits us to use exact inference as a benchmark in terms of speed, accuracy, and search error. Across datasets, we demonstrate that inference networks achieve a better speed/accuracy/search error trade-off than gradient descent, while also being faster than exact inference at similar accuracy levels. We find further benefit by combining inference networks and gradient descent, using the former to provide a warm start for the latter.
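The first family of methods in the abstract, gradient descent with respect to the output structure, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a simple illustrative linear-chain score (per-position scores `U` plus a transition matrix `W`), relaxes the one-hot tag sequence to rows of a simplex via a softmax parameterization, and ascends the score by gradient steps on the underlying logits.

```python
import numpy as np

def softmax(z):
    # row-wise softmax, numerically stabilized
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def score(y, U, W):
    # relaxed linear-chain score: local terms plus pairwise transitions
    return (U * y).sum() + sum(y[t - 1] @ W @ y[t] for t in range(1, len(y)))

def grad_y(y, U, W):
    # gradient of the score with respect to each relaxed tag vector y_t
    g = U.copy()
    g[1:] += y[:-1] @ W      # contribution from the transition into position t
    g[:-1] += y[1:] @ W.T    # contribution from the transition out of position t
    return g

def gd_inference(U, W, steps=200, lr=0.5, seed=0):
    """Gradient-ascent inference over a relaxed output (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    T, L = U.shape
    z = 0.01 * rng.standard_normal((T, L))  # logits of the relaxed labeling
    for _ in range(steps):
        y = softmax(z)
        g = grad_y(y, U, W)
        # backprop through the row-wise softmax: J^T g = y * (g - (g . y))
        gz = y * (g - (g * y).sum(axis=-1, keepdims=True))
        z += lr * gz  # ascend the score
    return softmax(z).argmax(axis=-1)  # discretize the relaxed solution
```

An inference network, by contrast, would amortize this loop: a feed-forward model is trained to map inputs directly to (relaxed) outputs, and the paper's warm-start variant uses that network's output as the initial `z` for the gradient steps above.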

Archived Files and Locations

application/pdf  206.5 kB
file_ufk4e3nsfvgrxjvey4vqaotvj4
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-07-06
Version   v2
Language   en
arXiv  1904.01138v2
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: d2685ed4-0c76-4640-8daa-8592c34585cd