Minimum Description Length Recurrent Neural Networks
by
Nur Lan, Michal Geyer, Emmanuel Chemla, Roni Katzir
2022
Abstract
We train neural networks to optimize a Minimum Description Length score,
i.e., to balance between the complexity of the network and its accuracy at a
task. We show that networks optimizing this objective function master tasks
involving memory challenges and go beyond context-free languages. These
learners master languages such as a^n b^n, a^n b^n c^n, a^n b^(2n),
a^n b^m c^(n+m), and they perform addition. Moreover, they often do so with
100% accuracy. The networks are small, and their inner workings are
transparent. We thus provide formal proofs that their perfect accuracy holds
not only on a given test set, but for any input sequence. To our knowledge, no
other connectionist model has been shown to capture the underlying grammars for
these languages in full generality.
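The objective described in the abstract is the standard two-part MDL code: the description length of the hypothesis plus that of the data encoded with its help. As a sketch, using generic notation that does not appear on this page, for a network H and training data D:

    MDL(H, D) = |H| + |D : H|

where |H| is the number of bits needed to encode the network itself (its architecture and weights) and |D : H| is the length of D encoded using H, e.g., the network's cross-entropy on D measured in bits. Minimizing the sum trades network complexity against fit to the data, which is the balance the abstract refers to.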
Archived Files and Locations
application/pdf, 1.5 MB
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 2111.00600v2