Minimum Description Length Recurrent Neural Networks

by Nur Lan, Michal Geyer, Emmanuel Chemla, Roni Katzir

Released as an article.

2022  

Abstract

We train neural networks to optimize a Minimum Description Length score, i.e., to balance between the complexity of the network and its accuracy at a task. We show that networks optimizing this objective function master tasks involving memory challenges and go beyond context-free languages. These learners master languages such as a^n b^n, a^n b^n c^n, a^n b^2n, a^n b^m c^(n+m), and they perform addition. Moreover, they often do so with 100% accuracy. The resulting networks are small and their inner workings are transparent. We thus provide formal proofs that their perfect accuracy holds not only on a given test set, but for any input sequence. To our knowledge, no other connectionist model has been shown to capture the underlying grammars for these languages in full generality.
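To make the objective described in the abstract concrete, below is a minimal illustrative sketch (not the authors' implementation) of an MDL-style score: the sum of a model-complexity term |G| and a data-encoding term |D:G|, both measured in bits. The encoding choices here (a fixed bit cost per nonzero parameter, cross-entropy in bits for the data term) are assumptions for illustration only.

```python
# Hedged sketch of an MDL-style score for a small network.
# |G|   : bits to encode the model itself (here: a crude fixed cost per
#         nonzero parameter -- an illustrative assumption, not the paper's encoding).
# |D:G| : bits to encode the data given the model, i.e. the cross-entropy
#         (negative log2-likelihood) of the targets under the model's predictions.
import numpy as np

def model_description_length(params, bits_per_param=8):
    """Crude |G|: a fixed number of bits per nonzero parameter (assumption)."""
    return sum(bits_per_param * np.count_nonzero(p) for p in params)

def data_description_length(pred_probs, targets):
    """|D:G|: negative log2-likelihood of the target symbols, in bits."""
    eps = 1e-12
    return -np.sum(np.log2(pred_probs[np.arange(len(targets)), targets] + eps))

def mdl_score(params, pred_probs, targets):
    """Total MDL score = model cost + data cost, both in bits."""
    return model_description_length(params) + data_description_length(pred_probs, targets)

# Toy usage: two hypothetical parameter arrays and predictions over a 3-symbol alphabet.
params = [np.array([[0.5, 0.0], [1.0, -1.0]]), np.array([0.3, 0.0, 0.7])]
pred_probs = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.8, 0.1]])
targets = np.array([0, 1])
print(mdl_score(params, pred_probs, targets))
```

A learner minimizing this score trades accuracy against network size: shrinking or zeroing parameters lowers |G| but typically raises |D:G|, which is the balance the abstract refers to.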

Archived Files and Locations

application/pdf  1.5 MB
file_vn7htkiccre6bgd5re6umjjdqm
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2022-03-25
Version   v2
Language   en
arXiv  2111.00600v2
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 014b4e6d-fd85-46fb-aa7d-cde254ed69c7