Optimizing Automata Learning via Monads
release_f4buxewojzhmrosant6gkasw7y

by Gerco van Heerdt and Matteo Sammartino and Alexandra Silva

Released as an article.

2017  

Abstract

Automata learning has been successfully applied in the verification of hardware and software. The size of the automaton model learned is a bottleneck for scalability, and hence optimizations that enable learning of compact representations are important. This paper exploits monads, both as a mathematical structure and a programming construct, to design, prove correct, and implement a wide class of such optimizations. The former perspective on monads allows us to develop a new algorithm and accompanying correctness proofs, building upon a general framework for automata learning based on category theory. The new algorithm is parametric on a monad, which provides a rich algebraic structure to capture non-determinism and other side-effects. We show that our approach allows us to uniformly capture existing algorithms, develop new ones, and add optimizations. The latter perspective allows us to effortlessly translate the theory into practice: we provide a Haskell library implementing our general framework, and we show experimental results for two specific instances: non-deterministic and weighted automata.
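The abstract describes an algorithm that is parametric on a monad, where the choice of monad captures side-effects such as non-determinism. As a rough illustration of that idea (not the paper's actual library API; all names here are hypothetical), one can sketch an automaton type in Haskell whose transition structure lives in an arbitrary monad `m`: instantiating `m` with `Identity` gives deterministic automata, while the list monad gives non-deterministic ones.

```haskell
-- A minimal sketch, assuming a hypothetical Auto type; this is an
-- illustration of monad-parametric automata, not the paper's library.
import Control.Monad (foldM)

data Auto m s a = Auto
  { initial :: m s            -- initial state(s), wrapped in the monad
  , delta   :: s -> a -> m s  -- monadic transition function
  , accept  :: s -> Bool      -- acceptance predicate on states
  }

-- Run a word through the automaton, threading the monadic effect.
runAuto :: Monad m => Auto m s a -> [a] -> m Bool
runAuto aut w = do
  s0 <- initial aut
  sf <- foldM (delta aut) s0 w
  return (accept aut sf)

-- Example: a non-deterministic automaton (list monad) accepting
-- words over {'a','b'} that end in 'a'.
nfa :: Auto [] Int Char
nfa = Auto { initial = [0], delta = d, accept = (== 1) }
  where
    d 0 'a' = [0, 1]  -- guess whether this 'a' is the last letter
    d 0 _   = [0]
    d _ _   = []      -- state 1 is a dead end for further input

-- A word is accepted if some non-deterministic run accepts.
accepts :: String -> Bool
accepts w = or (runAuto nfa w)
```

Here `runAuto nfa` returns a list of Booleans, one per run, and `accepts` collapses them with `or`; swapping the list monad for a weighted (distribution) monad would analogously yield weighted automata, the paper's other experimental instance.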

Archived Files and Locations

application/pdf  383.7 kB
file_lwjyifz7kfbyrnf4s2mnabqlze
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2017-11-15
Version   v2
Language   en
arXiv  1704.08055v2
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 9ab15ba6-e256-4de2-b944-ee849cc2bbc1