Strong mixed-integer programming formulations for trained neural networks
by
Ross Anderson, Joey Huchette, Christian Tjandraatmadja, Juan Pablo Vielma
2019
Abstract
We present an ideal mixed-integer programming (MIP) formulation for a
rectified linear unit (ReLU) appearing in a trained neural network. Our
formulation requires a single binary variable and no additional continuous
variables beyond the input and output variables of the ReLU. We contrast it
with an ideal "extended" formulation with a linear number of additional
continuous variables, derived through standard techniques. An apparent drawback
of our formulation is that it requires an exponential number of inequality
constraints, but we provide a routine to separate the inequalities in linear
time. We also prove that these exponentially many constraints are
facet-defining under mild conditions. Finally, we study network verification
problems and observe that dynamically separating from the exponential
inequalities 1) is much more computationally efficient and scalable than the
extended formulation, 2) decreases the solve time of a state-of-the-art MIP
solver by a factor of 7 on smaller instances, and 3) nearly matches the dual
bounds of a state-of-the-art MIP solver on harder instances, after just a few
rounds of separation and in orders of magnitude less time.
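The abstract's claim of linear-time separation can be illustrated with a minimal sketch. It assumes the exponential family for a single ReLU y = max(0, w·x + b), with input bounds L ≤ x ≤ U, is indexed by subsets I of the inputs in the standard form y ≤ Σ_{i∈I} w_i(x_i − L̆_i(1−z)) + (b + Σ_{i∉I} w_i Ŭ_i)z, where L̆_i/Ŭ_i are the bounds minimizing/maximizing w_i x_i and z is the binary indicator that the unit is active. This specific inequality form and the helper name `separate_relu_cut` are illustrative assumptions, not taken verbatim from the paper; the key point is that each index can be assigned to I or its complement independently, so the most violated cut is found in one pass:

```python
def separate_relu_cut(w, b, L, U, x, y, z):
    """Linear-time separation sketch for a ReLU y = max(0, w.x + b).

    Given a (possibly fractional) point (x, y, z), greedily build the
    subset I that minimizes the right-hand side of the assumed family
        y <= sum_{i in I} w_i*(x_i - Lb_i*(1-z)) + (b + sum_{i not in I} w_i*Ub_i)*z,
    and return (I, rhs). The point violates the family iff y > rhs.
    """
    I = []
    rhs = b * z
    for i, wi in enumerate(w):
        Lb = L[i] if wi >= 0 else U[i]   # bound minimizing wi * xi
        Ub = U[i] if wi >= 0 else L[i]   # bound maximizing wi * xi
        in_term = wi * (x[i] - Lb * (1.0 - z))   # contribution if i in I
        out_term = wi * Ub * z                   # contribution if i not in I
        if in_term < out_term:
            I.append(i)
            rhs += in_term
        else:
            rhs += out_term
    return I, rhs


# Example: w = (1, 1), b = 0, inputs in [-1, 1]^2. The point
# x = (1, -1), y = 1, z = 0.5 satisfies the textbook big-M relaxation
# (y <= w.x + b + 2(1-z), y <= 2z, y >= max(0, w.x + b)) but is cut off here:
I, rhs = separate_relu_cut([1.0, 1.0], 0.0, [-1.0, -1.0], [1.0, 1.0],
                           [1.0, -1.0], 1.0, 0.5)
print(I, rhs)  # I == [1], rhs == 0.0, so y = 1 > rhs: a violated cut
```

At a genuinely feasible integer point (say x = (1, 1), z = 1, y = 2) the routine returns rhs = 2, confirming no inequality in the family is violated there; only fractional points like the one above, which big-M formulations admit, get separated.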
Archived Files and Locations
application/pdf, 232.3 kB
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1811.08359v2