The Rashomon effect occurs when many different explanations exist for the
same phenomenon. In machine learning, Leo Breiman used this term to
characterize problems where many accurate-but-different models exist to
describe the same data. In this work, we study how the Rashomon effect can be
useful for understanding the relationship between training and test
performance, and the possibility that simple-yet-accurate models exist for many
problems. We consider the Rashomon set - the set of almost-equally-accurate
models for a given problem - and study its properties and the types of models
it could contain. We present the Rashomon ratio as a new measure related to
simplicity of model classes, which is the ratio of the volume of the set of
accurate models to the volume of the hypothesis space; the Rashomon ratio is
different from standard complexity measures from statistical learning theory.
For a hierarchy of hypothesis spaces, the Rashomon ratio can help modelers to
navigate the trade-off between simplicity and accuracy. In particular, we find
empirically that a plot of empirical risk vs. Rashomon ratio forms a
characteristic Γ-shaped Rashomon curve, whose elbow seems to be a
reliable model selection criterion. When the Rashomon set is large, models that
are accurate - but that also have various other useful properties - can often
be obtained. These models might obey various constraints such as
interpretability, fairness, or monotonicity.
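The Rashomon ratio described above can be approximated by Monte Carlo sampling: draw models uniformly from the hypothesis space and count the fraction whose empirical risk is within a tolerance of the best risk found. The following is a minimal sketch on a synthetic dataset using a linear-classifier hypothesis class; the data, the sphere parameterization, and the tolerance `epsilon` are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (illustrative, not from the paper).
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def empirical_risk(w, X, y):
    """0-1 loss of the linear classifier sign(X @ w)."""
    preds = (X @ w > 0).astype(int)
    return float(np.mean(preds != y))

# Sample weight vectors uniformly from the unit sphere, standing in
# for a uniform draw over the hypothesis space.
m = 10_000
W = rng.normal(size=(m, 2))
W /= np.linalg.norm(W, axis=1, keepdims=True)

risks = np.array([empirical_risk(w, X, y) for w in W])

# Rashomon ratio estimate: fraction of sampled models whose risk is
# within epsilon of the best sampled risk (volume of the Rashomon set
# relative to the volume of the hypothesis space).
epsilon = 0.05
rashomon_ratio = float(np.mean(risks <= risks.min() + epsilon))
print(f"best risk: {risks.min():.3f}, Rashomon ratio: {rashomon_ratio:.3f}")
```

A large estimated ratio suggests that many near-optimal models exist, which is the regime in which one can hope to find an accurate model that also satisfies constraints such as interpretability or monotonicity.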