{"abstract":"The Rashomon effect occurs when many different explanations exist for the same phenomenon. In machine learning, Leo Breiman used this term to characterize problems where many accurate-but-different models exist to describe the same data. In this work, we study how the Rashomon effect can be useful for understanding the relationship between training and test performance, and the possibility that simple-yet-accurate models exist for many problems. We consider the Rashomon set - the set of almost-equally-accurate models for a given problem - and study its properties and the types of models it could contain. We present the Rashomon ratio as a new measure related to simplicity of model classes, which is the ratio of the volume of the set of accurate models to the volume of the hypothesis space; the Rashomon ratio is different from standard complexity measures from statistical learning theory. For a hierarchy of hypothesis spaces, the Rashomon ratio can help modelers to navigate the trade-off between simplicity and accuracy. In particular, we find empirically that a plot of empirical risk vs. Rashomon ratio forms a characteristic \u0393-shaped Rashomon curve, whose elbow seems to be a reliable model selection criterion. When the Rashomon set is large, models that are accurate - but that also have various other useful properties - can often be obtained. These models might obey various constraints such as interpretability, fairness, or monotonicity.","author":[{"family":"Semenova"},{"family":"Rudin"},{"family":"Parr"}],"id":"unknown","issued":{"date-parts":[[2020,3,27]]},"language":"en","title":"A study in Rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning","type":"article"}