A risk profile for information fusion algorithms
by
Kenric P. Nelson, Brian J. Scannell, Herbert Landau
2011
Abstract
E.T. Jaynes, originator of the maximum entropy interpretation of statistical
mechanics, emphasized that there is an inevitable trade-off between the
conflicting requirements of robustness and accuracy for any inferencing
algorithm. This is because robustness requires discarding information in
order to reduce the sensitivity to outliers. The principle of nonlinear
statistical coupling, which is an interpretation of the Tsallis entropy
generalization, can be used to quantify this trade-off. The coupled-surprisal,
-ln_k(p) = -(p^k - 1)/k, is a generalization of Shannon surprisal or the
logarithmic scoring rule, given a forecast p of a true event by an inferencing
algorithm. The coupling parameter k=1-q, where q is the Tsallis entropy index,
is the degree of nonlinear coupling between statistical states. Positive
(negative) values of nonlinear coupling decrease (increase) the surprisal
information metric and thereby bias the risk in favor of decisive (robust)
algorithms relative to the Shannon surprisal (k=0). We show that translating
the average coupled-surprisal to an effective probability is equivalent to
using the generalized mean of the true event probabilities as a scoring rule.
The metric is used to assess the robustness, accuracy, and decisiveness of a
fusion algorithm. We use a two-parameter fusion algorithm to combine input
probabilities from N sources. The generalized mean parameter alpha varies the
degree of smoothing, and raising the result to a power N^beta, with beta
between 0 and 1, provides a model of correlation among the sources.
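
The claimed equivalence between the average coupled-surprisal and the
generalized mean of the true-event probabilities can be checked numerically.
Below is a minimal Python sketch; the function names and forecast values are
illustrative choices, not taken from the paper.

import math

def coupled_surprisal(p, k):
    # Coupled surprisal -ln_k(p) = -(p^k - 1)/k; Shannon surprisal -ln(p) at k = 0.
    if k == 0.0:
        return -math.log(p)
    return -(p**k - 1.0) / k

def effective_probability(probs, k):
    # Invert the coupled logarithm on the average surprisal; algebraically this
    # yields the generalized (power) mean of the true-event probabilities.
    mean_s = sum(coupled_surprisal(p, k) for p in probs) / len(probs)
    if k == 0.0:
        return math.exp(-mean_s)            # geometric mean at k = 0
    return (1.0 - k * mean_s) ** (1.0 / k)  # power mean with exponent k

probs = [0.9, 0.6, 0.2]  # forecasts assigned to the true events (illustrative)
for k in (-0.5, 0.0, 0.5):
    if k == 0.0:
        gen_mean = math.exp(sum(math.log(p) for p in probs) / len(probs))
    else:
        gen_mean = (sum(p**k for p in probs) / len(probs)) ** (1.0 / k)
    print(k, effective_probability(probs, k), gen_mean)  # the two values agree

Negative k (robust) weights the low forecasts more heavily, pulling the
effective probability down, while positive k (decisive) weights the high
forecasts.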
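
The two-parameter fusion rule can be sketched in the same spirit. Assuming, as
an illustration rather than the paper's exact formulation, that each
hypothesis receives the generalized mean of the N source probabilities raised
to the power N^beta and the results are normalized over hypotheses:

import math

def generalized_mean(values, alpha):
    # Power mean with exponent alpha; the geometric mean at alpha = 0.
    n = len(values)
    if alpha == 0.0:
        return math.exp(sum(math.log(v) for v in values) / n)
    return (sum(v**alpha for v in values) / n) ** (1.0 / alpha)

def fuse(source_probs, alpha, beta):
    # source_probs[i][h] is source i's probability for hypothesis h.
    n = len(source_probs)
    fused = [generalized_mean([src[h] for src in source_probs], alpha) ** (n**beta)
             for h in range(len(source_probs[0]))]
    total = sum(fused)
    return [f / total for f in fused]

sources = [[0.8, 0.2], [0.7, 0.3], [0.6, 0.4]]  # three sources, two hypotheses
print(fuse(sources, alpha=0.0, beta=1.0))  # product rule: independent sources
print(fuse(sources, alpha=1.0, beta=0.0))  # simple average: fully correlated

With alpha = 0 and beta = 1 this reduces to the Bayesian product rule for
independent sources, while alpha = 1 and beta = 0 gives a plain average,
consistent with reading beta as a model of inter-source correlation.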
Archived Files and Locations
application/pdf 416.8 kB
arxiv.org (repository) | web.archive.org (webarchive)
arXiv: 1105.5594v1