"Robust Bayesian inference via coarsening" (Next Week at the Statistics Seminar)
Attention conservation notice: Only of interest if you (1) care about allocating
precise fractions of a whole belief over a set of mathematical models when you
know none of them is actually believable, and (2) will be in Pittsburgh on
Monday.
As someone who thinks Bayesian inference is only worth
considering under mis-specification, I find next week's first talk of intense
interest.
- Jeff Miller, "Robust Bayesian inference via coarsening" (arxiv:1506.06101)
- Abstract: The standard approach to Bayesian inference is based on
the assumption that the distribution of the data belongs to the chosen model
class. However, even a small violation of this assumption can have a large
impact on the outcome of a Bayesian procedure, particularly when the data set
is large. We introduce a simple, coherent approach to Bayesian inference that
improves robustness to small departures from the model: rather than
conditioning on the observed data exactly, one conditions on the event that the
model generates data close to the observed data, with respect to a given
statistical distance. When closeness is defined in terms of relative entropy,
the resulting "coarsened posterior" can be approximated by simply raising the
likelihood to a certain fractional power, making the method computationally
efficient and easy to implement in practice. We illustrate with real and
simulated data, and provide theoretical results.
- Time and place: 4 pm on Monday, 15 February 2016, in 125 Scaife Hall
As always, the talk is free and open to the public.
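The power-likelihood approximation mentioned in the abstract is easy enough to play with. Below is a minimal sketch in Python, not code from the paper: for a conjugate Gaussian model, raising the likelihood to a power ζ in (0, 1] just multiplies the data's contribution to the posterior precision by ζ. The specific choice ζ = α/(α + n), with α a user-set robustness parameter, is my gloss on the abstract's "certain fractional power"; under that assumption it caps the effective sample size near α, so the coarsened posterior stays honestly wide no matter how large n gets.

```python
import numpy as np

def power_posterior_gaussian(x, sigma=1.0, mu0=0.0, tau0=1.0, zeta=1.0):
    """Posterior for the mean of N(mu, sigma^2) data under a N(mu0, tau0^2)
    prior, with the likelihood raised to the fractional power zeta.
    zeta = 1 recovers the standard posterior; zeta < 1 "coarsens" it."""
    n = len(x)
    # Raising a Gaussian likelihood to the power zeta simply rescales the
    # precision the data contribute: n/sigma^2 becomes zeta * n/sigma^2.
    prec = 1.0 / tau0**2 + zeta * n / sigma**2
    mean = (mu0 / tau0**2 + zeta * np.sum(x) / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

rng = np.random.default_rng(0)
n = 10_000
# Slightly mis-specified data: a standard Gaussian contaminated with
# 5% outliers drawn from N(5, 1).
x = np.where(rng.random(n) < 0.05,
             rng.normal(5.0, 1.0, n),
             rng.normal(0.0, 1.0, n))

alpha = 50.0                # robustness parameter (assumed parameterization)
zeta = alpha / (alpha + n)  # fractional power; effective sample size ~ alpha

print(power_posterior_gaussian(x, zeta=1.0))   # standard posterior: very tight
print(power_posterior_gaussian(x, zeta=zeta))  # coarsened: wider uncertainty
```

Run on this mildly contaminated sample, the standard posterior concentrates tightly around a value pulled away from zero by the outliers, with confidence that only grows with n; the coarsened posterior sits in roughly the same place but keeps its error bars wide, which is exactly the abstract's point about large data sets amplifying small violations.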
Enigmas of Chance; Bayes, Anti-Bayes
Posted at February 10, 2016 00:24 | permanent link