Confidence Sets, Confidence Intervals
Last update: 08 Dec 2024 00:05; first version: 31 August 2022
This is, to my mind, one of the more beautiful and useful ideas in statistics, but also one of the more tricky. (I might admire the idea more because of the trickiness.)
We have some parameter of a stochastic model we want to learn about, proverbially \( \theta \), which lives in the parameter space \( \Theta \). We observe random data, say \( X \). The distribution of \( X \) changes with \( \theta \), so the probability law is \( P_{\theta} \). Our game is one of "statistical inference", i.e., we look at \( X \) and make a guess about \( \theta \) on that basis. One type of guess would be an exact value for \( \theta \), a point estimate. But we'd basically never expect any point estimate to be exactly right, and we'd like to be able to say something about the uncertainty. A level \( \alpha \) confidence set is a random set of parameter values \( C_{\alpha} \subseteq \Theta \) which contains the true parameter value, whatever it might happen to be, with probability \( \alpha \) (at least): \[ \min_{\theta \in \Theta}{P_{\theta}(\theta \in C_{\alpha})} \geq \alpha \] We say that \( C_{\alpha} \) has coverage level \( \alpha \).
Quibbles:
- It's (pragmatically) implied that the coverage probability is \( =\alpha \) for at least some \( \theta \); if the probability is \( > \alpha \) for all \( \theta \), we say the confidence set is "conservative".
- If you know enough to quibble about "min" vs. "inf", you also know what I meant.
- \( C_{\alpha} \) is really \( C_{\alpha}(X) \), a (measurable) function of the data, but I am trying to keep the notation under control.
- In many situations there will be other ("nuisance") parameters we don't care about, canonically \( \psi \), and then we have to consider the worst case over both \( \theta \) and \( \psi \) simultaneously, even if we really only want to draw inference about \( \theta \).
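To make the coverage guarantee concrete, here is a minimal simulation sketch in Python (my choice of language; the page itself has no code). It checks the textbook \( z \)-interval for a Gaussian mean with known variance: the empirical coverage should come out near \( \alpha = 0.95 \) whatever the true \( \theta \) happens to be. The particular values of \( \theta \), \( \sigma \), and \( n \) are arbitrary illustrative choices.

```python
# Coverage check for the usual z-interval for a Gaussian mean with known sigma.
# (Illustrative sketch only; theta, sigma, n are arbitrary choices.)
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.95           # coverage level, matching the notation above
z = 1.959964           # standard-normal quantile for (1 + alpha)/2
sigma, n, reps = 1.0, 50, 100_000

for theta in (-2.0, 0.0, 3.7):            # a few candidate "true" values
    X = rng.normal(theta, sigma, size=(reps, n))
    xbar = X.mean(axis=1)
    half = z * sigma / np.sqrt(n)
    covered = (xbar - half <= theta) & (theta <= xbar + half)
    print(theta, covered.mean())          # should be close to 0.95 for every theta
```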
Either the confidence set contains the truth, or we were really unlucky
Now, confidence sets are notoriously hard for learners to wrap their minds around, but I have a way of explaining them which seems to work when I teach, and so I might as well share.
When I construct a confidence set from our data, I am offering you, the reader, a dilemma: Either
- the true parameter value is in the confidence set \( C_{\alpha} \), or
- we were very unlucky, and we got data that was very improbable (probability \( \leq 1-\alpha \)) and unrepresentative under all values of the parameter.
(More strictly, there is really a trilemma here:
- the true parameter value is in the confidence set \( C_{\alpha} \), or
- we were very unlucky, and we got data that was very improbable (probability \( \leq 1-\alpha \)) and unrepresentative under all values of the parameter, or
- the model we're using to calculate probabilities is wrong.)
The confidence set is every parameter value we can't reject
At this point a very reasonable question is to ask how on Earth we're supposed to find such a set. Here is one very general procedure. Suppose that we can statistically test whether \( \theta = \theta_0 \). That is, we have some function \( T(X;\theta_0) \) which returns 0 if \( X \) looks like it could have come from \( \theta=\theta_0 \), and returns 1 otherwise. More concretely, \( P_{\theta_0}{(T(X;\theta_0) = 1)} \leq 1-\alpha \), so the "false positive" rate or "false rejection" rate is at most \( 1-\alpha \). (That is, the "size" of the test is at most \( 1-\alpha \), over all parameter values.) Now building \( C_{\alpha} \) is very easy: \[ C_{\alpha}(X) = \left\{ \theta \in \Theta ~ : ~ T(X;\theta) = 0 \right\} \] (Here I am being explicit that \( C_{\alpha} \) is a function of the data \( X \), which I otherwise suppress in the notation.)
In words: the confidence set consists of all the parameter values compatible with the data, i.e., all the parameter values we can't reject (at the acceptably low error rate \( 1-\alpha \)).
This construction is called "inverting the hypothesis test". Clearly, any hypothesis test gives us a confidence set, by inversion. Equally clearly, any confidence set can be used to give a hypothesis test: to test whether \( \theta = \theta_0 \), see whether \( \theta_0 \in C_{\alpha} \); the false-rejection rate of this test is, by construction, \( \leq 1-\alpha \).
It is a little less clear that every confidence set can be constructed by inverting some test, but it's nonetheless true, and a textbook result (see, e.g., Casella and Berger, or Schervish). This is called the "duality between hypothesis tests and confidence sets".
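As an illustration of the inversion recipe (a hedged sketch, not anything from the references): build a confidence set for a binomial proportion by keeping every candidate \( p_0 \) that an exact two-sided binomial test fails to reject at error rate \( 1-\alpha \). The data (37 successes in 100 trials) and the grid of candidate values are made up for the example; it assumes scipy \( \geq \) 1.7 for `binomtest`.

```python
# Confidence set for a binomial proportion by test inversion:
# keep every p0 that an exact two-sided binomial test fails to reject.
import numpy as np
from scipy.stats import binomtest   # requires scipy >= 1.7

alpha = 0.95                        # coverage level, as in the text
x, n = 37, 100                      # made-up data: 37 successes in 100 trials

grid = np.linspace(0.001, 0.999, 999)
keep = [p0 for p0 in grid if binomtest(x, n, p0).pvalue > 1 - alpha]

print(min(keep), max(keep))         # the retained values form an interval here
```

In this example the un-rejected values happen to form an interval, anticipating the "Confidence Intervals" discussion below.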
Consistency and Evidence
Now at this point you might feel we're done, because we've got a range of parameter values which we know is right with high probability. Of course you might worry about what that probability means for any particular case, but there's no special difficulty about that here, as opposed to (say) predicting the risk of rain tomorrow. But there is an additional wrinkle here, which has to do with consistency, or convergence to the truth.
Suppose we get larger and larger data sets, \( X_n \) with \( n \rightarrow \infty \). For each one, we construct a confidence set \( C_{\alpha}(X_n) \). What we would like to have happen is for these sets to get smaller and smaller, and to converge on the true value, \( C_{\alpha}(X_n) \rightarrow \{\theta\} \). That is, if the true value is \( \theta \) and \( \theta_0 \neq \theta \), we'd like \( P_{\theta}(\theta_0 \in C_{\alpha}(X_n)) \rightarrow 0 \) as \( n \rightarrow \infty \). If we think about things in terms of the hypothesis test, we'd like the probability of correctly rejecting the wrong parameter values to go to 1 as we get more and more data (at constant false-rejection probability). So: inverting a consistent hypothesis test gives us a consistent confidence set (one which converges on the truth), and vice versa.
If we have a consistent confidence set, then, I claim, we've got evidence that the true parameter value is in the set.
(When a parameter is only partially identified, then inverting consistent tests will give confidence regions converging to the set of observationally-equivalent parameter values, rather than to a single point.)
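Here is a small simulation sketch of that consistency claim, again using the Gaussian-mean \( z \)-interval with made-up parameter values: the true \( \theta \) keeps its \( \approx 95\% \) coverage at every \( n \), while a fixed wrong value \( \theta_0 \) is retained less and less often as \( n \) grows.

```python
# Consistency sketch: as n grows, a fixed wrong value theta0 is retained
# less and less often, while the true theta keeps roughly 95% coverage.
# (theta, theta0, sigma are arbitrary illustrative choices.)
import numpy as np

rng = np.random.default_rng(1)
alpha, z, sigma = 0.95, 1.959964, 1.0
theta, theta0, reps = 0.0, 0.3, 20_000

for n in (10, 100, 1000):
    # sampling distribution of the mean of n observations
    xbar = rng.normal(theta, sigma / np.sqrt(n), size=reps)
    half = z * sigma / np.sqrt(n)
    cover_true = ((xbar - half <= theta) & (theta <= xbar + half)).mean()
    cover_wrong = ((xbar - half <= theta0) & (theta0 <= xbar + half)).mean()
    print(n, cover_true, cover_wrong)   # last column should sink toward 0
```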
Confidence Intervals
I have written about confidence "sets" because the basic logic is very abstract and doesn't rely on any geometric properties of the parameter space. But in many situations the parameters we're interested in are real numbers, and the test functions \( T(X;\theta) \) are piece-wise constant in \( \theta \). This is the sort of situation where the confidence set we'll get by inverting a test is an interval. In a few Euclidean dimensions, we might get a ball or box, or anyway some sort of compact, connected region. But in many of the situations I'm interested in, the parameter of interest is something like a function or a network, and "interval" just isn't going to cut it.
Confidence Sets for Model Selection
There are many problems which are basically forms of model selection where it would be very nice to quantify uncertainty in the form of confidence sets. Examples include: the number of clusters in a mixture model; the number of factors in a factor model; which variables to include in a regression; the order of a Markov chain; the context tree of a variable-length Markov chain; the directed acyclic graph in a graphical causal model. Unfortunately it seems to me that we will usually only be able to give one-sided confidence sets, saying, in effect, "the process must have at least this much structure, but it could have infinitely more".
To see the issue, take mixture models. Suppose the data really did come from a \( k \)-cluster mixture model, \( f(x) = \sum_{i=1}^{k}{\alpha_i f(x;\theta_i)} \). I can approximate this arbitrarily closely using an \( m \)-cluster mixture, for any \( m > k \). The trick is just to reduce \( \alpha_1, \ldots, \alpha_k \) very slightly, and make \( \alpha_{k+1}, \ldots, \alpha_m \) very close to zero --- so close that it'd be unlikely to have actually drawn data from any of those clusters in the first \( n \) samples. Thus any \( k \)-cluster distribution is actually arbitrarily close to infinitely many distributions with arbitrarily more clusters. This is true for any sensible and relevant sense of distance between distributions --- Kullback-Leibler divergence, anything from information geometry, total variation, etc.
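A quick numerical sketch of that closeness claim, with arbitrary settings: compare a two-cluster Gaussian mixture with a three-cluster mixture that parks weight \( \epsilon = 10^{-4} \) on a far-away extra component. The total variation distance between them, computed on a grid, comes out on the order of \( \epsilon \), so no test run on a modest sample has any real power to tell them apart.

```python
# Numerical sketch: a 2-cluster Gaussian mixture and a 3-cluster mixture that
# parks weight eps on an extra component are nearly indistinguishable in total
# variation. (All settings here are arbitrary illustrative choices.)
import numpy as np
from scipy.stats import norm

xs = np.linspace(-20, 20, 40001)
dx = xs[1] - xs[0]

def density(weights, means):
    return sum(w * norm.pdf(xs, loc=m, scale=1.0) for w, m in zip(weights, means))

eps = 1e-4
f2 = density([0.6, 0.4], [-2.0, 2.0])                     # "true" 2-cluster mixture
f3 = density([0.6 * (1 - eps), 0.4 * (1 - eps), eps],     # 3-cluster near-twin
             [-2.0, 2.0, 10.0])

tv = 0.5 * np.sum(np.abs(f2 - f3)) * dx
print(tv)   # on the order of eps: far too small for any test to detect at small n
```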
Similarly, for factor models, I just make the loadings on the extra factors extremely small (but not zero). For Markov models, I make the conditional dependence on the remote past extremely small (but not zero). For variable selection in regression, I make the slope on the extra regressors extremely small (but not zero). For graphical causal models, I make the extra causal links extremely weak (but not zero). Because, in all these cases, the distribution of observables changes smoothly as these parameters are varied, but the model structure changes abruptly when various of these parameters hit zero, I don't think we ever get to rule out very complicated structures. (More exactly: there's no way to rule them out with any statistical power; we could always use Gygax tests.) We can rule out structures which are too simple to account for the data, and we can say that we have no need for the complicated ones, yet, but that's a one-sided confidence set. This moves us towards Occam's razor (particularly Kevin Kelly's version of it --- follow the link).
Formally: divide the over-all parameter vector \( \theta \) into the discrete, model-structure part \( \kappa \) and the remaining bits \( \eta \). Say that the true parameter is \( \theta^* = (\kappa^*, \eta^* ) \). What I've argued above is that for any \( \kappa^{\prime} > \kappa^* \), we can find an \( \eta^{\prime} \) such that, at any sample size \( n \), the distance between the distribution generated by \( (\kappa^{\prime}, \eta^{\prime}) \) and that generated by \( (\kappa^*, \eta^*) \) is arbitrarily small. Hence no test can have any power to reject \( (\kappa^{\prime}, \eta^{\prime}) \), hence no confidence set (with any power) can exclude it. Thus we cannot exclude \( \kappa=\kappa^{\prime} \), because it's compatible with the data for some value of \( \eta \). It's vital here that when we increase \( \kappa^* \) to \( \kappa^{\prime} \), we can make compensating changes to \( \eta \) so as to stay close in distribution to \( (\kappa^*, \eta^*) \). If that's ruled out for some reason, we're back in business.
Now, for variable selection, there is an apparent way out, which is the use of the lasso or similar. But the trick there is the assumption that the true regression coefficient vector is sparse. If we're sure that at most \( s \) of the \( p \) regressors have non-zero coefficients, \( s \ll p \), and we have in fact detected, say, \( \approx s \) non-zero coefficients, we can indeed be pretty sure that there aren't many more of them lurking around. (Or, at least, we can transfer some confidence from our sparsity assumption to our conclusion about which variables matter.) Exactly what would take the place of sparsity for other model-selection problems is something I should think through.
One final note on this sub-topic: Suppose that the true distribution really is a \( k \)-cluster mixture. Then we should be able to reject distributions where there are (say) 100 extra clusters of large weight, and we will become more and more able to reject them as we get more data. So those very complicated models with lots of extra structure will tend to become ones where the extra structure does less and less work. (Again, this gets us back towards Kelly-style Occam's razors.) If we try to form confidence sets for not just the model structure (here, number of clusters), but the whole model, those sets will work properly.
- See also:
- Bootstrapping, and Other Resampling Methods (for one particularly useful way of building confidence sets)
- Conformal Prediction
- Gygax Tests
- Nonparametric Confidence Sets for Functions
- Partial Identification
- Post-Model-Selection Inference
- Recommended, big picture but textbook treatments:
- George Casella and R. L. Berger, Statistical Inference
- Mark J. Schervish, Theory of Statistics
- Recommended, close-ups:
- Don Fraser, "Is Bayes posterior just quick and dirty confidence", Statistical Science 26 (2011): 299--316, arxiv:1112.5582 [See also the discussions by others, and Fraser's reply. My answer to the question posed in Fraser's title is "yes", or rather "YES!"]
- Tore Schweder and Nils Lid Hjort, Confidence, Likelihood, Probability: Statistical Inference with Confidence Distributions [I need to think very hard about the meaning and utility of their "confidence distributions"]
- Recommended, big picture, historical:
- Trygve Haavelmo, "The Probability Approach in Econometrics", Econometrica 12 supplement (1944): iii--115
- Jerzy Neyman, "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability", Philosophical Transactions of the Royal Society of London A 236 (1937): 333--380
- Recommended with some reservations:
- Min-ge Xie, Peng Wang, "Repro Samples Method for Finite- and Large-Sample Inferences", arxiv:2206.06421 [Comments in their own notebook.]
- Cannot altogether recommend:
- Rink Hoekstra, Richard D. Morey, Jeffrey N. Rouder and Eric-Jan Wagenmakers, "Robust misinterpretation of confidence intervals", Psychonomic Bulletin and Review 21 (2014): 1157--1164 [Comment, explaining my reasoning for this categorization]
- To read:
- Heng Lian, "Empirical Likelihood Confidence Intervals for Nonparametric Functional Data Analysis", arxiv:0904.0843
- Jana Jankova, Sara van de Geer, "Confidence intervals for high-dimensional inverse covariance estimation", arxiv:1403.6752
- Stephen M. S. Lee, "Hybrid confidence regions based on data depth", Journal of the Royal Statistical Society B 74 (2012): 91--109
- Kesar Singh, Minge Xie, William E. Strawderman, "Confidence distribution (CD) -- distribution estimator of a parameter", pp. 132--150 in Regina Liu, William Strawderman and Cun-Hui Zhang (eds.), Complex Datasets and Inverse Problems: Tomography, Networks and Beyond
- Amy Willis, "Confidence sets for phylogenetic trees", Journal of the American Statistical Association 114 (2019): 235--244, arxiv:1607.08288
- Amy Willis and Rayna Bell, "Uncertainty in Phylogenetic Tree Estimates", Journal of Computational and Graphical Statistics 27 (2018): 542--552, arxiv:1611.03456