Empirical Process Theory

Last update: 08 Dec 2024 00:44
First version: 7 September 2009

(I first used the next few paragraphs as part of a review of Pollard's book of lecture notes. I have no shame about self-plagiarism.)

The simplest sort of empirical process arises when trying to estimate a probability distribution from sample data. The difference between the empirical distribution function \( F_n(x) \) and the true distribution function \( F(x) \) converges to zero at each point \( x \) (by the law of large numbers), and — this is non-trivial — the maximum difference between the empirical and true distribution functions converges to zero, too (by the Glivenko-Cantelli theorem, a uniform law of large numbers). The "empirical process" \( E_n(x) \) is the re-scaled difference, \( n^{1/2} \left[ F_n(x) - F(x) \right] \), and it converges in distribution to a Gaussian stochastic process that depends only on the true distribution (by the functional central limit theorem). Empirical process theory is concerned with generalizing this sort of material to other stochastic processes determined by random samples and indexed by infinite classes (like the real line, or the class of all Borel sets on the line, or some space parameterizing a regression model). The typical concerns are proving uniform limit theorems and establishing distributional limits. (For instance, one might want to prove that the errors of all possible regression models in some class come close to their expected errors, so that maximum-likelihood or least-squares estimation is consistent. [For more on that line of thought, see Sara van de Geer's book.]) This endeavor is closely linked to Vapnik-Chervonenkis-style learning theory, and in fact one can see VC theory as an application of empirical process theory.
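
To make the two convergence statements concrete, here is a minimal simulation sketch (my illustration, not part of the original review; it assumes Python with numpy and scipy available, and the function name empirical_process is my own invention). It forms the empirical CDF of a standard-Gaussian sample, evaluates \( E_n(x) = n^{1/2} [F_n(x) - F(x)] \) on a grid, and checks that \( \sup_x |F_n(x) - F(x)| \) shrinks as \( n \) grows (Glivenko-Cantelli), while \( \sup_x |E_n(x)| \) stays of order one (the functional-CLT scaling).

    # Minimal sketch (assumes numpy and scipy): simulate the empirical
    # process E_n(x) = sqrt(n) * (F_n(x) - F(x)) for a Gaussian sample.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(42)

    def empirical_process(sample, grid):
        """E_n evaluated on `grid`, taking the standard Gaussian as truth."""
        n = len(sample)
        # Empirical CDF: fraction of the sample at or below each grid point.
        F_n = np.searchsorted(np.sort(sample), grid, side="right") / n
        return np.sqrt(n) * (F_n - norm.cdf(grid))

    grid = np.linspace(-4.0, 4.0, 801)
    for n in (100, 10_000, 1_000_000):
        E_n = empirical_process(rng.standard_normal(n), grid)
        # sup|F_n - F| = max|E_n|/sqrt(n) shrinks toward 0 (Glivenko-Cantelli),
        # while max|E_n| itself hovers at O(1) (functional CLT scaling).
        print(n, np.abs(E_n).max() / np.sqrt(n), np.abs(E_n).max())

(After the time-change \( t = F(x) \), the Gaussian limit of \( E_n \) is the standard Brownian bridge, which is why the Kolmogorov-Smirnov statistic \( \sup_x |E_n(x)| \) has a distribution-free limiting law for continuous \( F \).)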

As usual, I am most interested in results for dependent data.

See also: Concentration of Measure

