## Deviation Inequalities in Probability Theory

*27 Feb 2017 16:30*

The laws of large numbers say that averages taken over large samples converge on expectation values. But these are asymptotic statements which say nothing about what happens for samples of any particular size. A deviation inequality, by contrast, is a result which says that, for realizations of such-and-such a stochastic process, the sample value of this functional deviates by so much from its typical value with no more than a certain probability: \( \Pr{\left(\left|f(X_1, X_2, \ldots X_n) - \mathbb{E}[f]\right| > h \right)} < r(n,h,f) \), where the rate function \( r \) has to be given explicitly, and may depend on the true joint distribution of the \( X_i \) (though it's more useful if it doesn't depend on that very much). (And of course one could compare to the median rather than the mean, or just look at fluctuations above the typical value rather than to the other side, or get within a certain factor of the typical value rather than a certain distance, etc.) The rate should be decreasing in \( h \) and in \( n \).

An elementary example is "Markov's inequality": If \( X \) is a non-negative random variable with a finite mean, then \[ \Pr{\left(X > h \right)} < \frac{\mathbb{E}[X]}{h} ~. \] One can derive many other deviation inequalities from Markov's inequality by taking \( X = g(Y) \), where \( Y \) is another random variable and \( g \) is some suitable non-negative-valued function.
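As a quick numerical illustration of Markov's inequality (a sketch of my own, not from any of the references below): take \( X \) exponentially distributed with mean 1, a convenient stand-in for an arbitrary non-negative random variable, and compare the simulated tail probability with the bound \( \mathbb{E}[X]/h \). All the variable names here are just illustrative.

```python
import random

# Check Markov's inequality P(X > h) <= E[X]/h by simulation,
# using an exponential random variable with mean 1 as the
# non-negative X.  (Illustrative choice; any non-negative
# distribution with a finite mean would do.)
random.seed(0)
n_samples = 100_000
xs = [random.expovariate(1.0) for _ in range(n_samples)]
mean_x = sum(xs) / n_samples

for h in [1.0, 2.0, 5.0]:
    empirical = sum(x > h for x in xs) / n_samples
    bound = mean_x / h
    print(f"h={h}: P(X > h) ~ {empirical:.4f} <= bound {bound:.4f}")
```

For the exponential, the true tail is \( e^{-h} \), so the bound is loose but valid; its virtue is that it needs nothing beyond the mean.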

For instance if \( Y \) has a finite variance \( v \), then \[ \Pr{\left(|Y-\mathbb{E}[Y]| > h \right)} < \frac{v}{h^2} ~. \] This is known as "Chebyshev's inequality". (Exercise: derive Chebyshev's inequality from Markov's inequality. Since Markov was in fact Chebyshev's student, it would seem that the logical order here reverses the historical one, though guessing at priority from eponyms is always hazardous.) Suppose that \( X_1, X_2, \ldots \) are random variables with a common mean \( m \) and variance \( v \), and \( Y \) is the average of the first \( n \) of these. Then Chebyshev's inequality tells us that \[ \Pr{\left(|Y - m| > h\right)} < \frac{\mathrm{Var}[Y]}{h^2} ~. \] If the \( X_i \) are uncorrelated (e.g., independent), then \( \mathrm{Var}[Y] = v/n \), so the probability that the sample average differs from the expectation by \( h \) or more goes to zero as \( n \) grows, no matter how small we make \( h \). This is precisely the weak law of large numbers. If the \( X_i \) are correlated, but nonetheless \( \mathrm{Var}[Y] \) goes to zero as \( n \) grows (generally because correlations decay), then we get an ergodic theorem. The rate of convergence here however is not very good, just \( O(n^{-1}) \).
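The \( v/(n h^2) \) bound can likewise be checked by simulation. A minimal sketch, using IID Uniform(0,1) variables (mean \( 1/2 \), variance \( 1/12 \)) purely as an illustrative choice:

```python
import random

# Compare the Chebyshev bound v/(n h^2) with the simulated
# probability that the sample mean Y of n IID Uniform(0,1)
# variables deviates from 1/2 by more than h.
random.seed(0)
h = 0.05
var_x = 1.0 / 12.0
n_reps = 5_000

results = {}
for n in [10, 100, 1000]:
    hits = 0
    for _ in range(n_reps):
        y = sum(random.random() for _ in range(n)) / n
        if abs(y - 0.5) > h:
            hits += 1
    empirical = hits / n_reps
    bound = var_x / (n * h * h)   # Var[Y] = v/n for uncorrelated X_i
    results[n] = (empirical, bound)
    print(f"n={n}: P(|Y - 1/2| > {h}) ~ {empirical:.4f}, bound {bound:.4f}")
```

Note that for small \( n \) the bound exceeds 1 and so says nothing; and even when it bites, it shrinks only like \( 1/n \), which is the point of moving on to exponential bounds.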

Since \( e^u \) is a monotonically increasing function of \( u \), for any positive \( t \), \( X > h \) if and only if \( e^{tX} > e^{th} \), so applying Markov's inequality to \( e^{tX} \) gives an exponential inequality, \[ \Pr{\left(X > h\right)} < e^{-th} \mathbb{E}\left[e^{tX}\right] ~. \] Notice that the first term in the bound does not depend on the distribution of \( X \), unlike the second term, which doesn't depend on the scale of the deviation \( h \). We are in fact free to pick whichever \( t \) gives us the tightest bound. The quantity \( \mathbb{E}\left[e^{tX}\right] \) is called the "moment generating function" of \( X \); let's abbreviate it \( M_X(t) \). It can fail to exist if some moments are infinite. (Write the power series for \( e^u \) and take expectations term by term to see all this.) It has however the very nice property that when \( X_1 \) and \( X_2 \) are independent, \( M_{X_1+X_2}(t) = M_{X_1}(t) M_{X_2}(t) \). From this it follows that if \( Y \) is the sum of \( n \) independent and identically distributed copies of \( X \), \( M_Y(t) = {(M_X(t))}^{n} \). Thus \[ \Pr{\left(Y > h\right)} < e^{-th} (M_X(t))^{n} ~. \] If \( Z = Y/n \), the sample mean, this in turn gives \[ \Pr{\left(Z > h\right)} = \Pr{\left(Y > hn\right)} < e^{-thn} (M_X(t))^{n} ~. \] So we can get exponential rates of convergence for the law of large numbers from this. [Students who took the CMU statistics department's probability qualifying exam in 2010 now know who wrote problem 9.] Again, the restriction to IID random variables is not really essential; allowing dependence just means that the moment generating functions don't factor exactly, but if they almost factor then we can get results of the same form. (Often, we end up with \( n \) being replaced by \( n/n_0 \), where \( n_0 \) is something like how long it takes dependence to decay to trivial levels.)
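To make the "pick whichever \( t \) gives the tightest bound" step concrete, here is a minimal sketch for the sample mean of \( n \) IID Bernoulli(1/2) variables, whose MGF is \( M_X(t) = 1 - p + p e^t \). The Bernoulli choice and the grid search over \( t \) are illustrative assumptions, not anything from the references.

```python
import math
import random

# Exponential (Chernoff-style) bound P(Z > h) <= e^{-thn} (M_X(t))^n
# for the sample mean Z of n IID Bernoulli(p) variables, with the free
# parameter t chosen by a simple grid search over positive values.
random.seed(0)
p, n, h = 0.5, 100, 0.6

def chernoff_bound(t):
    m_x = (1 - p) + p * math.exp(t)        # Bernoulli MGF
    return math.exp(-t * h * n) * m_x ** n

best = min(chernoff_bound(0.01 * k) for k in range(1, 500))

# Compare with a simulated estimate of the deviation probability.
reps = 20_000
hits = sum(sum(random.random() < p for _ in range(n)) / n > h
           for _ in range(reps))
print(f"P(Z > {h}) ~ {hits / reps:.4f}, optimized bound {best:.4f}")
```

Unlike the Chebyshev bound, this one shrinks exponentially in \( n \) (here, like \( e^{-n \, \mathrm{KL}(h \| p)} \) at the optimal \( t \)), though it is still generally loose by a polynomial factor.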

I don't feel like going into the reasoning behind the other common deviation bounds — Bernstein, Chernoff, Hoeffding, Azuma, McDiarmid, etc. — because I feel like I've given enough of the flavor already. I am using this notebook as, actually, a notebook, more specifically a place to collect references on deviation inequalities, especially ones that apply to collections of dependent random variables. Results here typically appeal to various notions of mixing or decay of correlations, as found in ergodic theory.

See also: Concentration of Measure; Ergodic Theory; Large Deviations; Learning Theory; Probability

- Recommended, big picture:
- Olivier Bousquet, Stéphane Boucheron and Gábor Lugosi, "Introduction to Statistical Learning Theory" [Gives a very nice review of many deviation inequalities, with references.]
- Nicolò Cesa-Bianchi and Gábor Lugosi, Prediction, Learning, and Games [Provides exemplary proofs in the appendix. Mini-review]

- Recommended, close-ups:
- Daniel Berend and Aryeh Kontorovich, "On the Convergence of the Empirical Distribution", arxiv:1205.6711
- Daniel Berend, Peter Harremoës, Aryeh Kontorovich, "Minimum KL-divergence on complements of L1 balls", arxiv:1206.6544
- G. G. Bosco, F. P. Machado and Thomas Logan Ritchie, "Exponential Rates of Convergence in the Ergodic Theorem: A Constructive Approach", Journal of Statistical Physics **139**(2010): 367--374
- J.-R. Chazottes, P. Collet, C. Kuelske and F. Redig, "Deviation inequalities via coupling for stochastic processes and random fields", math.PR/0503483 [Very cool]
- Xinjia Chen, "A New Generalization of Chebyshev Inequality for Random Vectors", arxiv:0707.0805 [A very natural bound using the inverse variance matrix (= Mahalanobis distance); see Navarro for a truly clever proof]
- Ioannis Kontoyiannis, L. A. Lastras-Montano and S. P. Meyn, "Relative Entropy and Exponential Deviation Bounds for General Markov Chains", ISIT 2005 [PDF reprint via Prof. Meyn]
- Aryeh Kontorovich and Anthony Brockwell, "A Strong Law of Large Numbers for Strongly Mixing Processes", arxiv:0807.4665
- Aryeh Kontorovich and Roi Weiss, "Uniform Chernoff and Dvoretzky-Kiefer-Wolfowitz-type inequalities for Markov chains and related processes", arxiv:1207.4678
- Jorge Navarro [Two wonderfully elementary proofs of Chen's multivariate Chebyshev inequality]
- "A simple proof for the multivariate Chebyshev inequality", arxiv:1305.5646
- "A very simple proof of the multivariate Chebyshev's inequality", Communications in Statistics: Theory and Methods **forthcoming**(2014) [preprint]

- Iosif Pinelis, "Between Chebyshev and Cantelli", arxiv:1011.6065 [Cute, with an appealing proof]
- Yongqiang Tang, "A Hoeffding-Type Inequality for Ergodic Time Series", Journal of Theoretical Probability **20**(2007): 167--176 [PDF preprint]

- To read:
- Yannick Baraud, "A Bernstein-type inequality for suprema of random processes with an application to statistics", arxiv:0904.3295
- Olivier Catoni
- "High confidence estimates of the mean of heavy-tailed real random variables", arxiv:0909.5366
- "Challenging the empirical mean and empirical variance: a deviation study", arxiv:1009.2048

- Patrick Cattiaux and Arnaud Guillin, "Deviation bounds for additive functionals of Markov process", math.PR/0603021 [Non-asymptotic bounds for the probability that time averages deviate from expectations with respect to the invariant measure, when the process is stationary and ergodic and the invariant measure is reasonably regular.]
- Jérôme Dedecker, Florence Merlevède, Magda Peligrad, Sergey Utev, "Moderate deviations for stationary sequences of bounded random variables", arxiv:0711.3924
- Xiequan Fan, Ion Grama, Quansheng Liu, "On Hoeffding's and Talagrand's inequalities", arxiv:1206.2501
- Marian Grendar Jr. and Marian Grendar, "Chernoff's bound forms," math.PR/0306326
- A. Guionnet and B. Zegarlinski, Lectures on Logarithmic Sobolev Inequalities [120 pp. PDF]
- Vladislav Kargin, "A large deviation inequality for vector functions on finite reversible Markov Chains", Annals of Applied Probability **17**(2007): 1202--1221, arxiv:0508538
- Carlos A. Leon and Francois Perron, "Optimal Hoeffding bounds for discrete reversible Markov chains", math.PR/0405296
- Dasha Loukianova, Oleg Loukianov, Eva Loecherbach, "Polynomial bounds in the Ergodic Theorem for positive recurrent one-dimensional diffusions and integrability of hitting times", arxiv:0903.2405 [*non-asymptotic* deviation bounds from bounds on moments of recurrence times]
- P. Major
- "On a multivariate version of Bernstein's inequality", math.PR/0411287
- "A multivariate generalization of Hoeffding's inequality", math.PR/0411288

- Florence Merlevède, Magda Peligrad, Emmanuel Rio, "A Bernstein type inequality and moderate deviations for weakly dependent sequences", Probability Theory and Related Fields **151**(2011): 435--474, arxiv:0902.0582
- Blazej Miasojedow, "Hoeffding's inequalities for geometrically ergodic Markov chains on general state space", arxiv:1201.2265
- Ohad Shamir, "A Variant of Azuma's Inequality for Martingales with Subgaussian Tails", arxiv:1110.2392
- Ted Theodosopoulos, "A Reversion of the Chernoff Bound", math.PR/0501360
- Lutz Warnke, "On the method of typical bounded differences", arxiv:1212.5796

(Thanks to Michael Kalgalenko for typo-spotting)