"Concentration of measure" is a phenomenon in probability theory where, roughly speaking, any set which contains a substantial fraction of the probability can be expanded just a little to yield a set containing most of the probability. Another way to say this is that, given any reasonably continuous function, the probability that it deviates from its mean is exponentially small, and the exponential rate does not depend on the precise function. This makes concentration of measure results extremely useful for questions involving the estimation of complicated and ugly functions. The classical work in this area proves concentration-of-measure for various kinds of sequences of independent variables, but for real applications in statistics, machine learning or physics you'd want to be able to handle dependence. The natural way to do this would be to look at mixing processes, which are at least asymptotically independent.
Leo Kontorovich, who was one of the students in my stochastic processes class this past spring, now has a paper summarizing his work on, precisely, concentration of measure for mixing sequences:
Since I'll be teaching stochastic processes again in the spring, I would very much like to claim that Leo wrote this paper as a direct result of having taken my class. But the truth is that Leo knew so much about this already that, so far from teaching him everything he knows, I learned almost all I know about concentration from him. But this is one of the real pleasures of teaching...
Enigmas of Chance; Corrupting the Young; Incestuous Amplification
Posted at October 16, 2006 14:00 | permanent link