Convergence of Stochastic Processes

25 Apr 2016 09:27

By which I mean the convergence of sequences of whole processes, i.e., random functions — not the convergence of averages along a process, which is the subject of ergodic theory, and something I understand better. (Of course these two subjects are connected, the bridge being empirical process theory.) I am especially interested in convergence in distribution, a.k.a. weak convergence, though certainly not averse to stronger modes of convergence.

A particularly important class of results consists of what are called "functional central limit theorems", or "Donsker theorems", or even just "invariance principles". (I hate the last name, but we seem to be stuck with it.) These are all assertions that the processes, appropriately re-scaled, converge on a fixed limiting Gaussian process, such as the Wiener process or the Brownian bridge. And just as the central limit theorem for sample averages sometimes gives you a Lévy distribution rather than a Gaussian, sometimes you get convergence to a Lévy process rather than a Gaussian process...
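A minimal numerical sketch of the Donsker picture (my own illustration, not from any particular reference): take a random walk with iid mean-zero, unit-variance steps, interpolate its partial sums on [0, 1], and scale by 1/sqrt(n). The functional CLT says these rescaled paths converge in distribution to a standard Wiener process; here I only check the one-dimensional marginal at t = 1, which should look like N(0, 1).

```python
# Sketch of the functional CLT: rescaled random-walk paths
# W_n(k/n) = S_k / sqrt(n) approximate a Wiener process on [0, 1].
import numpy as np

def rescaled_walk(n, rng):
    """Path of a rescaled random walk at times k/n, k = 0..n,
    built from iid Rademacher (+/-1) steps, scaled by 1/sqrt(n)."""
    steps = rng.choice([-1.0, 1.0], size=n)
    return np.concatenate([[0.0], np.cumsum(steps)]) / np.sqrt(n)

rng = np.random.default_rng(0)
n, n_paths = 1000, 5000
# Endpoint W_n(1) over many independent paths; by the FCLT this
# marginal should be approximately N(0, 1).
endpoints = np.array([rescaled_walk(n, rng)[-1] for _ in range(n_paths)])
print(endpoints.mean(), endpoints.var())
```

The choice of Rademacher steps is arbitrary; any iid mean-zero, finite-variance step distribution (standardized to unit variance) gives the same Wiener limit, which is the whole point of the theorem.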

A second important class of results has to do with the convergence of discrete-time, and often discrete-valued, Markov chains to continuous-time Markov processes, either diffusions (which solve stochastic differential equations) or flows (which solve ordinary differential equations, i.e., deterministic dynamical systems).
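The chain-to-flow case can be sketched concretely with a density-dependent birth-death chain in the style of Kurtz's fluid-limit theorems. The specific rates below (logistic births, linear deaths) are my own illustrative choices: a chain on {0, ..., n}, taking time-steps of size 1/n, whose scaled state X/n tracks the solution of the ODE dx/dt = b(x) - d(x) as n grows.

```python
# Sketch of a fluid limit: a density-dependent birth-death chain
# on {0, ..., n} tracking the deterministic flow dx/dt = b(x) - d(x).
import numpy as np

def birth(x):
    """Per-step up-probability at density x (hypothetical logistic rate)."""
    return x * (1.0 - x)

def death(x):
    """Per-step down-probability at density x (hypothetical linear rate)."""
    return 0.5 * x

def simulate_chain(n, x0, T, rng):
    """Run the chain for T*n steps; each step advances time by 1/n,
    so the jump rates effectively scale with n. Returns final density."""
    X = int(round(x0 * n))
    for _ in range(int(T * n)):
        x = X / n
        u = rng.random()
        if u < birth(x):
            X += 1
        elif u < birth(x) + death(x):
            X -= 1
    return X / n

def solve_ode(x0, T, dt=1e-3):
    """Euler integration of the limiting flow dx/dt = birth(x) - death(x)."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (birth(x) - death(x))
    return x

rng = np.random.default_rng(1)
chain_x = simulate_chain(10_000, 0.1, 20.0, rng)
ode_x = solve_ode(0.1, 20.0)
# Both should sit near the ODE's stable equilibrium, x* = 1/2,
# where birth(x) = death(x); the chain's fluctuations are O(1/sqrt(n)).
print(chain_x, ode_x)
```

For large n the random fluctuations around the flow shrink like 1/sqrt(n), and (as in the Gaussian half of the story above) those fluctuations themselves satisfy a functional CLT around the deterministic trajectory.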

See also: Stochastic Processes