## Operator Semigroups

*27 Feb 2017 16:30*

In math, an "operator" is just a mapping which takes points in one function space to points in another function space; the term is used even or especially when the two spaces are the same, which is what I'm interested in here. (Taking derivatives, integrals, and Fourier transforms are all familiar examples.) An "operator semigroup" is, naturally, a collection of operators which forms a semigroup, raising the question of what the latter term means. Here it means that when we compose two operators from the collection, we get another operator in the collection, i.e., that when $ A $ and $ B $ are in the semigroup, so is $ AB $; and that composition is associative, so that $ (AB)C = A(BC) $. If one of the operators is the identity, the semigroup is sometimes called a "monoid". The semigroup becomes a group if every operator has an inverse, which is not the case for many natural examples.
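These definitions can be sketched concretely in the finite-dimensional case, where operators are just matrices and composition is matrix multiplication. (The matrices below are illustrative choices, not from any particular source.)

```python
import numpy as np

# Two operators on R^2, viewed as 2x2 matrices (illustrative examples).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # a shear: invertible
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # a projection onto the x-axis: NOT invertible

# Composition of operators is matrix multiplication, and it is associative:
assert np.allclose((A @ A) @ P, A @ (A @ P))

# The powers {P, P^2, P^3, ...} of a single operator always form a
# semigroup: composing two powers gives another power.  Here P is
# idempotent, so every power is just P itself.
assert np.allclose(P @ P, P)

# Adding the identity operator makes the collection a monoid...
I = np.eye(2)
assert np.allclose(I @ P, P)

# ...but not a group: P has no inverse (its determinant is 0).
assert np.isclose(np.linalg.det(P), 0.0)
```

The projection is the point of the example: it shows why "semigroup" rather than "group" is the right word for many natural collections of operators.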

I supposedly learned about operator groups and semigroups when I learned quantum mechanics, but if I'm honest it didn't make a lot of sense at the time. Things really clicked when I studied dynamical systems and Markov processes. For discrete-time dynamical systems, the operator semigroup is just the powers of the time-evolution operator, a.k.a. the Frobenius-Perron (or Perron-Frobenius) operator; for discrete-time Markov chains, it is the powers of the transition matrix. In continuous time, one has the more subtle notion of a generator, and the Hille-Yosida theorem linking generators to semigroups indexed by a single continuous parameter.
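A minimal numerical sketch of both cases, using a made-up two-state chain (the transition matrix and rate matrix below are illustrative, not from any particular model). In discrete time the semigroup property is $T^m T^n = T^{m+n}$; in continuous time the semigroup is $T_t = e^{tG}$ for a generator $G$, with $T_s T_t = T_{s+t}$.

```python
import numpy as np
from scipy.linalg import expm

# Discrete time: the semigroup is the powers of the transition matrix T.
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])          # rows sum to 1 (stochastic matrix)
p0 = np.array([1.0, 0.0])           # start in state 0 with certainty

# Evolving the distribution n steps = applying T^n, and T^2 T^3 = T^5:
p5 = p0 @ np.linalg.matrix_power(T, 5)
assert np.allclose(
    p0 @ np.linalg.matrix_power(T, 2) @ np.linalg.matrix_power(T, 3), p5)
assert np.isclose(p5.sum(), 1.0)    # still a probability distribution

# Continuous time: the generator G is a rate matrix (off-diagonal
# entries >= 0, rows sum to 0), and T_t = expm(t G).
G = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
assert np.allclose(expm(0.5 * G) @ expm(1.5 * G), expm(2.0 * G))
```

The last assertion is the one-parameter semigroup property $T_s T_t = T_{s+t}$; Hille-Yosida characterizes which operators $G$ can appear in the exponent when the state space is infinite-dimensional, where `expm` no longer makes literal sense.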

Actually, there are *two* families of semigroups for dynamical
systems and Markov processes. One describes the evolution of individual points
or probability measures under the dynamics. The other describes the
conditional expectation of *functions* over the state space. (For
dynamical systems, this is called the "Koopman operator".) These correspond,
in quantum mechanics, to the Schrödinger and Heisenberg pictures,
respectively. This is related to the duality between measures and integrable
functions --- integrating a function with respect to a measure gives you a
single real number, so you can think of measures as one-forms on the vector
space of functions.
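In the finite-state case this duality reduces to associativity of matrix multiplication: distributions evolve as row vectors on the left of the transition matrix, observables as column vectors on the right, and pairing them gives the same number either way. (Again, the matrix and vectors here are made-up illustrations.)

```python
import numpy as np

T = np.array([[0.7, 0.3],
              [0.2, 0.8]])      # transition matrix of a 2-state chain

p = np.array([0.25, 0.75])      # a probability distribution (Schrödinger picture)
f = np.array([1.0, -2.0])       # an observable on the state space (Heisenberg picture)

# Frobenius-Perron / transfer picture: evolve the measure, p -> pT.
# Koopman picture: evolve the observable, f -> Tf (conditional expectation).
# Duality: integrating f against the evolved measure equals integrating
# the evolved f against the original measure.
assert np.isclose((p @ T) @ f, p @ (T @ f))
```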

I would now like to understand all this more deeply and abstractly.

- Recommended:
    - Stewart N. Ethier and Thomas G. Kurtz, Markov Processes: Characterization and Convergence
    - Einar Hille, Functional Analysis and Semi-Groups [Actually, I've only read about half of this]
    - Andrzej Lasota and Michael C. Mackey, Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics [Has a really excellent discussion of the Hille-Yosida theorem]

- Modesty forbids me to recommend:
    - CRS, Almost None of the Theory of Stochastic Processes [I tried to be consistent and clear about presenting Markov process theory from this point of view...]

- To read:
    - Bernd Carl, Entropy, Compactness, and the Approximation of Operators
    - Adam Bobrowski, Convergence of One-parameter Operator Semigroups: In Models of Mathematical Biology and Elsewhere
    - T. Eisner, B. Farkas, M. Haase and R. Nagel, Operator Theoretic Aspects of Ergodic Theory
    - Klaus-Jochen Engel and Rainer Nagel, A Short Course on Operator Semigroups
    - Carlos Kubrusly, Elements of Operator Theory
    - Thomas G. Kurtz, "Semigroups of Conditioned Shifts and Approximation of Markov Processes", Annals of Probability **3** (1975): 618--642