Notebooks

Dynamical Systems (Including Chaos)

13 Sep 2023 20:48

And the future is certain
Give us time to work it out

Take your favorite mathematical space. It might represent physical variables, or biological ones, or social ones, or simply be some abstract mathematical object, whatever those are; in general each variable will be a different coordinate in the space. Come up with a rule (a function) which, given any point in the space, returns another point in the space. It's OK if the rule gives the same result for two input points, but it must deliver some result for every point (it can be many-to-one, but it must be defined at every point). The combination is a discrete-time dynamical system, or a map. The space of points is the state space; the function is the mapping or the evolution operator or the update rule, or any of a number of obviously synonymous phrases.
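For concreteness, here is a minimal sketch in Python (my own illustration, nothing canonical): the state space is the unit interval, and the update rule is the much-studied logistic map, with the parameter value 4 chosen purely for the sake of the example.

    # Update rule for a discrete-time dynamical system whose state space is
    # the unit interval [0, 1]: the logistic map with parameter r.
    def logistic_map(x, r=4.0):
        """Given a point x of the state space, return its image under the map."""
        return r * x * (1.0 - x)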

The time-evolution, the dynamics, work like this: start with your favorite point in the state space, and find the point the update rule specifies. Then go to that point --- the image of the first --- and apply the rule again. Repeat forever, to get the orbit or trajectory of the point. If you have a favorite set of points, you can follow their dynamics by applying the mapping to each point separately. If your rule is well-chosen, then the way the points in state space move around matches the way the values of measured variables change over time, each update or time-step representing, generally, a fixed amount of real time. Then the dynamical system models some piece of the world. Of course it may not model it very well, or may even completely fail in what it set out to do, but let's not dwell on such unpleasant topics, or the way some people seem not to care whether the rules they propose really model what they claim they model.
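Continuing the sketch above, iterating the update rule from an initial condition traces out its orbit; something like this (again, just an illustration, with function names of my own devising):

    # Follow the orbit of an initial condition under repeated application
    # of an update rule.
    def orbit(update, x0, steps):
        """Return the trajectory x0, update(x0), update(update(x0)), ..."""
        trajectory = [x0]
        for _ in range(steps):
            trajectory.append(update(trajectory[-1]))
        return trajectory

    # e.g., the first ten images of the point 0.2 under the logistic map
    print(orbit(lambda x: 4.0 * x * (1.0 - x), 0.2, 10))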

This is all for discrete-time dynamics, as I said. But real time is continuous. (Actually, it might not be. If it isn't continuous, though, the divisions are so tiny that for practical purposes it might as well be.) So it would be nice to be able to model things which change in continuous time. This is done by devising a rule which says, not what the new point in state space is, not how much all the variables change, but the rates of change of all the variables, as functions of the point in state space. This is calculus, or more specifically differential equations: the rule gives us the time-derivatives of the variables, and to find out what happens at any later time we integrate. (The rule which says what the rates of change are is the vector field --- think of it as showing the direction in which a state-space point will move.) A continuous-time dynamical system is called a flow.
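As a sketch of the continuous-time case (my own toy example, using the Lorenz equations with their conventional parameter values): the vector field gives the time-derivatives, and a crude fixed-step Euler integration approximates the flow.

    # The vector field assigns a rate of change to each point of the state
    # space; here, the Lorenz system's three coupled differential equations.
    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    # Approximate the flow by taking many small Euler steps: move each
    # variable in the direction of its time-derivative for a time dt.
    def flow(state, t, dt=0.001, field=lorenz):
        for _ in range(int(round(t / dt))):
            rates = field(state)
            state = tuple(s + dt * r for s, r in zip(state, rates))
        return state

    print(flow((1.0, 1.0, 1.0), 1.0))   # the state one time unit later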

In either maps or flows, there can be (and generally are) sets of points which are left unchanged by the dynamics. (More exactly, for any point in the set, there is always some point in the set which maps (or flows) into its place, so the set doesn't change. The set is its own image.) These sets are called invariant. Now, we say that a point is attracted to an invariant set if, as we follow its trajectory, its distance from the set shrinks towards zero. If all points sufficiently close to the invariant set are attracted to it, then the set is an attractor. (Technically: there is some neighborhood of the invariant set whose image is contained within that neighborhood. Since the invariant set is, after all, invariant, the shrinkage has to come from non-invariant points moving closer to the invariant set.) An attractor's basin of attraction is the set of all points which are attracted to it.
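A crude numerical sketch of a basin of attraction (my own example): iterate a map with a known attracting fixed point from a grid of initial conditions, and record which of them end up at the fixed point.

    # The logistic map at r = 2.5 has an attracting fixed point at x* = 0.6;
    # which initial conditions in [0, 1] are attracted to it?
    def f(x, r=2.5):
        return r * x * (1.0 - x)

    def attracted_to_fixed_point(x0, target=0.6, steps=1000, tol=1e-6):
        x = x0
        for _ in range(steps):
            x = f(x)
        return abs(x - target) < tol

    grid = [i / 100.0 for i in range(101)]               # initial conditions in [0, 1]
    basin = [x0 for x0 in grid if attracted_to_fixed_point(x0)]
    print(min(basin), max(basin))                        # roughly the open interval (0, 1)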

The reasons for thinking about attractors, basins of attraction, and the like, are that, first, they control (or even are) the long-run behavior of the system, and, second, they let us think about dynamics, about change over time, geometrically, in terms of objects in (state) space, like attractors, and the vector field around attractors.

Imagine you have a one-dimensional state space, and pick any two points near each other and follow their trajectories. Calculate the average exponential rate at which the distance between them grows or shrinks over time; this is the Lyapunov exponent of the system. (If the points are chosen in a technically-reasonable manner, it doesn't matter which pair you use, you get the same number for the Lyapunov exponent.) If it is negative, then nearby points move together exponentially quickly; if it is positive, they separate exponentially; if it is zero, either they don't move relative to one another, or they do so at some sub-exponential rate. If you have n dimensions, there is a spectrum of n Lyapunov exponents, which say how nearby points move together or apart along different axes (not necessarily the coordinate axes). So a multi-dimensional system can have some negative Lyapunov exponents (directions along which the state space contracts), some positive ones (expanding directions) and some zero ones (directions of no, or slow, relative change). At least one of a flow's Lyapunov exponents is always zero. (Exercise: why?) The sum of all the Lyapunov exponents says whether small volumes of state space expand (positive sum), contract (negative sum), or are preserved (zero sum).
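For a one-dimensional map, a standard recipe estimates the Lyapunov exponent as the long-run average of the logarithm of the absolute value of the map's derivative along a trajectory. A sketch (my own, using the logistic map at r = 4, where the exponent is known to equal log 2):

    import math

    def f(x, r=4.0):
        return r * x * (1.0 - x)

    def f_prime(x, r=4.0):
        return r * (1.0 - 2.0 * x)

    # Average log |f'(x)| along a long trajectory, after discarding an
    # initial transient; this estimates the Lyapunov exponent.
    def lyapunov(x0, steps=10000, transient=100):
        x = x0
        for _ in range(transient):
            x = f(x)
        total = 0.0
        for _ in range(steps):
            total += math.log(abs(f_prime(x)))
            x = f(x)
        return total / steps

    print(lyapunov(0.2), math.log(2))    # both should be about 0.693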

If there is a positive Lyapunov exponent, then the system has sensitive dependence on initial conditions. We can start with two points --- two initial conditions --- which are arbitrarily close, and if we wait only a fairly short time, they will be separated by some respectable, macroscopic distance. More exactly, suppose we want to know how close we need to make two initial conditions so that they'll stay within some threshold distance of each other for a given length of time. A positive Lyapunov exponent says that, to increase that length of time by a fixed amount, we need to reduce the initial separation by a fixed factor (the achievable time grows only logarithmically as the initial separation shrinks). Now think of trying to predict the behavior of the dynamical system. We can never measure the initial condition exactly, but only to within some finite error. So the relationship between our guess about where the system is, and where it really is, is that of two nearby initial conditions, and our prediction is off by more than an acceptable amount when the two trajectories diverge by more than that amount. Call the time when this happens the prediction horizon. Sensitive dependence says that adding a fixed amount of time to the prediction horizon means reducing the initial measurement error by a fixed factor, which quickly becomes hopeless. More optimistically, if we re-measure where the system is after some amount of time, we can work back to say more exactly where the initial condition was. To reduce the (retrospective) uncertainty about the initial condition by a fixed factor, wait a fixed amount of time before re-measuring...
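To make the logarithmic scaling concrete, here is a rough sketch (my own numbers, nothing from elsewhere): run two copies of the logistic map at r = 4 from initial conditions a distance delta apart, and count the steps until they disagree by more than a tolerance. Shrinking delta by a factor of ten should buy only about log(10)/log(2), roughly three and a third, extra steps.

    def f(x):
        return 4.0 * x * (1.0 - x)

    # Steps until two trajectories, started delta apart, differ by more than
    # the threshold: a crude "prediction horizon" for measurement error delta.
    def horizon(x0, delta, threshold=0.1):
        x, y = x0, x0 + delta
        steps = 0
        while abs(x - y) < threshold:
            x, y = f(x), f(y)
            steps += 1
        return steps

    for delta in (1e-3, 1e-6, 1e-9, 1e-12):
        print(delta, horizon(0.2, delta))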

Sensitive dependence is not, by itself, dynamically interesting; very trivial, linear dynamical systems have it. (Exponential growth, for instance!) Something like it has been appreciated from very early times in dynamics. Laplace, for instance, so often held up to ridicule or insult as a believer in determinism and predictability, fully recognized that (to use the modern jargon) very small differences in initial conditions can have very large effects, and that our predictions are correspondingly inexact and uncertain. That's why he wrote books on probability theory! And as a proverb, the butterfly effect ("The way a butterfly flaps its wings over X today can change whether or not there's a hurricane over Y in a month") isn't really much of an improvement over "For want of a nail, a horse was lost". (It did, however, inspire Terry Pratchett's fine comic invention, the Quantum Weather Butterfly, which causes small freak storms when it flaps its wings.) No, what's dynamically interesting is the combination of sensitive dependence and some kind of limit on exponential spreading. This could be because the state space as a whole is bounded, or because the sum of the Lyapunov exponents is negative or zero. That, roughly speaking, is chaos. (There are much more precise definitions!) In particular, if the sum of the Lyapunov exponents is negative, but some are positive, then there is an attractor with exponential separation of nearby points on the attractor --- called, for historical reasons, a strange attractor.
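As a sketch of that last point (my own example, with the standard parameter choices): the Henon map contracts areas by the same factor everywhere, so its two Lyapunov exponents sum to log(0.3), which is negative; yet its largest exponent, estimated below by carrying a tangent vector along a trajectory and renormalizing it, comes out positive, at about 0.42. The invariant set the trajectory settles onto is the familiar strange attractor.

    import math

    A, B = 1.4, 0.3          # the standard Henon-map parameters

    def henon(x, y):
        return 1.0 - A * x * x + y, B * x

    def jacobian_times(x, y, u, v):
        # The map's Jacobian at (x, y), applied to the tangent vector (u, v);
        # its determinant is -B everywhere, so areas shrink by a factor of B.
        return -2.0 * A * x * u + v, B * u

    def largest_exponent(steps=100000):
        x, y = 0.1, 0.1      # an initial condition in the basin of the attractor
        u, v = 1.0, 0.0      # an arbitrary tangent vector
        total = 0.0
        for _ in range(steps):
            u, v = jacobian_times(x, y, u, v)
            x, y = henon(x, y)
            norm = math.hypot(u, v)
            total += math.log(norm)
            u, v = u / norm, v / norm       # renormalize to avoid overflow
        return total / steps

    print(largest_exponent(), math.log(B))  # about +0.42, versus a sum of about -1.20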

Chaotic systems have many fascinating properties, and there is a good deal of evidence that much of nature is chaotic; the solar system, for instance. (This is actually, by a long and devious story, where dynamical systems theory comes from.) Chaos raises a lot of neat and nasty problems about how to understand dynamics from observations, and about what it means to make a good mathematical model of something. But it's not the whole of dynamics, and in some ways not even the most interesting part, and it's certainly not the end of "linear western rationalism" or anything like that.

Things I ought to talk about here: Time series. Geometry from a time series/attractor reconstruction. Symbolic dynamics. Structural stability. Bifurcations. The connection to fractals. Spatiotemporal chaos.

Uses and abuses: Military uses. Popular and semi-popular views. Metaphorical uses. Appropriation by non-scientists.

See also: Algorithmic Information Theory; Cellular Automata; Complexity; Complexity Measures; Computational Mechanics; Ergodic Theory; Evolution; Foundations and History of Statistical Mechanics; Information Theory [the sum of the positive Lyapunov exponents is the rate of information production]; Koopman Operators for Modeling Dynamical Systems and Time Series; Machine Learning, Statistical Inference and Induction; Math I Ought to Learn; Neuroscience; Pattern Formation; Philosophy of Science; Probability; Self-Organization; Simulation; State-Space Reconstruction; Statistics; Statistical Mechanics; Synchronization; Time Series, or Statistics for Stochastic Processes and Dynamical Systems; Turbulence

