Notebooks

Power Law Distributions, 1/f Noise, Long-Memory Time Series

10 Oct 2024 13:29

Why do physicists care about power laws so much?

I'm probably not the best person to speak on behalf of our tribal obsessions (there was a long debate among the faculty at my thesis defense as to whether "this stuff is really physics"), but I'll do my best. There are two parts to this: power-law decay of correlations, and power-law size distributions. The link is tenuous, at best, but they tend to get run together in our heads, so I'll treat them both here.

The reason we care about power law correlations is that we're conditioned to think they're a sign of something interesting and complicated happening. The first step is to convince ourselves that in boring situations, we don't see power laws. This is fairly easy: there are pretty good and rather generic arguments which say that systems in thermodynamic equilibrium, i.e. boring ones, should have correlations which decay exponentially over space and time; the reciprocals of the decay rates are the correlation length and the correlation time, which say how spatially extended, and how long-lived, a typical fluctuation should be. This is roughly first-semester graduate statistical mechanics. (You can find those arguments in, say, volume one of Landau and Lifshitz's Statistical Physics.)
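
As a toy illustration of the boring case (my own sketch, not anything from Landau and Lifshitz): a Gaussian AR(1) process, the discrete-time cousin of an Ornstein-Uhlenbeck process, has an autocorrelation that decays exponentially, with the correlation time given by the reciprocal of the decay rate. The coefficient and the crude autocorrelation estimator below are illustrative choices only.

```python
# Sketch: exponential decay of correlations in an equilibrium-like process.
# The AR(1) coefficient phi is an illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.9, 100_000            # correlation time tau = -1/ln(phi), about 9.5 steps
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

def acf(y, lags):
    """Crude sample autocorrelation at the given positive lags."""
    y = y - y.mean()
    c0 = np.dot(y, y) / len(y)
    return np.array([np.dot(y[:-k], y[k:]) / len(y) / c0 for k in lags])

lags = np.array([1, 5, 10, 20, 40])
print(acf(x, lags))              # estimated autocorrelations
print(phi ** lags)               # theory: exp(-lag / tau)
```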

Second semester graduate stat. mech. is where those arguments break down --- either for systems which are far from equilibrium (e.g., turbulent flows), or in equilibrium but very close to a critical point (e.g., the liquid-gas transition at its critical point, or the transition from a non-magnetic phase to a magnetized one). At critical points the fluctuations have correlations which decay like power laws, and many non-equilibrium systems do too. (Again, for phase transitions, Landau and Lifshitz has a good discussion.) If you're a statistical physicist, phase transitions and non-equilibrium processes define the terms "complex" and "interesting" --- especially phase transitions, since we've spent the last forty years or so developing a very successful theory of critical phenomena. Accordingly, whenever we see power law correlations, we assume there must be something complex and interesting going on to produce them. (If this sounds like the fallacy of affirming the consequent, that's because it is.) By a kind of transitivity, this makes power laws interesting in themselves.
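
To put the contrast in standard textbook notation (mine, not anything the paragraph above commits to): away from criticality, equal-time correlations fall off exponentially over a finite correlation length, while right at the critical point that length diverges and the decay becomes a power law.

```latex
% Off-critical ("boring") case: finite correlation length \xi
C(r) \sim e^{-r/\xi}
% At the critical point \xi diverges; with d the spatial dimension and
% \eta the anomalous-dimension exponent,
C(r) \sim r^{-(d - 2 + \eta)}
```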

Since, as physicists, we're generally more comfortable working in the frequency domain than the time domain, we often take the Fourier transform of the autocorrelation function, which gives the power spectrum. A power-law decay for the correlations as a function of time translates into a power-law decay of the spectrum as a function of frequency, so this is also called "1/f noise".
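
In symbols, this is just the usual Wiener-Khinchin bookkeeping (my notation): the power spectrum is the Fourier transform of the autocorrelation function, and slowly decaying correlations pile spectral power up at low frequencies.

```latex
S(f) = \int_{-\infty}^{\infty} C(t)\, e^{-2\pi i f t}\, dt ,
\qquad
C(t) \sim |t|^{-\gamma}, \; 0 < \gamma < 1
\;\Longrightarrow\;
S(f) \sim |f|^{-(1 - \gamma)} \quad \text{as } f \to 0 .
```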

Similarly for power-law distributions. A simple use of the Einstein fluctuation formula says that thermodynamic variables will have Gaussian distributions with the equilibrium value as their mean. (The usual version of this argument is not very precise.) We're also used to seeing exponential distributions, as the probabilities of microscopic states. Other distributions weird us out. Power-law distributions weird us out even more, because they seem to say there's no typical scale or size for the variable, whereas the exponential and the Gaussian cases both have natural scale parameters. There is a connection here with fractals, which also lack typical scales, but I don't feel up to going into that, and certainly a lot of the power laws physicists get excited about have no obvious connection to any kind of (approximate) fractal geometry. And there are lots of power law distributions in all kinds of data, especially social data --- that's why they're also called Pareto distributions, after the economist and sociologist Vilfredo Pareto.
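
To make "no typical scale" precise (standard Pareto algebra, in my notation): with a power-law survival function, the chance of exceeding some multiple of x, given that you have already exceeded x, does not depend on x at all; for an exponential or a Gaussian that conditional chance shrinks with x, which is what fixes their scales.

```latex
\Pr(X > x) = \left( \frac{x}{x_{\min}} \right)^{-\alpha}, \quad x \ge x_{\min}
\;\Longrightarrow\;
\Pr(X > c x \mid X > x) = c^{-\alpha} \quad \text{for every } x \ge x_{\min} .
```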

Physicists have devoted quite a bit of time over the last two decades to seizing on what look like power laws in various non-physical sets of data, and trying to explain them in terms we're familiar with, especially phase transitions. (Thus "self-organized criticality".) So badly are we infatuated that there is now a huge, rapidly growing literature devoted to "Tsallis statistics" or "non-extensive thermodynamics", which is a recipe for modifying normal statistical mechanics so that it produces power law distributions; and this, so far as I can see, is its only good feature. (I will not attempt, here, to support that sweeping negative verdict on the work of many people who have more credentials and experience than I do.) This has not been one of our more successful undertakings, though the basic motivation --- "let's see what we can do!" --- is one I'm certainly in sympathy with.

There have been two problems with the efforts to explain all power laws using the things statistical physicists know. One is that (to mangle Kipling) there turn out to be nine and sixty ways of constructing power laws, and every single one of them is right, in that it does indeed produce a power law. Power laws turn out to result from a kind of central limit theorem for multiplicative growth processes, an observation which apparently dates back to Herbert Simon, and which has been rediscovered by a number of physicists (for instance, Sornette). Reed and Hughes have established an even more deflating explanation (see below). Now, just because these simple mechanisms exist, doesn't mean they explain any particular case, but it does mean that you can't legitimately argue "My favorite mechanism produces a power law; there is a power law here; it is very unlikely there would be a power law if my mechanism were not at work; therefore, it is reasonable to believe my mechanism is at work here." (Deborah Mayo would say that finding a power law does not constitute a severe test of your hypothesis.) You need to do "differential diagnosis", by identifying other, non-power-law consequences of your mechanism, which other possible explanations don't share. This, we hardly ever do.
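
To see how cheaply a power law can be had, here is a stripped-down caricature in the spirit of the Reed and Hughes argument (my own simplification, not their model): something that grows exponentially, and is observed after an exponentially distributed random lifetime, has an exactly Pareto-tailed size distribution, with no criticality anywhere.

```python
# Sketch: exponential growth, killed at an exponentially distributed random
# time, has an exact Pareto tail, P(X > x) = x**(-lam/mu) for x >= 1.
# The rates mu and lam are illustrative; this is a caricature, not the
# Reed and Hughes model itself.
import numpy as np

rng = np.random.default_rng(1)
mu, lam = 1.0, 2.0                               # growth rate, killing rate
T = rng.exponential(scale=1.0 / lam, size=1_000_000)
X = np.exp(mu * T)                               # size when observation stops

for x in (2.0, 4.0, 8.0, 16.0):
    print(x, (X > x).mean(), x ** (-lam / mu))   # empirical vs. exact tail
```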

Similarly for 1/f noise. Many different kinds of stochastic process, with no connection to critical phenomena, have power-law correlations. Econometricians and time-series analysts have studied them for quite a while, under the general heading of "long-memory" processes. You can get them from things as simple as a superposition of Gaussian autoregressive processes. (We physicists have begun to wake up to long-memory processes, mostly under the heading of "fractional Brownian motion".)
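
Here is a minimal sketch of the superposition point, in the spirit of Granger's aggregation argument (the Beta distribution for the coefficients is purely my illustrative choice): average together many Gaussian AR(1) processes whose coefficients pile up near one, and the aggregate autocorrelation decays far more slowly than any single exponential.

```python
# Sketch: a sum of independent Gaussian AR(1) processes with coefficients
# spread out near 1 has very slowly decaying autocorrelation.
# The Beta(5, 1) law for the coefficients is an illustrative assumption.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
n, m = 100_000, 200
phis = rng.beta(5.0, 1.0, size=m)                # AR coefficients piled near 1
x = np.zeros(n)
for phi in phis:
    noise = rng.standard_normal(n)
    x += lfilter([1.0], [1.0, -phi], noise) / m  # one AR(1) component

def acf(y, lags):
    """Crude sample autocorrelation at the given positive lags."""
    y = y - y.mean()
    c0 = np.dot(y, y) / len(y)
    return [np.dot(y[:-k], y[k:]) / len(y) / c0 for k in lags]

print(acf(x, [1, 10, 100, 1000]))                # compare with, say, 0.9**lag
```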

The other problem with our efforts has been that a lot of the power laws we've been trying to explain are not, in fact, power laws. I should perhaps explain that statistical physicists are called that, not because we know a lot of statistics, but because we study the large-scale, aggregated effects of the interactions of large numbers of particles, including, specifically, the effects which show up as fluctuations and noise. In doing this we learn, basically, nothing about drawing inferences from empirical data, beyond what we may remember about curve fitting and propagation of errors from our undergraduate lab courses. Some of us, naturally, do know a lot of statistics, and even teach it --- I might mention Josef Honerkamp's superb Stochastic Dynamical Systems. (Of course, that book is out of print and hardly ever cited...)

If I had, oh, let's say fifty dollars for every time I've seen a slide (or a preprint) where one of us physicists makes a log-log plot of their data, and then reports as the exponent of a new power law the slope they got from doing a least-squares linear fit, I'd at least not grumble. If my colleagues had gone to statistics textbooks and looked up how to estimate the parameters of a Pareto distribution, I'd be a happier man. If any of them had actually tested the hypothesis that they had a power law against alternatives like stretched exponentials, or especially log-normals, I'd think the millennium was at hand. (If you want to know how to do these things, please read this paper, whose merits are entirely due to my co-authors.) The situation for 1/f noise is not so dire, but there have been and still are plenty of abuses, starting with the fact that simply taking the fast Fourier transform of the autocovariance function does not give you a reliable estimate of the power spectrum, particularly in the tails. (On that point, see, for instance, Honerkamp.)
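
For concreteness, here is a toy comparison of my own (not the code from the paper linked above): the textbook maximum-likelihood estimate of a continuous Pareto exponent, next to the exponent implied by a least-squares fit to the log-log plot of the empirical survival function.

```python
# Sketch: estimating a Pareto exponent by maximum likelihood versus by
# regressing the log-log survival plot. alpha is the density exponent,
# p(x) proportional to x**(-alpha); the sample size and parameters are
# illustrative choices.
import numpy as np

rng = np.random.default_rng(3)
alpha_true, x_min, n = 2.5, 1.0, 10_000
x = x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))   # Pareto draws

# Maximum likelihood, as the statistics textbooks prescribe
alpha_mle = 1.0 + n / np.log(x / x_min).sum()

# The habit complained about above: least-squares slope on a log-log plot
xs = np.sort(x)
surv = 1.0 - np.arange(n) / n                    # empirical P(X >= x)
slope, _ = np.polyfit(np.log(xs), np.log(surv), 1)
alpha_ls = 1.0 - slope                           # slope estimates -(alpha - 1)

print(alpha_true, alpha_mle, alpha_ls)
```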

