
Tsallis Statistics, Statistical Mechanics for Non-extensive Systems and Long-Range Interactions

05 Sep 2024 10:36

A standard assumption of statistical mechanics is that quantities like energy are "extensive" variables, meaning that the total energy of the system is proportional to the system size; similarly the entropy is also supposed to be extensive. Generally, at least for the energy, this is justified by appealing to the short-range nature of the interactions which hold matter together, form chemical bonds, etc. But suppose one deals with long-range interactions, most prominently gravity; one can then find that energy is not extensive. This makes the life of the statistical mechanic much harder.

Constantino Tsallis is a physicist who came up with a supposed solution, based on the idea of maximum entropy. One popular way to derive the (canonical) equilibrium probability distribution is the following. One purports to know the average values of some quantities, such as the energy of the system, the number of molecules, the volume it occupies, etc. One then searches for the probability distribution which maximizes the entropy, subject to the constraint that it give the right average values for your supposed givens. Through the magic of Lagrange multipliers, the entropy-maximizing distribution can be shown to have the right, exponential, form, and the Lagrange multipliers which go along with your average-value constraints turn out to be the "intensive" variables paired with (or "conjugate to") the extensive ones whose means are constrained (energy : temperature :: volume : pressure :: molecular number : chemical potential, etc.). But, as I said, the entropy is an extensive quantity. What Tsallis proposed is to replace the usual (Gibbs) entropy with a new, non-extensive quantity, now commonly called the Tsallis entropy, and maximize that, subject to constraints. There is actually a whole infinite family of Tsallis entropies, indexed by a real-valued parameter q, which supposedly quantifies the degree of departure from extensivity (you get the usual entropy back again when q = 1). One can then grind through and show that many of the classical results of statistical mechanics can be translated into the new setting. What has really caused this framework to take off, however, is that while normal entropy-maximization gives you exponential, Boltzmann distributions, Tsallis statistics give you power-law, Pareto distributions, and everyone loves a power-law. (Strictly speaking, Tsallis distributions are type II generalized Pareto distributions, with power-law tails.) Today you'll find physicists applying Tsallis statistics to nearly anything with a heavy right tail.
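
For concreteness, here is a sketch of the two recipes, in my own notation; signs and normalizations vary from paper to paper, so check the originals before leaning on any particular convention. Maximizing the Gibbs entropy
\[ S = -\sum_{i}{p_i \log p_i} \]
subject to normalization and a fixed mean energy \( \langle E \rangle = \sum_{i}{p_i E_i} \) gives the Boltzmann distribution \( p_i = e^{-\beta E_i}/Z \), with the Lagrange multiplier \( \beta \) playing the role of inverse temperature. Tsallis's replacement is
\[ S_q = \frac{1 - \sum_{i}{p_i^q}}{q - 1} , \]
which goes back to the Gibbs entropy as \( q \rightarrow 1 \) (expand \( p_i^q = p_i e^{(q-1)\log p_i} \) to first order in \( q - 1 \)). Maximizing \( S_q \) under the constraints discussed below gives, up to convention-dependent shifts and rescalings of \( \beta \),
\[ p_i \propto {\left[ 1 - (1-q)\beta E_i \right]}^{1/(1-q)} , \]
the "q-exponential", whose tail for \( q > 1 \) decays like \( E_i^{-1/(q-1)} \), i.e., a power law.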

I have to say I don't buy this at all. Leaving to one side my skepticism about the normal maximum entropy story, at least as it's usually told (e.g. by E. T. Jaynes), there are a number of features which make me deeply suspicious of Tsallis statistics.

  1. It's simply not true that one maximizes the Tsallis entropy subject to constraints on the mean energy \( \langle E \rangle = \sum_{i}{p_i E_i} \). Rather, to get things to work out, you have to fix the value of a "generalized mean" energy, \( {\langle E \rangle}_{q} = \sum_{i}{p_i^q E_i} / \sum_{i}{p_i^q} \). (This can be interpreted as replacing the usual average, an expectation taken with respect to the actual probability distribution, by an expectation taken with respect to a new, "escort" probability distribution; both are written out in the display after this list.) I have yet to encounter anyone who can explain why such generalized averages should be either physically or probabilistically natural; the usual answer I get is "OK, yes, it's weird, but it works, doesn't it?"
  2. There is no information-theoretic justification for the Tsallis entropy, unlike the usual Gibbs entropy. The Tsallis form is, however, a kind of low-order truncation of the Rényi entropy, which does have information-theoretic interest; the exact relation is spelled out in the display after this list. (The Tsallis form has been independently rediscovered many times in the literature, going back to the 1960s, usually starting from the Rényi entropy. A brief review of the "labyrinthic history of the entropies" can be found in one of Tsallis's papers, cond-mat/0010150.) Maximizing the Rényi entropy under ordinary mean-value constraints leads to different distributions than maximizing the Tsallis entropy under its escort-mean constraints.
  3. I have pretty severe doubts about the backing story here, about long-range interactions leading to a non-extensive form for the entropy, particularly when, in derivations which begin with such a story, I often see people blithely factoring the probability that a system is in some global state into the product of the probabilities that its components are in various states, i.e., assuming independent sub-systems.
  4. There are alternative, non-max-ent derivations of the usual statistical-mechanical distributions; such derivations do not seem forthcoming for Tsallis statistics. In particular, large deviations arguments, which essentially show how to get such distributions as emergent, probabilistic consequences of individual-level interactions, never seem to lead to Tsallis statistics, even when one has the kind of long-range interactions which, supposedly, Tsallis statistics ought to handle.
  5. There is no empirical evidence that Tsallis statistics correctly gives the microscopic energy distribution for any known system.
  6. Zanette and Montemurro have shown that you can get any distribution you like out of the Tsallis recipe, simply by changing the function whose generalized average you take as your given (schematically, the mechanism is spelled out in the display after this list). The usual power-law prescription only holds if you constrain either \( x \) or \( x^2 \), but one of the more "successful" applications requires constraining the generalized mean of \( x^{2\alpha}/2 - c\,\mathrm{sgn}(x)({|x|}^{\alpha} - {|x|}^{3\alpha}/3) \), with \( c \) and \( \alpha \) as adjustable parameters! (In fairness, I should point out that if you're willing to impose sufficiently weird constraints, you can generate arbitrary distributions from the usual max. ent. procedure, too; this is one of the reasons why I don't put much faith in that procedure.)
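
To make points 1, 2, and 6 a little more concrete, here are the definitions involved, again in my own notation and with the usual caveat about conventions. The "escort" distribution of point 1 is
\[ P_i = \frac{p_i^q}{\sum_{j}{p_j^q}} , \]
so the generalized mean \( {\langle E \rangle}_{q} = \sum_{i}{P_i E_i} \) is an ordinary expectation, only taken under \( P \) rather than under the physical distribution \( p \). The Rényi entropy of point 2 is
\[ H^{R}_{q} = \frac{1}{1-q} \log{\sum_{i}{p_i^q}} ; \]
since \( \sum_{i}{p_i^q} = 1 - (q-1) S_q = e^{(1-q) H^{R}_{q}} \), the Tsallis and Rényi entropies are monotone functions of one another at fixed \( q \), and expanding the exponential to first order in \( (1-q) H^{R}_{q} \) gives \( S_q \approx H^{R}_{q} \), which is the sense in which the Tsallis form is a low-order truncation of the Rényi form. As for point 6, the mechanism is that maximizing \( S_q \) subject to a generalized-mean constraint on an arbitrary function \( f \) gives, schematically,
\[ p(x) \propto {\left[ 1 - (1-q)\beta f(x) \right]}^{1/(1-q)} , \]
so a free choice of \( f \) lets you manufacture more or less any distribution you please.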

I think the extraordinary success of what is, in the end, a slightly dodgy recipe for generating power-laws illustrates some important aspects, indeed unfortunate weaknesses, in the social and intellectual organization of "the sciences of complexity". But that rant will have to wait for my book on The Genealogy of Complexity, which, prudently, means waiting until I'm safely tenured. (Update, 2021: I am indeed now safely tenured, but I have better, or at least more pressing, things to do, so enjoy.)

I should also discuss the "superstatistics" approach here, which tries to generate non-Boltzmann statistics as mixtures of Boltzmann distributions, physically justified by appealing to fluctuating intensive variables, such as temperature. I will only remark that the superstatistics approach severs any connection between the use of these distributions and non-extensivity or long-range interactions, and that results in the statistical literature on getting generalized Pareto distributions from mixtures of exponentials go back to at least 1952.
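
The arithmetic behind that last remark is elementary. In the standard example (notation mine), let the inverse temperature fluctuate with a gamma density \( \pi(\beta) = \frac{b^a}{\Gamma(a)} \beta^{a-1} e^{-b\beta} \); then the mixture of exponential densities is
\[ \int_{0}^{\infty}{\beta e^{-\beta E} \pi(\beta)\, d\beta} = \frac{a}{b} {\left( 1 + \frac{E}{b} \right)}^{-(a+1)} , \]
a Lomax, i.e. type II generalized Pareto, density, which matches the q-exponential form above if you read off \( 1/(q-1) = a+1 \).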

Finally, it has come to my attention that some people are citing this notebook as though it had some claim to authority. Fond though I am of my own opinions, this seems to me to be deeply wrong. The validity of Tsallis statistics, as a scientific theory, ought to be settled in the usual way, by means of the peer-reviewed scientific literature, subject to all its usual conventions and controls. It's obvious from the foregoing that I have pretty strong beliefs in how that debate ought to go, and (this may not be so clear) enough faith in the scientific community that I think, in the long run, it will go that way, but no one should confuse my opinion with a scientific finding. For myself, this page is a way to organize my own thoughts; for everyone else, it's either entertainment, or at best an opinionated collection of pointers to the real discussion.


Previous versions: 27 Feb 2017 16:30; 2007-01-29 23:22; first version several years older (2003? earlier?)

