The Bactra Review: Occasional and eclectic book reviews by Cosma Shalizi   19

An Interpretive Introduction to Quantum Field Theory

by Paul Teller

Princeton University Press, 1995; first paperback printing, 1997

Quantum field theory is a notoriously hard theory to learn. The best physics students do well with it, but many able students flounder, eventually resigning themselves to going through the motions with their problem sets to make it through the course. Among philosophers of physics, I have heard many colleagues express interest, only to learn a year or two later that they had somehow gotten involved in other things.

This is altogether too accurate a picture, and part of the reason is that the teaching of QFT hasn't changed appreciably since the '50s, so that students today are almost as confused as the theory's founders --- as if we "had to learn nonrelativistic quantum mechanics from Heisenberg's 1930 Chicago lectures." Teller's brief, modest book is admirably calculated to fix that. It is not a technical introduction to QFT, and will not teach the reader how to calculate anything, but it does an excellent job of explaining what one calculates, why one does so in a certain way, and which parts of the calculations are themselves meaningful --- something the currently existing texts are all shockingly bad at (Weinberg's Quantum Theory of Fields is much better than the others, but still leaves a lot to be desired). In addition to helping make the theory comprehensible, the book offers more purely philosophical work, particularly an analysis of the notion of "particle" (see below). Throughout, Teller assumes that readers are familiar with at least non-relativistic quantum mechanics and the Dirac bra-ket notation; so will I.

The first chapter lays out, in a very simple way, Teller's ideas about what a scientific theory consists of (a bunch of models) and what it means to interpret one ("fill in" the similarity relation between the models and the appropriate chunk of the world), and then summarizes the contents of the rest of the book. Some of the best models, like the harmonic oscillator, have a great many useful interpretations; QFT has at least two, since the same formalism gets used in statistical mechanics, and it would be nice if someone --- say, Teller --- would interpret that for us as well.

The second chapter is the most philosophical one, i.e., the one most likely to be skipped by physics students. (Teller realizes this.) Here he goes to work on the idea of "particles," arguing that most of its components have got to go. The everyday notion of particles, in his account, includes exact trajectories and "primitive thisness" or "haecceity". Already in quantum mechanics, exact trajectories must go, on account of the uncertainty relations, though a very high degree of localization is of course possible. As for primitive thisness, as near as I can make out, the situation is as follows. Suppose we have two objects, which we propose to call Hortense and Calliope, and which can be in either of two states, flush and broke. (Teller prefers a and b, and 1 and 2, respectively.) If the objects have primitive thisness, there is an actual distinction between Hortense's being flush while Calliope is broke, and Hortense's being broke while Calliope is flush. The most obvious way of extending one-particle quantum mechanics to handle multiple particles, rejoicing in the name of the labeled tensor product Hilbert space formalism, respects this distinction; Nature, to all appearances, does not. Apparently, it makes no more sense to distinguish between Hortense and Calliope than between individual dollars in a bank account (an analogy which goes back to Schrödinger). Like those dollars, objects in quantum field theory (which Teller, naturally enough, proposes to call "quanta") can be aggregated but not counted out. When all the conceptual surgery is finished, quanta are left as little more than bundles of dispositions to display certain properties on measurement, which would probably have pleased Heisenberg, and will please any surviving positivists, though Teller is careful to be neutral about realism.
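
To put the point in the bra-ket notation the book assumes (my gloss, with whimsical state labels, not Teller's own), the labeled tensor product formalism distinguishes two states,

    |{\rm flush}\rangle_H \otimes |{\rm broke}\rangle_C \quad \mbox{and} \quad |{\rm broke}\rangle_H \otimes |{\rm flush}\rangle_C ,

whereas what Nature actually hands us is (for bosons) the single symmetrized state

    \tfrac{1}{\sqrt{2}} \left( |{\rm flush}\rangle_H \otimes |{\rm broke}\rangle_C + |{\rm broke}\rangle_H \otimes |{\rm flush}\rangle_C \right) ,

in which no measurement can tell us whether it is Hortense or Calliope who is flush.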

In any case, the labeled tensor product Hilbert space formalism doesn't really suit the new conception of quanta; we can manipulate it in such a way as to get numerically correct answers, but at the cost of lugging around considerable "surplus formal structure." A new, more svelte formalism would be nice, and chapter three provides it, in the form of Fock space, a Hilbert space whose basis elements are, or correspond to, situations with a definite number of quanta in each of the possible single-particle states. (This is also known as the "occupation number formalism," for obvious reasons.) Each of those single-particle states now gets its own raising and lowering operator, which work like the raising and lowering operators of a harmonic oscillator; bosons and fermions call for different commutation relations between these operators. (The connection between spin and statistics is justified, but the theorem isn't proved, and Teller admits that it remains a bit mysterious.) Chapter four is fairly standard material, about what the theory looks like for free, non-interacting quanta, and the various procedures for getting there, either from the Fock space formalism or by quantizing classical theories. Teller presents the Fock space route first, then the more traditional approaches of "field quantization" (a prescription for turning a classical field, one of unproblematic "c-numbers", into a quantum field, one of non-commuting "q-numbers") and "second quantization" (applying field quantization to the wave-functions we get from solving the Schrödinger, Klein-Gordon, Dirac, Weyl, etc. equations), emphasizing that both are frankly analogical, and that they come to the same thing in the end. Since all of this is intended to be relativistically correct, there must be no interference between measurements made at points separated by space-like intervals, which constrains the commutation relations among the field operators. Traditionally, this is called "causality" or "microcausality," and Teller here follows tradition, but I'm far from convinced this has anything to do with causation at all; at the least, the book could have benefited from some analysis of this point. (Cf. Weinberg's remarks on the same question.)
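
For readers who want the formulas behind that summary, here is a minimal sketch of the occupation-number formalism in conventional notation (mine, not Teller's). A Fock basis state |n_1, n_2, \ldots\rangle records how many quanta occupy each single-particle state, and the raising and lowering operators act just as they do for the harmonic oscillator:

    a_k^{\dagger} |n_1, \ldots, n_k, \ldots\rangle = \sqrt{n_k + 1}\, |n_1, \ldots, n_k + 1, \ldots\rangle ,
    a_k |n_1, \ldots, n_k, \ldots\rangle = \sqrt{n_k}\, |n_1, \ldots, n_k - 1, \ldots\rangle ,

with commutation relations [a_j, a_k^{\dagger}] = \delta_{jk} for bosons and anticommutation relations \{b_j, b_k^{\dagger}\} = \delta_{jk} for fermions; the latter force each n_k to be 0 or 1, which is the exclusion principle.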

Chapter five is more philosophy: in what sense is quantum field theory a theory about fields at all? The folklore replies "it's a theory about operator-valued fields," an answer in which Teller finds no more than a grain of truth. His answer is somewhat involved, but it does have the merit of making a number of otherwise-puzzling phenomena seem sensible, and of once and for all disposing of the whole "wave-particle duality" meshuggaas.

In chapter six we finally "turn on the interactions," and see step by step where Feynman diagrams come from. The first step is the introduction of the "interaction picture," a sort of hybrid of the Schrödinger and Heisenberg pictures of ordinary quantum mechanics. This leads naturally to representing scattering by a single operator, the S-matrix, which can be written as a kind of power series in terms of the interaction Hamiltonian. Arriving at the right form for this Hamiltonian is a mix of analogy with classical physics and other field theories, intuition, luck, and a trick called normal ordering, a prescription for re-arranging the order in which raising and lowering operators appear in the Hamiltonian "whose effect is exactly to subtract any nonzero vacuum expectation value". The oral tradition justifies this on the grounds that "in setting up a quantum field theory we are guided by generalizing on classical theories", where the order of terms is strictly arbitrary and conventional, so juggling the order of corresponding terms in the quantum theory is entirely legitimate --- "we must try out the various orders until we get the right one." Teller essentially agrees, emphasizing that in designing the interaction Hamiltonian we are proceeding "analogically, not logically", and that equivalent chicanery after having fixed on a Hamiltonian would be much more serious.
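
In standard notation (a sketch of the usual textbook form, not a quotation from Teller), the S-matrix is the time-ordered exponential of the interaction-picture Hamiltonian, expanded as the Dyson series

    S = T \exp\!\left( -i \int dt\, H_I(t) \right) = 1 + (-i) \int dt_1\, H_I(t_1) + \frac{(-i)^2}{2!} \int dt_1\, dt_2\, T[ H_I(t_1) H_I(t_2) ] + \cdots ,

and normal ordering simply moves all lowering operators to the right of all raising operators, e.g. {:}\,a a^{\dagger}\,{:} = a^{\dagger} a for a single bosonic mode, so that \langle 0 | {:}\,a a^{\dagger}\,{:} | 0 \rangle = 0 where \langle 0 | a a^{\dagger} | 0 \rangle = 1 would not vanish.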

Having massaged the interaction Hamiltonian into an agreeable condition, and in particular reduced it to a bundle of raising and lowering operators, we can use it to calculate the terms in the S-matrix and so observables like cross-sections and reaction rates. This demands a mess of algebra and book-keeping, the details of which absorb most books on field theory. Teller, however, goes over this very briskly before turning to the Feynman diagrams. We can establish a one-to-one correspondence between terms appearing in our expansions for elements of the S-matrix, and various sorts of lines and vertices in diagrams. Each quantum in the initial state gets an "external" line of its own, as does each final-state quantum. Every interaction is marked by a vertex. Vertices are connected to each other by "internal" lines, which are held to be a sort of particle "propagator". "The power of [Feynman diagrams] lies, first, in the uniform rules that associate values of terms in expressions [for elements of the S-matrix] with the various lines and vertices of a diagram. The power lies, further, in the fact that one merely has to write down all topologically distinct diagrams representing the process in question in order to get all the terms to a given order for a process, with the number of vertices corresponding to the order in the S-matrix operator expansion." In other words, Feynman diagrams make quantum field theories useful for mere mortals, and the bulk of most QFT books is devoted to techniques for coaxing the most information out of diagrams or explaining different sets of Feynman rules belonging to different theories.
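
As an illustration of the correspondence (for a free scalar field, in conventional notation; the example is mine, not Teller's), the internal line joining two vertices at x and y stands for the Feynman propagator, the vacuum expectation value of a time-ordered product of field operators,

    D_F(x - y) = \langle 0 |\, T\, \phi(x) \phi(y)\, | 0 \rangle = \int \frac{d^4 k}{(2\pi)^4}\, \frac{i\, e^{-i k \cdot (x - y)}}{k^2 - m^2 + i\epsilon} ,

while each vertex carries a factor of the coupling constant, one for every power of the interaction Hamiltonian, i.e. one for each order in the S-matrix expansion.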

The internal lines, as I've mentioned, are usually called "particle propagators", and we usually talk about them as though they represented "virtual particles" created or destroyed at vertices. This sort of extreme realism about the diagrams is fostered not just by talking about "creation" and "annihilation" operators, but by the fact that the diagrams look like actual recordings of particle tracks, as you might see them in a bubble-chamber, or reconstruct them in a modern detector. Nonetheless, Teller rightly "counsels resistance to this way of thinking, which [is] misleading in the extreme." This is because "one too easily overlooks the fact that the expression graphically represented by a Feynman diagram is only a component in a much larger superposition. Typical diagrams appear to describe events of creation of `virtual' quanta, followed by propagation to new space-time locations where the virtual quanta are subsequently annihilated. Each diagram seems so vividly to depict events of creation, propagation, and annihilation that one is tempted to see these events as actually occurring `parts' of a larger process. But there is danger here in equivocating on the word `part'." Teller here draws a distinction between "mereological parts", what I'd be tempted to call parts proper ("such as [a chair's] back and each of its four legs") and merely analytic parts relative to a basis: "The analytic parts of a vector relative to a basis are its components in the basis. . . . Clearly there is some sense in which a wave form with one bump is `composed' of the sine and cosine wave functions into which the one-bump wave can be Fourier analyzed. But it seems dubious to say that the analyzing pure wave forms are `parts' of the analyzed wave in anything like the sense in which the leg of a chair is a part of the chair." The propagators belonging to internal lines in Feynman diagrams are much more like Fourier components than chair-legs, since each one of them is only a small component in a vast superposition --- over all the different space-time points where the vertices anchoring the propagator could be, over all distinct diagrams of the same order, over all diagrams of all different orders that contribute to a process, and then, in the S-matrix itself, integrated again over all of space-time. --- It must be said that, while Teller is pretty convincing that Feynman diagrams "should not be taken literally," he doesn't give an alternative interpretation, an explanation of why those internal lines should look so much like photons or electrons or gluons or whatnot; perhaps this part of the theory is best left uninterpreted.
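
A toy instance of "analytic parts", of my own devising, may help: on the interval [0, \pi] the one-bump wave form \sin^3 x is identically equal to the superposition

    \sin^3 x = \tfrac{3}{4} \sin x - \tfrac{1}{4} \sin 3x ,

yet no one would want to say that a pure \sin 3x wave is literally present in the bump the way a leg is present in a chair; the internal lines of a Feynman diagram are "parts" of a scattering process only in this attenuated, basis-relative sense.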

The last chapter explains renormalization in twenty pages, assuming nothing more than an understanding of integration, and as such is a true tour de force of exposition. As is well-known, field theories in general contain infinite and definitely unphysical terms; these correspond to Feynman diagrams where some of the internal lines form loops, which leads to diverging integrals over all momenta from zero to infinity. The way we handle these is to say that (e.g.) an electron which interacted with nothing would have a certain "bare" value for its mass and charge, but that we only observe it "dressed", or "renormalized," taking into account the interaction; in other words, the bare mass is the observed, renormalized mass minus that infinite integral. Since we never observe the bare mass, it is perhaps not completely illegitimate to make it take on whatever value is needed to fit experiment. The conviction that this isn't complete ad-hoc theory-saving can be bolstered by taking the "cut-off" approach, which is to say that we really have no idea what happens at really huge momenta, but it shouldn't be unduly important, so instead of integrating from zero to infinity we just go out to some very large cut-off value, canonically called L, and so we use a bare mass which depends on L.
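
Schematically (my sketch of the standard story, not Teller's notation), if \delta m(L) is the divergent self-energy integral cut off at momentum L, then

    m_{\rm obs} = m_{\rm bare}(L) + \delta m(L), \qquad \delta m(L) \propto \alpha\, m \ln(L/m) \to \infty \ \mbox{as}\ L \to \infty ,

so the unobservable bare mass m_{\rm bare}(L) is simply chosen, for each value of the cut-off, to make the sum come out at the measured value.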

But it gets worse. There are convergent (analytic!) parts in the divergent integrals, which generally are proportional to some power of the momentum exchanged between interacting particles, q, and these have to be separated out and retained while the divergent parts get renormalized into the observed mass. The effect of this is to make effective mass and charge depend on q, and so on the distance-scale at which particles interact. (In QED, at least, the closer you get to a particle, the larger its effective charge. The folklore explains this as the effect of shielding by a cloud of virtual particle-antiparticle pairs, but we've already seen how dubious the idea of virtual particles is.) So really we have to worry about understanding not only the cut-off procedure but also the uses of q. Teller does a fine job of showing how talk of the bare mass, charge, etc. can be understood as tacit talk about a series of effective bare masses for higher and higher values of L. As to q, we pretty much have to give up the notion of a charge-independent-of-q (though the q=0 value, when it makes sense --- it doesn't, in some parts of quantum chromodynamics --- has obvious importance); in fact the study of how things scale with q, the study of "renormalization group" methods, is now very big in field theory, having spilled over from the theory of phase transitions and critical phenomena in condensed matter theory, and on into bifurcation theory in dynamics.
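
The QED case gives the flavor (a standard one-loop result, quoted here on my own authority rather than Teller's): at momentum transfers large compared to the electron mass, the effective fine-structure constant runs as

    \alpha_{\rm eff}(q^2) \approx \frac{\alpha(0)}{1 - \dfrac{\alpha(0)}{3\pi} \ln\!\left( q^2 / m_e^2 \right)} ,

so the measured charge creeps upward as one probes shorter and shorter distances, which is exactly the behavior the virtual-pair "shielding" story is meant to explain.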

It should be, and is, emphasized that, once renormalized, quantum field theories, especially QED, the best-developed among them, are extraordinarily good at matching experimental data --- as in, to ten or more significant figures. Clearly, renormalization is a good way of saving the phenomena, but then, so were epicycles. Most physicists probably accept renormalization in the spirit of "it works, so it must make some sort of sense." A few go so far as to claim that things like the bare mass really are infinite --- though, as Teller says, "given the current state of mathematics, the real-infinities approach makes no sense." Teller distinguishes between accepting the cut-offs approach and what he calls "the mask of ignorance" approach, though in practice I don't think most physicists do. The mask of ignorance argument runs as follows: The divergences all come from letting momenta, and so energies, run all the way up to infinity, but on various grounds we can be reasonably sure that none of our theories works at very high energies. Somewhere out there in theory-space is a more accurate theory which handles the very high energies and does not have divergent terms; presumably it would give the same results as our current theories in the low-energy regimes where the latter are accurate, and in addition would let us calculate things like the effective electron mass, rather than simply fitting it to data. Until we find it, however, hiding our ignorance in measured numbers is the best we can do, and in fact it's the sort of thing we're forced to do all the time, e.g. with specific heats which could, in principle, be calculated from statistical mechanics, but in practice are just measured. Renormalization is a convenient tool for managing without the correct theory, and (as Teller notes), since the correct theory would doubtless be even more of a calculational bitch than current field theories, it probably won't go away even once we have it.

An Interpretive Introduction to Quantum Field Theory will not teach anyone how to do any practical field-theoretic calculations, but it is invaluable for clarifying what field theory is about, and should be required reading in all QFT courses. (Teller gives end-of-chapter problems, evidently aimed at a class in philosophy, not physics.) In addition to those larval physicists, it should be read by philosophers of science and other science-studiers; not the least of its merits is showing just how valuable a contribution outsiders can make to even very esoteric scientific disciplines.


x + 176 pp. including bibliography and index, nine Feynman diagrams and one sketch of a bump.
Philosophy of Science / Physics
Currently in print as a paperback, US$16.95, ISBN 0-691-01627-5 [buy from Powell's], and as a hardback, ISBN 0-691-07408-9 [buy from Powell's]
11 July/8 August 1997