The Bactra Review: Occasional and eclectic book reviews by Cosma Shalizi   170

Mathematics of Epidemics on Networks

From Exact to Approximate Models

by István Z. Kiss, Joel C. Miller and Péter L. Simon

Springer, 2017 (doi: 10.1007/978-3-319-50806-1)


Society Prepares the Plague

Modern mathematical models of epidemic diseases go back to the first decades of the 20th century. The root idea is to partition the population (of hosts!) into types, or "compartments": say, the "susceptibles" who don't have the disease yet but who could catch it, the "infectious" who have the disease and can pass it on, and the "recovered" (or, more ominously, "removed") who have had the disease but are no longer able to transmit it. Individuals move between these compartments at specified rates, which can depend on how many hosts there are in each compartment. Thus, in the classical models, recovery is spontaneous: each infected host has a certain probability per unit time of recovering (of moving from the "I" compartment to the "R" compartment), but infection happens with some probability when a susceptible person encounters an infectious one, and such encounters are more frequent when there are more infectious people. If we write $S(t)$, $I(t)$ and $R(t)$ for the number of people in each compartment at time $t$, the classical "SIR" model is a system of three ordinary differential equations: \[ \begin{eqnarray} \frac{dS}{dt} & = & -\tau S(t) I(t)\\ \frac{dI}{dt} & = & \tau S(t) I(t) - \gamma I(t)\\ \frac{dR}{dt} & = & \gamma I(t) \end{eqnarray} \] Here $\tau$ and $\gamma$ are constants, reflecting, respectively, the rate of transmission from an infectious to a susceptible host, and the rate at which infected hosts are removed. From what I've said already, you can no doubt work out the variant SIS, where infectious hosts can become susceptible again, or the SEIR model, where there is a stage between being exposed to the disease and becoming infectious.
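To make this concrete, here is a minimal sketch of integrating the SIR equations numerically; the rate constants are arbitrary illustrations, and rescaling the compartments to population fractions is my convenience, not anything the model requires.

```python
# A minimal sketch: numerical integration of the classical SIR equations.
# tau and gamma are illustrative values, not fitted to any real disease.
from scipy.integrate import solve_ivp

tau, gamma = 0.3, 0.1  # transmission and removal rates

def sir(t, y):
    S, I, R = y
    return [-tau * S * I,             # dS/dt
            tau * S * I - gamma * I,  # dI/dt
            gamma * I]                # dR/dt

# One percent initially infected, in units of population fractions.
sol = solve_ivp(sir, t_span=(0, 200), y0=[0.99, 0.01, 0.0])
print(sol.y[:, -1])  # approximate final sizes of S, I, R
```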

As I said, these models are deterministic (once we fix the initial state $(S(0), I(0), R(0))$, the state at all later times is determined) and aggregate --- they only look at the over-all numbers of hosts in each compartment, without regard to any kind of internal structure in the population. (By analogy to chemistry, this is often called a "mass-action" assumption.) Of course, as my talk of "probability" hinted at, it's easy to come up with an aggregate stochastic model, where the numbers change randomly, but with probabilities that reflect the current state. (Relating such "population processes" to their deterministic limits is an old problem, albeit one which continues to show new faces.) It's also easy to develop an individual-level stochastic model, where we keep track of which compartment each host is in, and let hosts move between compartments at rates which depend on the aggregate population. Of course, the state space for such a model is very large (with $n$ hosts, it'd have $3^n$ states in the SIR model), so one would usually just go back to the aggregate level.
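As an illustration of the aggregate stochastic version, here is a sketch simulating it event by event with the standard Gillespie algorithm. The frequency-dependent form $\tau S I / n$ for the infection rate is my choice of convention, and the parameters are arbitrary.

```python
# A sketch of the aggregate stochastic SIR model, simulated event by event
# with the standard Gillespie algorithm.  Parameters are illustrative only.
import random

def gillespie_sir(n, i0, tau, gamma, rng=random.Random(42)):
    S, I, R, t = n - i0, i0, 0, 0.0
    while I > 0:
        rate_infect = tau * S * I / n  # frequency-dependent contact rate
        rate_recover = gamma * I
        total = rate_infect + rate_recover
        t += rng.expovariate(total)    # exponential waiting time to next event
        if rng.random() * total < rate_infect:
            S, I = S - 1, I + 1        # infection: S -> I
        else:
            I, R = I - 1, R + 1        # recovery: I -> R
    return t, R                        # duration and final outbreak size

print(gillespie_sir(n=1000, i0=5, tau=0.3, gamma=0.1))
```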

The advantage of the individual-level stochastic model comes from the ability to add population structure, specifically networks. It seems reasonable to assume that Irene's probability of catching the disease will be much more affected by the status of her friend Joey than by the status of Karl, with whom Irene has no contact. But under the "mass-action" assumption, an infectious Joey increases Irene's risk just as much (or as little) as an infectious Karl. Again, it seems reasonable that Alice, a hermit who lives alone with her cat and barely speaks to her drug dealer, should be much less exposed to the disease than Babur, a social butterfly constantly surrounded by a variety of ever-shifting entourages. Individual-level network models let us capture both effects: we just say that if Irene currently has $I_i(t)$ infectious neighbors, her probability per unit time of catching the disease is $\tau I_i(t)$. This means that, conditional on the state of her neighbors, the state of distant people in the network is irrelevant to Irene. It also accords with our intuitions about Alice and Babur. Typically, but not necessarily, we assume that the network stays fixed over time*.

Simulating a model like this is straightforward enough. (Though, as usual, there are subtleties about computational efficiency; Appendix A of this book is a good crash course.) The difficulty is that, purely on the basis of simulations, it's hard to draw general conclusions, especially when the state-space is large ($3^n$ again for SIR models). This means that it would be nice to find some intermediate level of description, between the fully-disaggregated, individual-level network simulation, and the fully-aggregated, mass-action model with its closed-form equations. If we have to sacrifice some accuracy for efficiency of prediction, well, that's emergence...
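To fix ideas, here is a deliberately naive sketch of such a simulation --- it rebuilds the whole event list at every step, which is exactly the sort of inefficiency Appendix A teaches you to avoid. The random graph and parameters are arbitrary stand-ins for a real contact network.

```python
# A naive event-driven SIR simulation on a network (correct but inefficient;
# see the book's Appendix A for how to do this properly).
import random
import networkx as nx

def network_sir(G, tau, gamma, seed_node, rng=random.Random(1)):
    status = {v: "S" for v in G}
    status[seed_node] = "I"
    t = 0.0
    while any(s == "I" for s in status.values()):
        # Each infectious node recovers at rate gamma, and each S-I edge
        # transmits at rate tau (so Irene's total hazard is tau * I_i(t)).
        events = []
        for v, s in status.items():
            if s == "I":
                events.append((gamma, "recover", v))
                events.extend((tau, "infect", u) for u in G[v] if status[u] == "S")
        total = sum(rate for rate, _, _ in events)
        t += rng.expovariate(total)
        x = rng.random() * total       # pick an event proportionally to its rate
        for rate, kind, v in events:
            x -= rate
            if x <= 0:
                status[v] = "R" if kind == "recover" else "I"
                break
    return t, sum(s == "R" for s in status.values())

G = nx.erdos_renyi_graph(500, 0.02, seed=3)
print(network_sir(G, tau=0.3, gamma=0.1, seed_node=0))
```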

This is where Mathematics of Epidemics on Networks comes in. (Remember, this is a review of a book about mathematics of epidemics on networks.) Chapter 1 quickly covers the basics of both compartmental epidemic models and of networks. Chapter 2 is where the work starts. It begins with the full dynamics of an individual-level stochastic model. The probability distribution over states evolves following a (deterministic) ordinary differential equation, the "master equation", in a high-dimensional space. The book then looks at how these states could be "lumped" --- how distinct configurations of disease over the network can be treated as equivalent, so that we don't have to keep track of them as separate states. A key role here is played by symmetries of the network, especially "automorphisms" (ways of re-numbering the nodes which preserve edge information). While mathematically elegant, and applicable to a wider class of dynamics than just epidemic models, the sad truth is that most interesting networks just don't have enough useful symmetries to push this very far. (Approximate symmetries, or approximate lumpings, are not considered.)

Chapter 3 introduces the approach that the bulk of the book follows, of approximate "closures". Because the disease spreads over the network, we know that the number of new infections is (in expectation) going to be proportional to the number of pairs of neighboring susceptible and infected nodes, symbolically $S-I$. The rate of change for such pairs will in turn depend on the numbers of triples in configurations like $S-S-I$ or $I-S-I$. The rates of change for triples will depend on the number of quadruples, and so on and so on. In a closure approximation, we "close off" this escalating hierarchy, by expressing the number of nodes in larger configurations in terms of the number in smaller configurations. A very simple closure, for instance, would say that the number of $S-I$ pairs was proportional to the number of $S$ nodes times the number of $I$ nodes. A more complicated one might say (for instance) that the number of $S-S-I$ triples was proportional to the product of the number of $S-S$ pairs and the number of $S-I$ pairs (divided by the total number of $S$ nodes). As my examples suggest, closures typically correspond to assumptions about statistical independence (as in the first example) or conditional independence (as in the second).
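To spell out the probabilistic reading of that second example (my gloss, in my notation, not the book's): if $a$ and $b$ are two neighbors of a susceptible node $s$, the triple closure amounts to assuming their states are conditionally independent given the state of $s$, \[ \Pr(X_a = S, X_b = I \mid X_s = S) \approx \Pr(X_a = S \mid X_s = S) \Pr(X_b = I \mid X_s = S), \] and multiplying through by the relevant counts turns this into $[SSI] \propto [SS][SI]/[S]$.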

Chapter 3, as I said, introduces the idea of closures, but stays at the level of tracking the disease state of each individual node in the network. Chapter 4 introduces "mean-field" closures for networks where all nodes have the same number of neighbors ("homogeneous degree"), say $m$. In this setting, we can still keep track of the total number of nodes in each compartment, and write differential equations for their expected numbers. For the SIR model, this gives \[ \begin{eqnarray} \frac{d[S]}{dt} & = & -\tau [SI]\\ \frac{d[I]}{dt} & = & \tau[SI] -\gamma[I]\\ \frac{d[R]}{dt} & = & \gamma[I] \end{eqnarray} \] where I'm following the authors in using $[ \cdot ]$ to indicate expectations, and $[SI]$ is the expected number of susceptible-infected pairs. The time-evolution of the latter requires tracking the time-evolution of triples: \[ \begin{eqnarray} \frac{d[SI]}{dt} & = & -\gamma[SI] + \tau([SSI] - [ISI] - [SI])\\ \frac{d[SS]}{dt} & = & -2\tau[SSI]\\ \frac{d[II]}{dt} & = & -2\gamma[II] + 2\tau([ISI] + [SI]) \end{eqnarray} \] (I'm omitting the equations for $[SR]$, $[IR]$ and $[RR]$ out of boredom.) The "closure" part comes in at this stage: rather than writing out equations for the triples in terms of the quadruples, we "close" this system of equations by writing the pairs (or triples) in terms of the singles (or pairs). Closing at the "single level" means saying (hoping!) that \[ [SI] \approx \frac{m}{n}[S][I] \] while closing at the "pair level" ("pairwise closure") means that \[ \begin{eqnarray} [SSI] & \approx & \frac{m-1}{m}\frac{[SS][SI]}{[S]}\\ [ISI] & \approx & \frac{m-1}{m}\frac{[SI]^2}{[S]} \end{eqnarray} \] Either approximation gives us a self-contained (closed) system of ordinary differential equations, which we can solve (semi-analytically or numerically). Numerical solutions tell us about how the system will evolve from given initial conditions. Analytical results concern things like the stability of the un-infected, all-$S$ state, how large $\tau$ must be compared to $\gamma$ to get an epidemic going (the "epidemic threshold"), etc.
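Concretely, the pairwise-closed system can be handed to any ODE solver; here is a sketch, where the parameters, and the rough initial pair counts (assuming the seed infections are placed uniformly at random), are my own illustrative choices.

```python
# A sketch of integrating the pairwise-closed SIR equations for a network
# of homogeneous degree m.  All numbers are illustrative.
from scipy.integrate import solve_ivp

n, m = 10_000, 4         # population size, common degree
tau, gamma = 0.3, 0.1    # transmission and removal rates

def pairwise_sir(t, y):
    S, I, SI, SS = y
    kappa = (m - 1) / m
    SSI = kappa * SS * SI / S    # pairwise closure for [SSI]
    ISI = kappa * SI ** 2 / S    # pairwise closure for [ISI]
    return [-tau * SI,
            tau * SI - gamma * I,
            -gamma * SI + tau * (SSI - ISI - SI),
            -2 * tau * SSI]

i0 = 10
y0 = [n - i0, i0,
      m * i0 * (n - i0) / n,     # rough initial [SI] count
      m * (n - i0) ** 2 / n]     # rough initial [SS] count
sol = solve_ivp(pairwise_sir, (0, 150), y0, rtol=1e-8)
print(max(sol.y[1]))  # peak expected number of infectious nodes
```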

It's worth remarking here that there are two approximations involved in the mean-field system of ODEs. One, to which the book gives a lot of attention, is that of the closures: we're not really counting the number of triples accurately, and that introduces some error. The other, which the book doesn't attend to so much, is that we're approximating a stochastic process (the actual individual-level process) by its expected value. We can hope that both sources of error will shrink as the size of the network $n\rightarrow\infty$. Numerical studies, comparing simulations of the individual-level model to solutions of the mean-field equations, confirm this hope, especially for the pairwise closure, but they don't break out the two sources of error separately.

Of course, networks where all nodes have the same number of otherwise-random neighbors are pretty unrealistic. Chapter 5 thus extends the mean-field approach to networks where nodes have varying degree, but are otherwise randomly connected. The trick here is to keep track of the number of nodes in each compartment separately for each degree, so $[I_k](t)$ is the expected number of infectious nodes of degree $k$ at time $t$. So, for instance, \[ \begin{eqnarray} \frac{d[S_k]}{dt} & = & -\sum_{l=1}^{M}{\tau[S_k I_l]}\\ \frac{d[I_k]}{dt} & = & \sum_{l=1}^{M}{\tau[S_k I_l]} - \gamma[I_k]\\ \frac{d[R_k]}{dt} & = & \gamma[I_k] \end{eqnarray} \] where $M$ is the highest degree of any node in the network. This eventually forces us to consider the dynamics of triples like $[S_k S_l I_j]$, which we close in terms of the singles or pairwise as before. The pairwise mean-field equations for degree-heterogeneous but otherwise random networks are remarkably accurate for a wide range of parameters, and can even work well for some real-world networks. (By "accurate" and "work well", here, I mean "approximate simulations of the individual-level model".) The pairwise approximation is still tractable enough to let us get some analytical results, such as calculating the "epidemic threshold", the minimum advantage transmission must have over recovery, in terms of the average degree and the variance of degrees.
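To give the flavor of such results: for Markovian SIR on a degree-heterogeneous but otherwise random network, the standard form of the threshold (which, as chapter 6 notes, the pairwise model also recovers; I'm quoting the textbook-standard version rather than the book's exact statement) is \[ \frac{\tau}{\tau + \gamma} > \frac{\langle k \rangle}{\langle k^2 \rangle - \langle k \rangle}, \] where $\langle k \rangle$ and $\langle k^2 \rangle$ are the mean and mean-square degree. A heavy-tailed degree distribution makes $\langle k^2 \rangle$ large, and so pushes the threshold toward zero.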

Chapter 5 goes on to consider some further simplifications of the basic mean-field models, made possible by exploiting various symmetries. It also (section 5.6) introduces an alternative closure, called the "effective degree" model, where we keep track of the number of nodes with different configurations of neighbors, so $S_{si}(t)$ is the number of susceptible nodes with $s$ susceptible and $i$ infectious neighbors at time $t$. (Removed neighbors don't matter, hence "effective degree".) The time-evolution of these quantities again involves triples, which again can be closed in terms of the number of nodes in each local configuration.

Chapter 6 considers percolation-theory-based models. The idea here is that since each node can only be infected for a limited amount of time before being removed, we can associate with each edge some (average) probability of the disease passing along it from neighbor to neighbor. We say that the edge is "open" with that probability, and "closed" otherwise. An outbreak of the disease corresponds to a cluster of contiguous open edges. An epidemic corresponds to a cluster "percolating across the network", or growing with the over-all size of the network towards $\infty$ (without necessarily involving every single node). Clever probabilistic arguments, originating in percolation theory in physics, and ultimately in the Galton-Watson model for branching processes in biology, let us calculate when such epidemics will occur, and how big they will get. (Remarkably, the results for the epidemic threshold match those from the pairwise mean-field model.) This chapter also introduces "edge-based compartmental models", where, as the name suggests, we categorize each edge in the graph as belonging to one of the compartments $S-S$, $S-I$, $I-R$, etc., and track the transitions of edges across compartments. (Again, we need a closure.)
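For the Markovian dynamics used throughout, that per-edge probability (the "transmissibility") has a simple closed form, since transmission at rate $\tau$ races an exponential recovery clock at rate $\gamma$: \[ T = \frac{\tau}{\tau + \gamma}, \] while for an infectious period of fixed length $d$ it would instead be $T = 1 - e^{-\tau d}$. (These formulas are standard; I'm quoting them for orientation, not reproducing the book's derivation.)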

Chapter 7 establishes a hierarchy of accuracy across the different approximations. The pairwise mean-field and effective-degree models are distinct from each other, and each can fail in the wrong circumstances (though there are many where they're both good approximations). In the right limits, both models are equivalent to the edge-based compartmental model. If transmission rates (per edge) are very low, while the average degree is very high, these three models all reduce to the degree-heterogeneous mean-field model closed at the level of singles. And that, in turn, reduces to the homogeneous mean-field model as the dispersion of degree goes to zero.

Chapter 8 considers some models where the network changes in response to the spread of the disease. The models treated here are simply formulated, but the analysis quickly gets messy.

Chapter 9 looks at how to break the Markov assumption for the dynamics, which implies that the amount of time spent in any given state is exponentially distributed. (In particular, it's often more plausible to have a fixed-duration infectious period, or one whose duration is peaked around some typical value.) The approach here is to build more complicated equations, rather than (say) adding more compartments $I^{(1)}, I^{(2)}, \ldots, I^{(d)}$, so that an infected individual automatically progresses through an infectious period of duration $d$.
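For contrast, the compartment-chain alternative (usually called the "linear chain trick"; this gloss is mine, not the book's) works like this: a chain \[ I^{(1)} \rightarrow I^{(2)} \rightarrow \cdots \rightarrow I^{(d)} \rightarrow R, \] with each transition occurring at rate $d\gamma$, makes the total infectious period a sum of $d$ independent exponentials, i.e., Erlang-distributed with mean $1/\gamma$ and variance $1/(d\gamma^2)$. This is peaked around a typical duration, and approaches a fixed-length infectious period as $d \rightarrow \infty$.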

Chapter 10 looks at large-population limits, where we ignore the fact that the number of individuals in each compartment is necessarily an integer and treat it instead as a continuous quantity, leading to partial differential equations ("Fokker-Planck" equations) for the evolution of the probability distribution. This chapter reverts, as in chapter 2, to considering more general dynamics on the network than just epidemic models.
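For readers who haven't met them: in one dimension, a Fokker-Planck equation for the density $p(x,t)$ of a continuous state $x$ (here, say, the fraction infected) takes the generic form \[ \frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\left[ a(x)\, p(x,t) \right] + \frac{1}{2} \frac{\partial^2}{\partial x^2}\left[ b(x)\, p(x,t) \right], \] with drift $a$ and diffusion $b$ inherited from the jump rates of the underlying stochastic model; the book's equations are of this type, though in more dimensions and in its own notation.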

Finally, chapter 11 compares the various approximate models to individual-level simulations on real-world networks. (There is not, unfortunately, any attempt to compare the models to data on actual diseases, or to estimate their parameters.) One of the things this makes clear is that the approximations get in trouble in the real world from two sources: correlations between the degrees of neighboring nodes, and clustering, a.k.a. the existence of short cycles in the graph. While the book doesn't dwell on it (except for some exercises), the pairwise heterogeneous mean-field closure can be extended to handle degree correlations; clustering is more of an open problem. I would also have liked to see some treatment of node-level heterogeneity in rates of transmission or recovery, which would matter if it's correlated with degree.

The literature on epidemic models on networks is scattered across epidemiology, multiple branches of mathematics, and (for reasons which seemed to make sense at the time) physics. The book does an excellent job of bringing it together in a common formalism and language, clearly explained and clearly motivated. The implied reader has a pretty firm grasp on the theory of ordinary differential equations, including not just the basic facts about existence, uniqueness and finding solutions numerically, but also the existence and stability of fixed points, and how to find (simple) bifurcations. Elementary probability, including some knowledge of Markov chains, is also presupposed. Graph theory, or knowledge of actual social and biological networks, is much less important. This is an exemplary review of pretty much the state of the art (as of 2017). While I've done the usual academic thing of saying what more I'd have liked, I would have no hesitation in using it as the basis of a semester-long theoretical course on epidemics (or, more generally, dynamics) on networks, and it's well-suited to self-study.


Niggling: Many of the figures rely on color for important distinctions, and the colors do not come through well when translated into grey-scale, as by Springer's cheap "MyCopy" editions. (Whether the full-price edition is printed in color, I couldn't say.) On the other hand, the PDF available through SpringerLink (if you subscribe, or pay an extortionate price), while it has the figures in color and was clearly prepared with a hyperlink-aware version of LaTeX, has no actual hyperlinks, so you can't click to follow cross-references or citations. I have been reading math books long enough to be very aware of how petty such complaints are.

*: If we imagine drawing a new, independent network at each moment in time, one in which every node is equally likely to be connected to every other node, then we recover the model where transition rates depend only on population aggregates.


xviii+413 pages, many color and black-and-white diagrams, bibliography, index

ISBN 978-3-319-50805-4

Mathematics


2 March 2020 (broken links fixed 15 March)