
Graph Limits and Infinite Exchangeable Arrays

03 Jun 2016 09:23

An exchangeable random sequence $X$ is a sequence of random variables, $ X_1, X_2, \ldots $, whose distribution is invariant under permutation of the indices. All such sequences are formed by taking mixtures of independent and identically-distributed (IID) sequences. (See Exchangeable Random Sequences.) An exchangeable random array, $G$, is simply a matrix or array of random variables $ G_{ij} $ whose distribution is invariant under permutation of row and column indices ($i$ and $j$). I mostly care about what are sometimes called jointly exchangeable arrays, where the same permutation is applied to the rows and the columns. If we can apply different permutations to rows and to columns and still get invariance in distribution, then the process is separately exchangeable; this however does not interest me so much, for reasons which I hope will be clear in a moment.

As I said, infinite-dimensional exchangeable distributions are all formed by taking mixtures of certain basic or extremal distributions, which are the infinite-dimensional IID distributions. To generate an exchangeable sequence, one first randomly draws a probability law from some prior distribution, and then draws from that law independently until the end of time. (Again, see Exchangeable Random Sequences.) Is there an analogous set of extremal distributions for exchangeable arrays? Well, yes, or else I wouldn't have asked the question...

It's easiest to understand what's going on if we restrict ourselves to binary arrays, so $ G_{ij} $ must be either 0 or 1. One very important instance of this — or at least one I use a lot — makes $G$ the adjacency matrix (or "sociomatrix") of a network, with $ G_{ij} = 1$ if there is an edge linking $i$ and $j$, and $ G_{ij}=0 $ otherwise.

For each $i$, draw an independent random number $ U_i $ uniformly on the unit interval. Now, separately, fix a function $w(u,v)$ from the unit square ${[0,1]}^2$ to the unit interval $[0,1]$, with the symmetry $w(u,v) = w(v,u)$. Finally, set $ G_{ij} = 1$ with probability $ w(U_i, U_j) $, independently across dyads $ ij $. Conditional on the $ U_i$, all edges are independent, though not identically distributed unless $w$ is constant almost everywhere. Marginally, $ G_{ij}$ and $ G_{kl}$ are independent when the index pairs do not overlap (though they can be dependent given, say, $ G_{jk}$), but edges sharing a node are marginally dependent, again unless $ w$ is constant almost everywhere. Call the resulting stochastic graph $ G$ a $ w$-random graph.
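
As a concrete illustration of this recipe, here is a minimal sketch in Python; the sampler follows the construction above, while the particular $w$-function at the end is an arbitrary choice, made up purely for illustration.

    import numpy as np

    def sample_w_random_graph(n, w, rng=None):
        # Draw U_i ~ Uniform(0,1) IID, then set G_ij = 1 with
        # probability w(U_i, U_j), independently across dyads i < j.
        # Returns the symmetric 0/1 adjacency matrix.
        rng = np.random.default_rng() if rng is None else rng
        U = rng.uniform(size=n)              # node-level latent variables
        G = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                G[i, j] = G[j, i] = rng.random() < w(U[i], U[j])
        return G

    # An arbitrary symmetric w-function, purely for illustration
    w = lambda u, v: 0.5 * (u * v + (1 - u) * (1 - v))
    G = sample_w_random_graph(100, w)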

(Using the unit interval for the $U$ variables is inessential; if we have a measurable mapping $f$ from $[0,1]$ to any other space, with a measurable inverse $f^{-1}$, then set $ V_i = f(U_i) $, and \[ \Pr{\left(G_{ij} = 1\right)} = w^{\prime}(V_i, V_j) = w(f^{-1}(V_i),f^{-1}(V_j)) ~. \] So if you really want to make the variable for each node a 7-dimensional Gaussian rather than a standard uniform, go ahead.)

What are some examples of $w$-random graphs? Well, as I said, setting $w$ to a constant, say $p$, does in fact force the edges to be IID, each edge being present with probability $p$, so the whole family of Erdos-Renyi random graphs, i.e., random graphs in the strict sense, is included. Beyond this, a simple possibility is to partition the unit interval into sub-intervals, and force $w$ to be constant on the rectangles we get by taking products of the sub-intervals. This corresponds exactly to what the sociologists call "stochastic block models", where each node belongs to a discrete type or block of nodes (= sub-interval), and the probability of an edge between $i$ and $j$ depends only on which blocks they are in. Community- or module-discovery in networks is mostly based on the assumption that not only is there some underlying block model, but that the probability of an intra-block edge is greater than that of an inter-block edge, no matter the blocks; that is, $w$ is peaked along the diagonal.

Since every measurable function can be approximated arbitrarily closely by piecewise-constant "simple functions", every $w$-random graph can be approximated arbitrarily closely (in distribution) by a stochastic block model, though it might need a truly huge number of blocks to get an adequate approximation. This also gives an easy way to see that two different $w$ functions can give rise to the same distribution on graphs, so we'll ignore the difference between $w$ and $w^{\prime}$ if $w(u,v) = w^{\prime}(T(u), T(v))$, where $T$ is an invertible map from $[0,1]$ onto $[0,1]$ that preserves the length of intervals (i.e., preserves Lebesgue measure). The reason we ignore this difference is that $T$ just "relabels the nodes", without changing the distribution of graphs.
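
To make the stochastic-block-model case concrete, here is a sketch of a piecewise-constant $w$-function, reusing the sampler above; the block boundaries and edge probabilities are invented for illustration, with the matrix peaked along the diagonal as in the community-discovery setting.

    import numpy as np

    # Boundaries partitioning [0,1] into 3 sub-intervals, and a symmetric
    # matrix of within- and between-block edge probabilities (illustrative
    # numbers, peaked on the diagonal).
    boundaries = np.array([0.0, 0.3, 0.7, 1.0])
    P = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.7, 0.1],
                  [0.1, 0.1, 0.9]])

    def w_block(u, v):
        # Piecewise-constant w: look up which blocks contain u and v.
        a = min(np.searchsorted(boundaries, u, side="right") - 1, 2)
        b = min(np.searchsorted(boundaries, v, side="right") - 1, 2)
        return P[a, b]

    G = sample_w_random_graph(200, w_block)   # sampler defined above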

It's not hard to convince yourself that every $w$-random graph is exchangeable. (Remember that we see only the edges $ G_{ij}$, and not the node-specific random variables $ U_i$.) What is very hard to show, but is in fact true, is that the distribution of every infinite exchangeable random graph is a mixture of $w$-random graph distributions. Symbolically, the way to produce an infinite exchangeable graph is always to go through the recipe \[ \begin{aligned} W & \sim p\\ U_i|W & \sim_{\mathrm{IID}} \mathcal{U}(0,1)\\ G_{ij}| W, U_i, U_j &\sim \mathrm{Bernoulli}(W(U_i,U_j)) \end{aligned} \] for some prior distribution $p$ over $w$-functions.
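
A sketch of this two-stage recipe, again assuming the helpers above; the prior here, a random symmetric block-probability matrix on a fixed three-block partition, is a toy choice for illustration, not a canonical one.

    import numpy as np

    def sample_exchangeable_graph(n, rng=None):
        # Stage 1: draw W from a (toy) prior over w-functions; here, a
        # random symmetric probability matrix on three equal blocks.
        rng = np.random.default_rng() if rng is None else rng
        A = rng.uniform(size=(3, 3))
        P = (A + A.T) / 2                     # enforce W(u,v) = W(v,u)
        def W(u, v):
            a, b = min(int(3 * u), 2), min(int(3 * v), 2)
            return P[a, b]
        # Stage 2: draw a W-random graph, as before.
        return sample_w_random_graph(n, W, rng)

    G = sample_exchangeable_graph(100)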

In the exchangeable-sequence case, if all we have is a single realization of the process, we cannot learn anything about the prior distribution over IID laws. (Similarly, if we have only a single realization of a stationary process, we can only learn about the one ergodic component that realization happens to be in, though in principle we can learn everything about it.) Likewise, if we have only a single network to learn from, we cannot learn anything about the prior distribution $p$, but we can learn about the particular $W$ drawn from it, and that will let us extrapolate to other, currently-unseen parts of the network.

Here is where a very interesting connection comes in to what at first sight seems like a totally different set of ideas. Suppose I have a sequence of graphs $G^1, G^2, \ldots $, all of finite size. When can I say that this sequence of graphs is converging to a limit, and what kind of object is its limit?

Experience with analysis tells us that we would like converging objects to get more and more similar in their various properties, and one important set of properties for graphs is the appearance of specific sub-graphs, or motifs. For instance, when $ G_{ij} = G_{jk} = G_{ki} = 1$, we say that $ i,j,k$ form a triangle, and we are often interested in the number of triangles in $G$. More broadly, let $H$ be some graph with fewer nodes than $G$, and define $m(H,G)$ to be the number of ways of mapping $H$ into $G$ --- assigning to each node of $H$ a node of $G$, so that nodes linked in $H$ are matched with nodes linked in $G$. (In a phrase, the number of homomorphisms from $H$ into $G$.) The maximum possible number of such mappings is limited by the number of nodes in the two graphs. The density of $H$ in $G$ is \[ t(H,G) \equiv \frac{m(H,G)}{{|G| \choose |H|}} \] If $H$ has more nodes than $G$, we define $m(H,G)$ and $t(H,G)$ to be 0. (Actually, there are a couple of different choices for defining the allowed mappings from $H$ to $G$, such as whether the mapping must be one-to-one and whether non-edges of $H$ must also be matched by non-edges of $G$, and so for the normalizing factor in the denominator of $t$, but these end up not making much difference.)
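
For small motifs, the density can be computed by brute force. The sketch below follows one of the conventions just mentioned, counting node subsets of $G$ whose induced subgraph is isomorphic to $H$ and normalizing by ${|G| \choose |H|}$; both graphs are given as symmetric 0/1 adjacency matrices, and the example graph is generated at random just for illustration.

    import itertools
    from math import comb
    import numpy as np

    def motif_density(H, G):
        # t(H, G): fraction of |H|-node subsets of G whose induced
        # subgraph is isomorphic to H. Brute force; only sensible for
        # small H and modest G.
        h, g = len(H), len(G)
        if h > g:
            return 0.0
        count = 0
        for subset in itertools.combinations(range(g), h):
            sub = G[np.ix_(subset, subset)]
            # test isomorphism to H by trying all relabelings of H
            if any(np.array_equal(sub, H[np.ix_(p, p)])
                   for p in itertools.permutations(range(h))):
                count += 1
        return count / comb(g, h)

    # Example: triangle density in a small random graph
    rng = np.random.default_rng(0)
    A = np.triu((rng.random((30, 30)) < 0.2).astype(int), 1)
    G = A + A.T
    triangle = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)
    print(motif_density(triangle, G))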

We can now at last define convergence of a graph sequence: $G^1, G^2, \ldots $ converge when, for each motif $H$, the density sequence $t(H,G^1), t(H,G^2), \ldots $ converges. There are several points to note about this definition:

  1. If, after a certain point $n$, the graph sequence becomes constant, $G^{n+m} = G^n$ for all $m \geq 0$, then the sequence converges. This is a reasonable sanity-check on our using the word "convergence" here.
  2. A sequence of isomorphic graphs (i.e., ones which are the same after some re-labeling of the nodes) has already converged, since they all have the same density for every motif. So the definition of convergence is insensitive to isomorphisms. This is good, in a way, because isomorphic graphs really are the same in a natural sense, but bad, because deciding whether two graphs are isomorphic is computationally non-trivial; no polynomial-time algorithm for it is known.
  3. If the sequence of graphs keeps growing, then convergence of the sequence implies convergence not of the number of edges, triangles, four-stars, etc., but of their suitably-normalized densities.
  4. The definition is strongly analogous to that of "convergence in distribution" (a.k.a. "weak convergence") in probability theory. A sequence of distributions $P^1, P^2, \ldots $, converges if and only if, for every bounded and continuous function $f$, the sequence of expected values \[ P^i f \equiv \int{f(x) dP^{i}(x)} \] converges. Densities of motifs act like bounded and continuous "test functions".
  5. The limit of a sequence of graphs is not necessarily a graph. Analogously, the limit of a sequence of discrete probability distributions, like the empirical distribution of an $n$-point sample, is not necessarily discrete — it might be a distribution with a continuous density, a mixture of a continuous and a discrete part, etc. The people who developed the theory of such graph limits called the limiting objects graphons. Roughly speaking, graphons are to graphs as general probability distributions are to discrete ones.

How are graphons represented, if they are not graphs? Well, they turn out to be representable as symmetric functions from the unit square to the unit interval, i.e., $w$-functions! It is easy to see how to turn any finite graph's adjacency matrix into a $0-1$-valued $w$-function: divide the unit interval into $n$ equal segments, and make $w$ 0 or 1 on each of the resulting squares, depending on whether the corresponding nodes had an edge or not. Call this $ w_G$. It turns out, through an argument I do not feel up to even sketching today, that the density $t(H,G)$ can be expressed as an integral, depending on $H$, of the $w$-function derived from $G$: \[ t(H,G) = \int_{[0,1]^{|H|}}{\prod_{(i,j)\in H}{w_{G}(u_i,u_j)} du_1 \ldots du_{|H|}} \] This carries over to the limit: if the sequence $G^n$ converges, then \[ \lim_{n\rightarrow\infty}{t(H,G^n)} = \int_{[0,1]^{|H|}}{\prod_{(i,j)\in H}{w(u_i,u_j)} du_1 \ldots du_{|H|}} \] for some limiting function $w$. (If you are the kind of person who finds the analogy to convergence in distribution helpful, you can fill in this part of the analogy now.) We identify the limiting object, the graphon, with the limiting $w$-function, or rather with the equivalence class of limiting $w$-functions.
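
Here is a sketch of both steps: building the step-function $w_G$ from an adjacency matrix, and estimating the integral above by Monte Carlo (exact integration is easy for a step function, but the Monte Carlo version carries over unchanged to any $w$); the example graph and sample size are arbitrary choices.

    import numpy as np

    def w_from_graph(G):
        # Step-function w_G: chop [0,1] into n equal segments and set
        # w_G to G_ij on the (i, j) square.
        n = len(G)
        def w(u, v):
            i = min(int(u * n), n - 1)
            j = min(int(v * n), n - 1)
            return G[i, j]
        return w

    def motif_density_integral(H_edges, w, k, n_samples=100_000, rng=None):
        # Monte Carlo estimate of the integral of prod w(u_i, u_j)
        # over the edges (i, j) of H, with u_1, ..., u_k uniform on [0,1].
        rng = np.random.default_rng() if rng is None else rng
        total = 0.0
        for _ in range(n_samples):
            u = rng.uniform(size=k)
            p = 1.0
            for (i, j) in H_edges:
                p *= w(u[i], u[j])
            total += p
        return total / n_samples

    # Example: triangle density under the step-function graphon of G
    rng = np.random.default_rng(1)
    A = np.triu((rng.random((50, 50)) < 0.3).astype(int), 1)
    G = A + A.T
    print(motif_density_integral([(0, 1), (1, 2), (2, 0)], w_from_graph(G), 3))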

To sum up: If we start with an infinite exchangeable graph distribution, then what gets realized comes from a (randomly-chosen) extremal distribution. But the limits of sequences of graphs are, precisely, the extremal distributions of the family of exchangeable graphs. So we would seem to have the kind of nice, closed circle which makes statistical inference possible: a sufficiently large realization becomes representative of the underlying process, which lets us infer that process by examining the realization. What I am very much interested in is how to actually use this suggestion to do some concrete, non-parametric statistics for networks. In particular, it would seem that understanding this would open the way to being able to smooth networks and/or bootstrap them, and either one of those would make me very happy.

Specific points of interest:

  1. Understand how to metrize graph convergence, and efficiently calculate the metrics; use for tests of network difference.
  2. Suppose that the sequence of graphs $G^n$ are sparse, so that the number of edges per node tends to a constant (rather than growing proportional to the number of nodes). Then all motif densities tend to zero and we lose the ability to distinguish between graph sequences. What is the best way of defining convergence of sparse graphs? What does this do to the probabilistic analogs of graphons?
  3. How does this relate to the issues of projectibility for exponential-family random graph models?
  4. Given a graph sequence, when can we consistently estimate the, or a, limiting $w$-function? Bickel, Chen and Levina (below) define a set of statistics whose expected values characterize the $w$-function and which can be consistently estimated. This was extremely clever, but inverting the mapping from $w$ to those expectations looks totally intractable — and indeed they don't even try. My own feeling is that this is more of a job for smoothing than for the method of moments, but I'm not comfortable saying much more, yet; a crude sketch of the smoothing idea follows this list.
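
To illustrate what a smoothing approach might look like, here is a crude sketch: sort the nodes by degree as a rough proxy for the latent $U_i$, partition them into groups, and average edge indicators within each pair of groups. This "network histogram" style of estimate is only a sketch, not anyone's definitive method; in practice the number of blocks would have to be picked by some model-selection rule, and degree-sorting does not consistently recover the $U_i$ in general.

    import numpy as np

    def histogram_graphon_estimate(G, n_blocks):
        # Sort nodes by degree (a crude proxy for the latent U_i),
        # split into n_blocks groups, and average the edge indicators
        # over each pair of groups to get a piecewise-constant W-hat.
        order = np.argsort(G.sum(axis=1))
        groups = np.array_split(order, n_blocks)
        W_hat = np.zeros((n_blocks, n_blocks))
        for a in range(n_blocks):
            for b in range(n_blocks):
                block = G[np.ix_(groups[a], groups[b])]
                if a == b:
                    m = len(groups[a])
                    # exclude the diagonal when averaging within a group
                    W_hat[a, b] = block.sum() / max(m * (m - 1), 1)
                else:
                    W_hat[a, b] = block.mean()
        return W_hat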

See also: Characterizing Mixtures of Processes by Summarizing Statistics; Graph Theory

