Notebooks

Graph Limits and Infinite Exchangeable Arrays

17 Apr 2024 00:12

An exchangeable random sequence \( X \) is a sequence of random variables, \( X_1, X_2, \ldots \), whose distribution is invariant under permutation of the indices. All such sequences are formed by taking mixtures of independent and identically-distributed (IID) sequences. (See Exchangeable Random Sequences.) An exchangeable random array, \( G \), is simply a matrix or array of random variables \( G_{ij} \) whose distribution is invariant under permutation of row and column indices (\( i \) and \( j \)). I mostly care about what are sometimes called jointly exchangeable arrays, where the same permutation is applied to the rows and the columns. If we can apply different permutations to rows and to columns and still get invariance in distribution, then the process is separately exchangeable; this however does not interest me so much, for reasons which I hope will be clear in a moment.

As I said, infinite-dimensional exchangeable distributions are all formed by taking mixtures of certain basic or extremal distributions, which are the infinite-dimensional IID distributions. To generate an exchangeable sequence, one first randomly draws a probability law from some prior distribution, and then draws from that law independently until the end of time. (Again, see Exchangeable Random Sequences.) Is there an analogous set of extremal distributions for exchangeable arrays? Well, yes, or else I wouldn't have asked the question...

It's easiest to understand what's going on if we restrict ourselves to binary arrays, so \( G_{ij} \) must be either 0 or 1. One very important instance of this --- or at least one I use a lot --- makes \( G \) the adjacency matrix (or "sociomatrix") of a network, with \( G_{ij} = 1 \) if there is an edge linking \( i \) and \( j \), and \( G_{ij}=0 \) otherwise.

For each \( i \), draw an independent random number \( U_i \) uniformly on the unit interval. Now, separately, fix a function \( w(u,v) \) from the unit square \( {[0,1]}^2 \) to the unit interval \( [0,1] \), with the symmetry \( w(u,v) = w(v,u) \). Finally, set \( G_{ij} = 1 \) with probability \( w(U_i, U_j) \), independently across dyads \( ij \). Conditional on the \( U_i \), all edges are now independent (though not identically distributed). Moreover, \( G_{ij} \) and \( G_{kl} \) are independent, unless the indices overlap. (However, \( G_{ij}\) and \( G_{kl} \) can be dependent given, say, \( G_{jk} \).) But edges with nodes in common are not independent, nor are edges identically distributed, unless the function \( w \) is constant almost everywhere. Call the resulting stochastic graph \( G \) a \( w \)-random graph.
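To make the recipe concrete, here is a minimal Python sketch (the helper name `w_random_graph` and the particular \( w \) in the example are my own illustrative choices, assuming NumPy is available):

```python
# A minimal sketch: draw one uniform U_i per node, then flip each dyad
# independently with probability w(U_i, U_j).
import numpy as np

def w_random_graph(n, w, rng=None):
    """Sample the adjacency matrix of an n-node w-random graph."""
    rng = np.random.default_rng(rng)
    U = rng.uniform(size=n)                              # latent node variables U_i
    P = w(U[:, None], U[None, :])                        # edge probabilities w(U_i, U_j)
    upper = np.triu(rng.uniform(size=(n, n)) < P, k=1)   # one coin flip per dyad
    G = upper | upper.T                                  # symmetrize; no self-loops
    return G.astype(int), U

# Example: a w peaked along the diagonal, so nodes with similar U's are likelier to link
G, U = w_random_graph(200, lambda u, v: 0.9 * np.exp(-5 * np.abs(u - v)))
```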

(Using the unit interval for the \( U \) variables is inessential; if we have a measurable mapping \( f \) from \( [0,1] \) to any other space, with a measurable inverse \( f^{-1} \), then set \( V_i = f(U_i) \), and \[ \Pr{\left(G_{ij} = 1\right)} = w^{\prime}(V_i, V_j) = w(f^{-1}(V_i),f^{-1}(V_j)) ~. \] So if you really want to make the variable for each node a 7-dimensional Gaussian rather than a standard uniform, go ahead.)

What are some examples of \( w \)-random graphs? Well, as I said, setting \( w \) to a constant, say \( p \), does in fact force the edges to be IID, each edge being present with probability \( p \), so the whole family of Erdos-Renyi random graphs, i.e., random graphs in the strict sense, is included. Beyond this, a simple possibility is to partition the unit interval into sub-intervals, and force \( w \) to be constant on the rectangles we get by taking products of the sub-intervals. This corresponds exactly to what the sociologists call "stochastic block models", where each node belongs to a discrete type or block of nodes (= sub-interval), and the probability of an edge between \( i \) and \( j \) depends only on which blocks they are in. Community- or module-discovery in networks is mostly based on the assumption that not only is there some underlying block model, but that the probability of an intra-block connection is greater than that of an inter-block edge, no matter the blocks; that is, \( w \) is peaked along the diagonal. Since every measurable function can be approximated arbitrarily-closely by piecewise-constant "simple functions", one can in fact conclude that every \( w \)-random graph can be approximated arbitrarily closely (in distribution) by a stochastic block model, though it might need a truly huge number of blocks to get an adequate approximation. This also gives an easy way to see that two different \( w \) functions can give rise to the same distribution on graphs, so we'll ignore the difference between \( w \) and \( w^{\prime} \) if \( w(u,v) = w^{\prime}(T(u), T(v)) \), where \( T \) is an invertible map from \( [0,1] \) onto \( [0,1] \) that preserves the length of intervals (i.e., preserves Lebesgue measure). The reason we ignore this difference is that \( T \) just "relabels the nodes", without changing the distribution of graphs.
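As a toy illustration of the piecewise-constant case, here is a block-model \( w \) written so it can be fed to the generator sketched above; the block boundaries and the probability matrix are made-up values, with the diagonal entries larger than the off-diagonal ones (i.e., \( w \) peaked along the diagonal):

```python
# A piecewise-constant w, i.e. a stochastic block model, as a w-function.
import numpy as np

boundaries = np.array([0.0, 0.5, 0.8, 1.0])   # three blocks covering [0,1]
B = np.array([[0.60, 0.10, 0.05],
              [0.10, 0.50, 0.05],
              [0.05, 0.05, 0.40]])            # symmetric, diagonal-dominant

def w_block(u, v):
    """Look up which sub-interval u and v fall in, return the matching entry of B."""
    i = np.clip(np.searchsorted(boundaries, u, side="right") - 1, 0, len(B) - 1)
    j = np.clip(np.searchsorted(boundaries, v, side="right") - 1, 0, len(B) - 1)
    return B[i, j]

G_block, _ = w_random_graph(200, w_block)     # generator from the earlier sketch
```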

It's not hard to convince yourself that every \( w \)-random graph is exchangeable. (Remember that we see only the edges \( G_{ij} \), and not the node-specific random variables \( U_i \).) What is very hard to show, but is in fact true, is that the distribution of every infinite exchangeable random graph is a mixture of \( w \)-random graph distributions. Symbolically, the way to produce an infinite exchangeable graph is always to go through the recipe \[ \begin{eqnarray*} W & \sim & p\\ U_i|W & \sim_{\mathrm{IID}} & \mathcal{U}(0,1)\\ G_{ij}| W, U_i, U_j &\sim & \mathrm{Bernoulli}(W(U_i,U_j)) \end{eqnarray*} \] for some prior distribution \( p \) over \( w \)-functions.
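In code, the whole two-stage recipe is just the earlier sketch wrapped in one extra random draw; the "prior" below is nothing but a uniform choice among a few hand-written candidate \( w \)-functions, purely to show the shape of the hierarchy:

```python
# Sketch of the hierarchy: W ~ p (toy prior), U_i IID uniform, G_ij ~ Bernoulli(W(U_i, U_j)).
import numpy as np

def exchangeable_graph(n, candidate_ws, rng=None):
    rng = np.random.default_rng(rng)
    W = candidate_ws[rng.integers(len(candidate_ws))]    # draw a w-function from the toy prior
    return w_random_graph(n, W, rng)                     # then generate the graph as before

candidates = [lambda u, v: 0.1 * np.ones_like(u * v),    # Erdos-Renyi with p = 0.1
              w_block,                                    # the block model sketched above
              lambda u, v: u * v]                         # a smooth graphon
G_mix, _ = exchangeable_graph(300, candidates)
```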

In the exchangeable-sequence case, if all we have is a single realization of the process, we cannot learn anything about the prior distribution over IID laws. (Similarly, if we have only a single realization of a stationary process, we can only learn about the one ergodic component that realization happens to be in, though in principle we can learn everything about it.) If we have only a single network to learn from, then we cannot learn anything about the prior distribution \( p \), but we can learn about the particular \( W \) that it generated, and that will let us extrapolate to other, currently-unseen parts of the network.

Here is where a very interesting connection comes in to what at first sight seems like a totally different set of ideas. Suppose I have a sequence of graphs \( G^1, G^2, \ldots \), all of finite size. When can I say that this sequence of graphs is converging to a limit, and what kind of object is its limit?

Experience with analysis tells us that we would like converging objects to get more and more similar in their various properties, and one important set of properties for graphs is the appearance of specific sub-graphs, or motifs. For instance, when \( G_{ij} = G_{jk} = G_{ki} = 1 \), we say that \( i,j,k \) form a triangle, and we are often interested in the number of triangles in \( G \). More broadly, let \( H \) be some graph with fewer nodes than \( G \), and define \( m(H,G) \) to be the number of ways of mapping \( H \) onto \( G \) --- picking out nodes in \( G \) and identifying them with nodes in \( H \) such that the nodes in \( G \) have edges if and only if their counterpart nodes in \( H \) have edges. (In a phrase, the number of homomorphisms from \( H \) into \( G \).) The maximum possible number of such mappings is limited by the number of nodes in the two graphs. The density of \( H \) in \( G \) is \[ t(H,G) \equiv \frac{m(H,G)}{{|G| \choose |H|}} \] If \( H \) has more nodes than \( G \), we define \( m(H,G) \) and \( t(H,G) \) to be 0. (Actually, there are a couple of different choices for defining the allowed mappings from \( G \) to \( H \), and so for the normalizing factor in the denominator of \( t \), but these end up not making much difference.)
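Here is a brute-force sketch of one version of \( t(H,G) \): it counts node subsets of \( G \) whose induced subgraph is isomorphic to \( H \) and normalizes by \( {|G| \choose |H|} \), which is one of the "couple of different choices" just mentioned; it is only practical for small motifs and modest graphs:

```python
# Brute-force motif density: count subsets of G inducing a copy of H, normalize by C(|G|, |H|).
import itertools
from math import comb
import numpy as np

def motif_density(H, G):
    """H, G: symmetric 0-1 adjacency matrices as numpy arrays."""
    k, n = len(H), len(G)
    if k > n:
        return 0.0
    count = 0
    for nodes in itertools.combinations(range(n), k):
        sub = G[np.ix_(nodes, nodes)]
        # does some relabelling of this subset reproduce H exactly?
        if any(np.array_equal(sub[np.ix_(p, p)], H)
               for p in itertools.permutations(range(k))):
            count += 1
    return count / comb(n, k)

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
G_small, _ = w_random_graph(60, lambda u, v: u * v)   # small graph via the earlier sketch
print(motif_density(triangle, G_small))               # triangle density of G_small
```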

We can now at last define convergence of a graph sequence: \( G^1, G^2, \ldots \) converge when, for each motif \( H \), the density sequence \( t(H,G^1), t(H,G^2), \ldots \) converges. There are several points to note about this definition:

  1. If, after a certain point \( n \), the graph sequence becomes constant, so that \( G^{n+m} = G^{n} \) for all \( m \), then the sequence converges. This is a reasonable sanity-check on our using the word "convergence" here.
  2. A sequence of isomorphic graphs (i.e., ones which are the same after some re-labeling of the nodes) has already converged, since they all have the same density for every motif. So the definition of convergence is insensitive to isomorphisms. This is good, in a way, because isomorphic graphs really are the same in a natural sense, but bad, because deciding whether two graphs are isomorphic is computationally non-trivial; no polynomial-time algorithm is known, and the problem's exact complexity is a long-standing open question.
  3. If the sequence of graphs keeps growing, then convergence of the sequence implies convergence not of the number of edges, triangles, four-stars, etc., but of their suitably-normalized densities.
  4. The definition is strongly analogous to that of "convergence in distribution" (a.k.a. "weak convergence") in probability theory. A sequence of distributions \( P^1, P^2, \ldots \), converges if and only if, for every bounded and continuous function \( f \), the sequence of expected values \[ P^i f \equiv \int{f(x) dP^{i}(x)} \] converges. Densities of motifs act like bounded and continuous "test functions".
  5. The limit of a sequence of graphs is not necessarily a graph. Analogously, the limit of a sequence of discrete probability distributions, like our empirical distribution at any \( n \), is not necessarily discrete — it might be a distribution with a continuous density, a mixture of a continuous and a discrete part, etc. The people who developed the theory of such graph limits called the limiting objects graphons. Roughly speaking, graphons are to graphs as general probability distributions are to discrete ones.

How are graphons represented, if they are not graphs? Well, they turn out to be representable as symmetric functions from the unit square to the unit interval, i.e., \( w \)-functions! It is easy to see how to turn any finite graph's adjacency matrix into a \( 0-1 \)-valued \( w \)-function: divide the unit interval into \( n \) equal segments, and make \( w \) 0 or 1 on each square depending on whether the corresponding nodes had an edge or not. Call this \( w_G \). It turns out, through an argument I do not feel up to even sketching today, that the density \( t(H,G) \) can be expressed as an integral which depends on \( H \) and on the \( w \)-function derived from \( G \): \[ t(H,G) = \int_{[0,1]^{|H|}}{\prod_{(i,j)\in H}{w_{G}(u_i,u_j)} du_1 \ldots du_{|H|}} \] This carries over to the limit: if the sequence \( G^n \) converges, then \[ \lim_{n\rightarrow\infty}{t(H,G^n)} = \int_{[0,1]^{|H|}}{\prod_{(i,j)\in H}{w(u_i,u_j)} du_1 \ldots du_{|H|}} \] for some limiting function \( w \). (If you are the kind of person who finds the analogy to convergence in distribution helpful, you can fill in this part of the analogy now.) We identify the limiting object, the graphon, with the limiting \( w \)-function, or rather with the equivalence class of limiting \( w \)-functions.
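Both the step-function construction and the integral formula are easy to sketch in code; the helper names below are mine, and the integral is estimated by plain Monte Carlo, so at finite \( n \) the answer differs slightly from the subset-counting version above (exactly the normalization issue noted earlier):

```python
# Empirical graphon w_G (a step function on the unit square) and a Monte Carlo
# estimate of the integral expression for t(H, .).
import numpy as np

def empirical_graphon(G):
    """w_G: split [0,1] into n equal segments, constant (0 or 1) on each square."""
    n = len(G)
    def w_G(u, v):
        i = np.minimum((np.asarray(u) * n).astype(int), n - 1)
        j = np.minimum((np.asarray(v) * n).astype(int), n - 1)
        return G[i, j]
    return w_G

def t_integral(H_edges, w, k, n_samples=100_000, rng=None):
    """Monte Carlo estimate of int prod_{(i,j) in H} w(u_i, u_j) du_1 ... du_k."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=(n_samples, k))        # one u per node of H, per sample
    prod = np.ones(n_samples)
    for (i, j) in H_edges:
        prod = prod * w(u[:, i], u[:, j])
    return prod.mean()

wG = empirical_graphon(G_small)                       # graph from the previous sketch
print(t_integral([(0, 1), (1, 2), (2, 0)], wG, 3))    # triangle density under w_G
```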

To sum up: If we start with an infinite exchangeable graph distribution, then what gets realized comes from a (randomly-chosen) extremal distribution. But the limits of sequences of graphs are, precisely, the extremal distributions of the family of exchangeable graphs. So we would seem to have the kind of nice, closed circle which makes statistical inference possible: a sufficiently large realization becomes representative of the underlying process, which lets us infer that process by examining the realization. What I am very much interested in is how to actually use this suggestion to do some concrete, non-parametric statistics for networks. In particular, it would seem that understanding this would open the way to being able to smooth networks and/or bootstrap them, and either one of those would make me very happy.

Specific points of interest:

  1. Understand how to metrize graph convergence, and efficiently calculate the metrics; use for tests of network difference.
  2. Suppose that the sequence of graphs \( G^n \) is sparse, so that the number of edges per node grows less than proportionally to the number of nodes. Then the density of every motif containing at least one edge tends to zero, and we lose the ability to distinguish between graph sequences. What is the best way of defining convergence of sparse graphs? What does this do to the probabilistic analogs of graphons? A huge literature has sprung up around this question (samples from it below).
  3. How does this relate to the issues of projectibility for exponential-family random graph models?
  4. Given a graph sequence, when can we consistently estimate the, or a, limiting \( w \)-function? Bickel, Chen and Levina (below) define a set of statistics whose expected values characterize the \( w \)-function and which can be consistently estimated. This was extremely clever, but inverting the mapping from \( w \) to those expectations looks totally intractable — and indeed they don't even try. My own feeling is that this is more of a job for smoothing than for the method of moments, but I'm not comfortable saying much more, yet.

An idea on sparsity I have failed, and am failing, to turn into something useful

\[ \newcommand{\Expect}[1]{\mathbb{E}\left[ #1 \right]} \] I said above that a graph sequence is "sparse" when the number of edges per node doesn't grow linearly with the number of nodes. The alternative is that a graph sequence is dense, so the number of edges per node is proportional to the number of nodes. The troublesome point is that sequences of graphs generated from the same \( w \)-function can't be sparse, in this sense. To see this, pick your favorite node \( i \), of degree \( D_i \). Then for each \( j \), \( \Expect{G_{ij} | U_i = u} = \int_{[0,1]}{w(u, v) dv} \), and, by additivity of expectation, \( \Expect{D_i | U_i=u} = (n-1) \int_{[0,1]}{w(u, v) dv} \) when there are \( n \) nodes. But the latent variables \( U_j \) are IID, so, conditional on \( U_i=u \), \( \frac{1}{n-1}{D_i} \rightarrow \int_{0}^{1}{w(u, v) dv} \) almost surely, by the law of large numbers. Unconditionally, then, \( \frac{1}{n-1}{D_i} \rightarrow \int_{[0,1]}{w(U_i,v) dv} \) almost surely, a random limit whose expected value is \( \int_{[0,1]^2}{w(u,v) du dv} \). And, unless the \( w \) function is 0 almost everywhere, that double integral is \( >0 \), so the limit is positive for a positive-measure set of values of \( U_i \). Thus the degree of a typical node will grow proportionately to the number of nodes.
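A quick simulation of this argument, reusing the generator sketched earlier: with \( w(u,v) = uv \) the double integral is \( 1/4 \), so the mean degree divided by \( n-1 \) should settle near 0.25 as \( n \) grows, i.e., degrees grow linearly in \( n \):

```python
# Degrees under a fixed w grow in proportion to n: mean degree / (n-1) -> int int w.
import numpy as np

w = lambda u, v: u * v
for n in [100, 400, 1600]:
    G_n, _ = w_random_graph(n, w, rng=0)         # helper from the earlier sketch
    print(n, G_n.sum(axis=1).mean() / (n - 1))   # should approach 1/4
```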

Some people find this a disturbing prospect in an asymptotic framework for network analysis. After all, if I look at larger and larger samples of a collaboration network, it doesn't seem as though everyone's degree should keep growing in proportion to the number of nodes --- that every doubling in the number of scientists should on average double everyone's degree. On Mondays, Wednesdays and Fridays I share this unease. On Tuesdays and Thursdays, I remind myself that our data-collection processes bear little resemblance to this story, and that anyway asymptotics is all about approximation. After all, no one graph is "dense" or "sparse" in this sense. (On the weekend I try not to think about the issue.)

But, as I said above, the Aldous-Hoover theorem tells us that every exchangeable random graph is either a \( w \)-random graph, or a mixture of \( w \)-random graphs --- and the mixtures will still give us dense graph sequences, so that's no escape. What this implies is that if you think growing sparse graph sequences is a desirable property in a network model, you need to abandon exchangeability. What we should use instead of exchangeability is still very much an open question.

Here is one idea which I have been toying with for a number of years, without getting very far. I put it out here now in case anyone else can make something of it; if you do, I'd appreciate an acknowledgment.

In the ordinary time-series / random sequence world, a very natural symmetry that's weaker than exchangeability (= invariance under permutation) is stationarity (= invariance under translation). I suspect we may be able to do something with stationary rather than exchangeable random graphs. For a random sequence, we say that it's stationary if for every block length \( k \) and translation \( h \), the sub-sequence \( (X_1, X_2, \ldots X_k) \) and the sub-sequence \( (X_{h+1}, X_{h+2} \ldots X_{h+k}) \) have the same distribution. (But this doesn't require that \( (X_1, X_2) \) and \( (X_2, X_1) \) have the same distribution, which exchangeability does.) So we could say that a graph is stationary, with respect to a certain ordering of the nodes, when, for every \( k \) and \( h \), the sub-graph formed by nodes \( 1:k \) and that formed by nodes \( (h+1):(h+k) \) are equal in distribution. This would preserve the notion that (in probability) the graph "looks the same everywhere", without requiring the extremely strong form of this that \( w \)-random graphs do.

The program then would be to answer the following questions:

  1. What are the extremal distributions with this symmetry like? (The extremal distributions for exchangeable sequences are IID sequences; for exchangeable random graphs, \( w \)-random graphs; for stationary sequences, stationary and ergodic sequences; etc.).
  2. With a characterization of the extremal distributions in hand, in what sense can sequences of individual graphs converge on those limits? (This would presumably be some sort of ergodic theorem.)
  3. Can distributions over graphs with this symmetry produce sparse graph sequences?

I will just say a little, here, about the third item. From the way I've set up the symmetry in the distribution, the expected number of edges within any group of \( k \) nodes that are contiguous in the given order has to be the same --- we get the same expected number of edges among nodes 1--5 as among 6--10 as among 501017--501021. So that's a contribution to the expected number of edges which is growing proportionately to the number of nodes. Moreover, by considering contiguous groups of length \( 2k \), we see that the expected number of edges between adjacent groups of length \( k \) is also going to grow proportionately to \( n \). But there doesn't seem to be any reason why that number of between-group edges couldn't be considerably less than the number within the groups. In particular, it'd seem like we could have a lower and lower probability of edges between nodes which are further and further apart in the ordering. So I think it should be possible to get distributions over sparse graph sequences which obey this symmetry.
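To make that concrete, here is a toy construction of my own (not anything from the literature): edges are independent, with a probability that depends only on the distance \( |i-j| \) in the given ordering, so any two contiguous blocks of nodes of the same length have identically distributed subgraphs, yet with a summable decay the expected degree stays bounded as \( n \) grows and the sequence is sparse:

```python
# A "stationary in the node ordering" but sparse toy model: P(edge i~j) = decay(|i-j|).
import numpy as np

def stationary_sparse_graph(n, decay=lambda d: 1.0 / (1.0 + d) ** 2, rng=None):
    rng = np.random.default_rng(rng)
    idx = np.arange(n)
    P = decay(np.abs(idx[:, None] - idx[None, :]).astype(float))   # depends only on |i - j|
    upper = np.triu(rng.uniform(size=(n, n)) < P, k=1)             # independent edges
    return (upper | upper.T).astype(int)

for n in [100, 400, 1600]:
    G_n = stationary_sparse_graph(n, rng=0)
    print(n, G_n.sum(axis=1).mean())   # mean degree stays roughly constant as n grows
```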

There's also another possible notion of "stationarity", which would go as follows. Pick our favorite node \( i \), and define its "radius 1" neighborhood as the subgraph among \( i \) and its neighbors. We could say that the distribution is radius-1 stationary if the distributions for the radius-1 neighborhood of any two nodes \( i \) and \( j \) are equal (up to isomorphism). We define the radius-\( k \) neighborhood around \( i \) recursively, as the subgraph of all nodes whose distance to \( i \) is \( \leq k \), and similarly stationarity out to radius \( k \), and finally over-all stationarity as stationarity out to arbitrarily large radii. (This is a little bit more like how we define stationarity for random fields.) I find this notion a bit less satisfying, because it seems more dependent on the randomly-generated graph, but on the other hand my first notion invoked an ordering of the nodes pulled from (to be polite) the air.

Finally, I should add that my own contribution to the sparse-graph-models literature, with Neil Spencer, doesn't invoke either of these notions of symmetry --- we started with a latent-space generative model and showed it had good properties. (It's stationarity in the latent space.) Tackling the issue from the side of the symmetry is, as I said, something I've played around with for some years, but haven't made much headway with, hence this addition to this notebook.

