Graph Limits and Infinite Exchangeable Arrays
Last update: 07 Dec 2024 23:58; first version: 26 April 2012 (or a bit earlier)
An exchangeable random sequence \( X \) is a sequence of random variables,
\( X_1, X_2, \ldots \), whose distribution is invariant under permutation of
the indices. All such sequences are formed by taking mixtures of independent
and identically-distributed (IID) sequences.
(See Exchangeable Random Sequences.) An
exchangeable random array, \( G \), is simply a matrix or array of
random variables \( G_{ij} \) whose distribution is invariant under permutation
of row and column indices (\( i \) and \( j \)). I mostly care about what are
sometimes called jointly exchangeable arrays, where the same permutation is applied to the rows and the columns; this is the natural case when rows and columns both index the same set of nodes.
As I said, infinite-dimensional exchangeable distributions are all formed by
taking mixtures of certain basic or extremal distributions; the question is what those extremal distributions look like for exchangeable arrays.
It's easiest to understand what's going on if we restrict ourselves to binary arrays, so \( G_{ij} \) must be either 0 or 1. One very important instance of this --- or at least one I use a lot --- makes \( G \) the adjacency matrix (or "sociomatrix") of a network, with \( G_{ij} = 1 \) if there is an edge linking \( i \) and \( j \), and \( G_{ij}=0 \) otherwise.
Here is the basic recipe for generating such arrays. For each \( i \), draw an independent random number \( U_i \) uniformly on the unit interval. Now, separately, fix a function \( w(u,v) \) from the unit square \( {[0,1]}^2 \) to the unit interval \( [0,1] \), with the symmetry \( w(u,v) = w(v,u) \). Finally, set \( G_{ij} = 1 \) with probability \( w(U_i, U_j) \), independently across dyads \( ij \). Conditional on the \( U_i \), all edges are now independent (though not identically distributed). Moreover, \( G_{ij} \) and \( G_{kl} \) are independent, unless the indices overlap. (However, \( G_{ij}\) and \( G_{kl} \) can be dependent given, say, \( G_{jk} \).) But edges with nodes in common are not independent, nor are edges identically distributed, unless the function \( w \) is constant almost everywhere. Call the resulting stochastic graph \( G \) a \( w \)-random graph.
(Using the unit interval for the \( U \) variables is inessential; if we have a measurable mapping \( f \) from \( [0,1] \) to any other space, with a measurable inverse \( f^{-1} \), then we can set \( V_i = f(U_i) \), and \[ \Pr{\left(G_{ij} = 1 \mid V_i, V_j\right)} = w^{\prime}(V_i, V_j) = w(f^{-1}(V_i),f^{-1}(V_j)) ~. \] So if you really want to make the variable for each node a 7-dimensional Gaussian rather than a standard uniform, go ahead.)
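To make the recipe concrete, here is a minimal sketch of a sampler; the particular \( w(u,v) = uv \) in the demo is my own arbitrary choice, not anything from the literature.

```python
import numpy as np

def sample_w_random_graph(w, n, seed=None):
    """Sample an n-node w-random graph: draw latent U_i ~ Uniform(0,1),
    then include each edge ij independently with probability w(U_i, U_j)."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(size=n)
    P = w(U[:, None], U[None, :])            # n x n matrix of edge probabilities
    coins = rng.uniform(size=(n, n)) < P
    upper = np.triu(coins, k=1)              # one coin per dyad, no self-loops
    return (upper | upper.T).astype(int)     # symmetric adjacency matrix

# Demo with an arbitrary smooth, symmetric w:
G = sample_w_random_graph(lambda u, v: u * v, n=100, seed=1)
print(G.sum() // 2, "edges")
```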
What are some examples of \( w \)-random graphs? Well, as I said, setting \( w \) to a constant, say \( p \), does in fact force the edges to be IID, each edge being present with probability \( p \), so the whole family of Erdos-Renyi random graphs, i.e., random graphs in the strict sense, is included. Beyond this, a simple possibility is to partition the unit interval into sub-intervals, and force \( w \) to be constant on the rectangles we get by taking products of the sub-intervals. This corresponds exactly to what the sociologists call "stochastic block models", where each node belongs to a discrete type or block of nodes (= sub-interval), and the probability of an edge between \( i \) and \( j \) depends only on which blocks they are in. Community- or module-discovery in networks is mostly based on the assumption that not only is there some underlying block model, but that the probability of an intra-block connection is greater than that of an inter-block edge, no matter the blocks; that is, \( w \) is peaked along the diagonal. Since every measurable function can be approximated arbitrarily closely by piecewise-constant "simple functions", one can in fact conclude that every \( w \)-random graph can be approximated arbitrarily closely (in distribution) by a stochastic block model, though it might need a truly huge number of blocks to get an adequate approximation. This also gives an easy way to see that two different \( w \) functions can give rise to the same distribution on graphs, so we'll ignore the difference between \( w \) and \( w^{\prime} \) if \( w(u,v) = w^{\prime}(T(u), T(v)) \), where \( T \) is an invertible map from \( [0,1] \) onto \( [0,1] \) that preserves the length of intervals (i.e., preserves Lebesgue measure). The reason we ignore this difference is that \( T \) just "relabels the nodes", without changing the distribution of graphs.
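For instance, a two-block stochastic block model is just a piecewise-constant \( w \). A sketch (the split point and the probabilities are illustrative choices of mine), which can be fed directly to the sampler above:

```python
import numpy as np

def block_w(u, v, split=0.5, p_in=0.6, p_out=0.1):
    """A two-block stochastic block model as a w-function: w is constant on
    each of the four rectangles induced by cutting [0,1] at `split`, and is
    peaked on the diagonal blocks (p_in > p_out)."""
    same_block = (u < split) == (v < split)
    return np.where(same_block, p_in, p_out)
```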
It's not hard to convince yourself that every \( w \)-random graph is exchangeable. (Remember that we see only the edges \( G_{ij} \), and not the node-specific random variables \( U_i \).) What is very hard to show, but is in fact true, is that the distribution of every infinite exchangeable random graph is a mixture of \( w \)-random graph distributions; this is (a corollary of) the Aldous-Hoover representation theorem. Symbolically, the way to produce an infinite exchangeable graph is always to go through the recipe \[ \begin{eqnarray*} W & \sim & p\\ U_i|W & \sim_{\mathrm{IID}} & \mathcal{U}(0,1)\\ G_{ij}| W, U_i, U_j &\sim & \mathrm{Bernoulli}(W(U_i,U_j)) \end{eqnarray*} \] for some prior distribution \( p \) over \( w \)-functions.
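In code, the hierarchy is just one extra step before the recipe above: first draw the \( w \)-function from the prior. A sketch, where my stand-in "prior" merely randomizes the two block probabilities of a two-block \( w \):

```python
import numpy as np

def sample_exchangeable_graph(n, seed=None):
    """W ~ p, U_i IID Uniform(0,1), G_ij ~ Bernoulli(W(U_i, U_j)).
    Here p is a toy prior over two-block w-functions."""
    rng = np.random.default_rng(seed)
    p_in, p_out = rng.uniform(size=2)        # draw W from the "prior"
    W = lambda u, v: np.where((u < 0.5) == (v < 0.5), p_in, p_out)
    U = rng.uniform(size=n)                  # latent node variables
    upper = np.triu(rng.uniform(size=(n, n)) < W(U[:, None], U[None, :]), k=1)
    return (upper | upper.T).astype(int)
```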
In the exchangeable-sequence case, if all we have is a single realization of the process, we cannot learn anything about the prior distribution over IID laws. (Similarly, if we have only a single realization of a stationary process, we can only learn about the one ergodic component that realization happens to be in, though in principle we can learn everything about it.) If we have only a single network to learn from, then we cannot learn anything about the prior distribution \( p \), but we can learn about the particular \( W \) the prior generated, and that will let us extrapolate to other, currently-unseen parts of the network.
Here is where a very interesting connection comes in to what at first sight seems like a totally different set of ideas. Suppose I have a sequence of graphs \( G^1, G^2, \ldots \), all of finite size. When can I say that this sequence of graphs is converging to a limit, and what kind of object is its limit?
Experience with analysis tells us that we would like converging objects to get more and more similar in their various properties, and one important set of properties for graphs is the appearance of specific sub-graphs, or motifs. For instance, when \( G_{ij} = G_{jk} = G_{ki} = 1 \), we say that \( i,j,k \) form a triangle, and we are often interested in the number of triangles in \( G \). More broadly, let \( H \) be some graph with fewer nodes than \( G \), and define \( m(H,G) \) to be the number of ways of mapping \( H \) onto \( G \) --- picking out nodes in \( G \) and identifying them with nodes in \( H \) such that the nodes in \( G \) have edges if and only if their counterpart nodes in \( H \) have edges. (In a phrase, the number of embeddings of \( H \) into \( G \) as an induced subgraph; counting homomorphisms, which only require edges in \( H \) to map to edges in \( G \), is a common variant.) The maximum possible number of such mappings is limited by the number of nodes in the two graphs. The density of \( H \) in \( G \) is \[ t(H,G) \equiv \frac{m(H,G)}{{|G| \choose |H|}} \] If \( H \) has more nodes than \( G \), we define \( m(H,G) \) and \( t(H,G) \) to be 0. (Actually, there are a couple of different choices for defining the allowed mappings from \( H \) to \( G \), and so for the normalizing factor in the denominator of \( t \), but these end up not making much difference.)
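For small motifs, the density can be computed by brute force. A sketch, using the induced-subgraph variant of the definition and the \( {|G| \choose |H|} \) normalizer from the formula above:

```python
import itertools
import math
import numpy as np

def motif_density(H, G):
    """t(H, G): fraction of |H|-node subsets of G whose induced subgraph is
    isomorphic to H.  H and G are symmetric 0/1 adjacency matrices.  (One of
    the variant definitions mentioned in the text; the variants differ only
    by factors that don't matter for convergence.)"""
    k, n = len(H), len(G)
    if k > n:
        return 0.0
    hits = 0
    for subset in itertools.combinations(range(n), k):
        sub = G[np.ix_(subset, subset)]
        # brute-force isomorphism check; fine for motifs of 3-5 nodes
        if any(np.array_equal(sub[np.ix_(p, p)], H)
               for p in itertools.permutations(range(k))):
            hits += 1
    return hits / math.comb(n, k)

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
```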
We can now at last define convergence of a graph sequence: \( G^1, G^2, \ldots \) converge when, for each motif \( H \), the density sequence \( t(H,G^1), t(H,G^2), \ldots \) converges. There are several points to note about this definition:
- If, after a certain point \( n \), the graph sequence becomes constant, so that \( G^n = G^{n+m} \) for all \( m \), then the sequence converges. This is a reasonable sanity-check on our using the word "convergence" here.
- A sequence of isomorphic graphs (i.e., ones which are the same after some re-labeling of the nodes) has already converged, since they all have the same density for every motif. So the definition of convergence is insensitive to isomorphisms. This is good, in a way, because isomorphic graphs really are the same in a natural sense, but bad, because deciding whether two graphs are isomorphic is computationally non-trivial, and its exact complexity remains a famous open problem.
- If the sequence of graphs keeps growing, then convergence of the sequence implies convergence not of the number of edges, triangles, four-stars, etc., but of their suitably-normalized densities.
- The definition is strongly analogous to that of "convergence in distribution" (a.k.a. "weak convergence") in probability theory. A sequence of distributions \( P^1, P^2, \ldots \), converges if and only if, for every bounded and continuous function \( f \), the sequence of expected values \[ P^i f \equiv \int{f(x) dP^{i}(x)} \] converges. Densities of motifs act like bounded and continuous "test functions".
- The limit of a sequence of graphs is not necessarily a graph. Analogously, the limit of a sequence of discrete probability distributions, like our empirical distribution at any \( n \), is not necessarily discrete --- it might be a distribution with a continuous density, a mixture of a continuous and a discrete part, etc. The people who developed the theory of such graph limits called the limiting objects graphons. Roughly speaking, graphons are to graphs as general probability distributions are to discrete ones.
How are graphons represented, if they are not graphs? Well, they turn out to be representable as symmetric functions from the unit square to the unit interval, i.e., \( w \)-functions! It is easy to see how to turn any finite graph's adjacency matrix into a 0-1-valued \( w \)-function: divide the unit interval into \( n \) equal segments, and make \( w \) 0 or 1 on each square depending on whether the corresponding nodes have an edge or not. Call this \( w_G \). It turns out, through an argument I do not feel up to even sketching today, that the density \( t(H,G) \) can be expressed as an integral, depending on \( H \), of the \( w \)-function derived from \( G \): \[ t(H,G) = \int_{[0,1]^{|H|}}{\prod_{(i,j)\in H}{w_{G}(u_i,u_j)} du_1 \ldots du_{|H|}} \] This carries over to the limit: if the sequence \( G^n \) converges, then \[ \lim_{n\rightarrow\infty}{t(H,G^n)} = \int_{[0,1]^{|H|}}{\prod_{(i,j)\in H}{w(u_i,u_j)} du_1 \ldots du_{|H|}} \] for some limiting function \( w \). (If you are the kind of person who finds the analogy to convergence in distribution helpful, you can fill in this part of the analogy now.) We identify the limiting object, the graphon, with the limiting \( w \)-function, or rather with the equivalence class of limiting \( w \)-functions.
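Both the step function \( w_G \) and the displayed integral are easy to sketch in code; the Monte Carlo evaluation and the constant-\( w \) sanity check (triangle density of a constant graphon \( p \) should be \( p^3 \)) are mine:

```python
import numpy as np

def step_w(G):
    """The 0-1 valued step function w_G: cut [0,1] into n equal pieces and
    copy the adjacency matrix G onto the resulting grid of squares."""
    n = len(G)
    def w(u, v):
        i = np.minimum((u * n).astype(int), n - 1)
        j = np.minimum((v * n).astype(int), n - 1)
        return G[i, j]
    return w

def t_integral(w, edges, k, n_samples=200_000, seed=0):
    """Monte Carlo estimate of int_{[0,1]^k} prod_{(i,j) in H} w(u_i,u_j) du."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(size=(n_samples, k))
    val = np.ones(n_samples)
    for i, j in edges:
        val = val * w(U[:, i], U[:, j])
    return val.mean()

# Sanity check: triangle density of the constant graphon w = 0.3 is 0.3^3 = 0.027.
triangle_edges = [(0, 1), (1, 2), (2, 0)]
print(t_integral(lambda u, v: np.full(np.shape(u), 0.3), triangle_edges, k=3))
```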
To sum up: If we start with an infinite exchangeable graph distribution, then what gets realized comes from a (randomly-chosen) extremal distribution. But the limits of sequences of graphs are, precisely, the extremal distributions of the family of exchangeable graphs. So we would seem to have the kind of nice, closed circle which makes statistical inference possible: a sufficiently large realization becomes representative of the underlying process, which lets us infer that process by examining the realization. What I am very much interested in is how to actually use this suggestion to do some concrete, non-parametric statistics for networks. In particular, it would seem that understanding this would open the way to being able to smooth networks and/or bootstrap them, and either one of those would make me very happy.
Specific points of interest:
- Understand how to metrize graph convergence, and efficiently calculate the metrics; use for tests of network difference.
- Suppose that the sequence of graphs \( G^n \) are sparse, so that the number of edges per node grows less than proportionally to the number of nodes. Then all motif densities tend to zero and we lose the ability to distinguish between graph sequences. What is the best way of defining convergence of sparse graphs? What does this do to the probabilistic analogs of graphons? A huge literature has sprung up around this question (samples from it below).
- How does this relate to the issues of projectibility for exponential-family random graph models?
- Given a graph sequence, when can we consistently estimate the, or a, limiting \( w \)-function? Bickel, Chen and Levina (below) define a set of statistics whose expected values characterize the \( w \)-function and which can be consistently estimated. This was extremely clever, but inverting the mapping from \( w \) to those expectations looks totally intractable --- and indeed they don't even try. My own feeling is that this is more of a job for smoothing than for the method of moments, but I'm not comfortable saying much more, yet.
An idea on sparsity I have failed, and am failing, to turn into something useful
\[ \newcommand{\Expect}[1]{\mathbb{E}\left[ #1 \right]} \] I said above that a graph sequence is "sparse" when the number of edges per node doesn't grow linearly with the number of nodes. The alternative is that a graph sequence is dense, so that the number of edges per node is proportional to the number of nodes. The troublesome point is that sequences of graphs generated from the same \( w \)-function can't be sparse, in this sense. To see this, pick your favorite node \( i \), of degree \( D_i \). Then for each \( j \), \( \Expect{G_{ij} | U_i = u} = \int_{[0,1]}{w(u, v) dv} \), and, by additivity of expectation, \( \Expect{D_i | U_i=u} = (n-1) \int_{[0,1]}{w(u, v) dv} \) when there are \( n \) nodes. Conditional on \( U_i = u \), the edge indicators \( G_{ij} \) are IID across \( j \), so \( \frac{D_i}{n-1} \rightarrow \int_{0}^{1}{w(u, v) dv} \) almost surely, by the law of large numbers. Since \( U_i \) is itself random, the unconditional almost-sure limit of \( \frac{D_i}{n-1} \) is the random variable \( \int_{0}^{1}{w(U_i, v) dv} \), whose expected value is \( \int_{[0,1]^2}{w(u,v) du dv} \). Unless the \( w \) function is 0 almost everywhere, this limit is positive with positive probability, so the degree of a typical node grows proportionally to the number of nodes, and the expected number of edges grows like \( n^2 \).

Some people find this a disturbing prospect in an asymptotic framework for network analysis. After all, if I look at larger and larger samples of a collaboration network, it doesn't seem as though everyone's degree should keep growing in proportion to the number of nodes --- that every doubling in the number of scientists should on average double everyone's degree. On Mondays, Wednesdays and Fridays I share this unease. On Tuesdays and Thursdays, I remind myself that our data-collection processes bear little resemblance to this story, and that anyway asymptotics is all about approximation. After all, no one graph is "dense" or "sparse" in this sense. (On the weekend I try not to think about the issue.)
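A quick simulation makes the point; the particular \( w \), with \( \int\!\!\int w = 1/4 \), is an arbitrary choice of mine:

```python
import numpy as np

# Degrees under a fixed w-function grow linearly: D_i / (n-1) stabilizes.
rng = np.random.default_rng(42)
w = lambda u, v: (u + v) / 4.0           # integral over the unit square = 1/4

for n in [100, 400, 1600]:
    U = rng.uniform(size=n)
    upper = np.triu(rng.uniform(size=(n, n)) < w(U[:, None], U[None, :]), k=1)
    G = (upper | upper.T).astype(int)
    print(n, G.sum(axis=1).mean() / (n - 1))   # approaches 0.25 for every n
```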
But, as I said above, the Aldous-Hoover theorem tells us that every exchangeable random graph is either a \( w \)-random graph, or a mixture of \( w \)-random graphs --- and the mixtures will still give us dense graph sequences, so that's no escape. What this implies is that if you think producing sparse graph sequences is a desirable property in a network model, you need to abandon exchangeability. What we should use instead of exchangeability is still very much an open question.
Here is one idea which I have been toying with for a number of years, without getting very far. I put it out here now in case anyone else can make something of it; if you do, I'd appreciate an acknowledgment.
In the ordinary time-series / random sequence world, a very natural symmetry that's weaker than exchangeability (= invariance under permutation) is stationarity (= invariance under translation). I suspect we may be able to do something with stationary rather than exchangeable random graphs. For a random sequence, we say that it's stationary if for every block length \( k \) and translation \( h \), the sub-sequence \( (X_1, X_2, \ldots X_k) \) and the sub-sequence \( (X_{h+1}, X_{h+2} \ldots X_{h+k}) \) have the same distribution. (But this doesn't require that \( (X_1, X_2) \) and \( (X_2, X_1) \) have the same distribution, which exchangeability does.) So we could say that a graph is stationary, with respect to a certain ordering of the nodes, when, for every \( k \) and \( h \), the sub-graph formed by nodes \( 1:k \) and that formed by nodes \( (h+1):(h+k) \) are equal in distribution. This would preserve the notion that (in probability) the graph "looks the same everywhere", without requiring the extremely strong form of this that \( w \)-random graphs do.
The program then would be to answer the following questions:
- What are the extremal distributions with this symmetry like? (The extremal distributions for exchangeable sequences are IID sequences; for exchangeable random graphs, \( w \)-random graphs; for stationary sequences, stationary and ergodic sequences; etc.).
- With a characterization of the extremal distributions in hand, in what sense can sequences of individual graphs converge on those limits? (This would presumably be some sort of ergodic theorem.)
- Can distributions over graphs with this symmetry produce sparse graph sequences?
I will just say a little, here, about the third item. From the way I've set up the symmetry in the distribution, the expected number of edges within any group of \( k \) nodes that are contiguous in the given order has to be the same --- we get the same expected number of edges among nodes 1--5 as among 6--10 as among 501017--501021. So that's a contribution to the expected number of edges which grows proportionally to the number of nodes. Moreover, by considering contiguous groups of length \( 2k \), we see that the expected number of edges between adjacent groups of length \( k \) is also going to grow proportionally to \( n \). But there doesn't seem to be any reason why that number of between-group edges couldn't be considerably less than the number within the groups. In particular, it'd seem like we could have a lower and lower probability of edges between nodes which are further and further apart in the ordering. So I think it should be possible to get distributions over sparse graph sequences which obey this symmetry, as in the sketch below.
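Here is a sketch of one such distribution: the edge probability depends only on the distance \( |i-j| \) in the ordering, so all contiguous blocks of the same length have the same distribution, and the decay is fast enough that expected degrees stay bounded. The decay law \( \min(1, c/d^2) \) is just my illustrative choice.

```python
import numpy as np

def sample_banded_graph(n, c=2.0, seed=0):
    """Translation-invariant edge probabilities: P(G_ij = 1) = min(1, c/|i-j|^2).
    Since sum_d c/d^2 converges, the expected degree is bounded as n grows,
    so the graph sequence is sparse."""
    rng = np.random.default_rng(seed)
    i, j = np.triu_indices(n, k=1)
    p = np.minimum(1.0, c / (j - i) ** 2)    # depends only on the distance j - i
    G = np.zeros((n, n), dtype=int)
    G[i, j] = rng.uniform(size=i.size) < p
    return G + G.T

for n in [200, 800, 3200]:
    print(n, sample_banded_graph(n).sum(axis=1).mean())  # mean degree ~ constant
```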
There's also another possible notion of "stationarity", which would go as follows. Pick our favorite node \( i \), and define its "radius-1" neighborhood as the subgraph among \( i \) and its neighbors. We could say that the distribution is radius-1 stationary if the distributions of the radius-1 neighborhoods of any two nodes \( i \) and \( j \) are equal (up to isomorphism). We define the radius-\( k \) neighborhood around \( i \) recursively, as the subgraph of all nodes whose distance to \( i \) is \( \leq k \), and similarly stationarity out to radius \( k \), and finally overall stationarity as stationarity out to arbitrarily large radii. (This is a little bit more like how we define stationarity for random fields.) I find this notion a bit less satisfying, because it seems more dependent on the randomly-generated graph, but on the other hand my first notion invoked an ordering of the nodes pulled from (to be polite) the air.
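For concreteness, here is a sketch of extracting the radius-\( k \) neighborhood this second notion needs: breadth-first search out to distance \( k \) from \( i \), then take the induced subgraph.

```python
from collections import deque
import numpy as np

def radius_k_neighborhood(G, i, k):
    """Induced subgraph on all nodes at graph distance <= k from node i."""
    n = len(G)
    dist = np.full(n, -1)
    dist[i] = 0
    queue = deque([i])
    while queue:
        v = queue.popleft()
        if dist[v] == k:                     # don't expand past radius k
            continue
        for u in np.flatnonzero(G[v]):
            if dist[u] < 0:
                dist[u] = dist[v] + 1
                queue.append(u)
    nodes = np.flatnonzero(dist >= 0)
    return G[np.ix_(nodes, nodes)]
```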
Finally, I should add that my own contribution to the sparse-graph-models literature, with Neil Spencer, doesn't invoke either of these notions of symmetry --- we started with a latent-space generative model and showed it had good properties. (It's stationarity in the latent space.) Tackling the issue from the side of the symmetry is, as I said, something I've played around with for some years, but haven't made much headway with, hence this addition to this notebook.
- See also:
- Characterizing Mixtures of Processes by Summarizing Statistics
- Exchangeable Random Sequences
- Graph Theory
- Recommended, big picture:
- Christian Borgs, Jennifer Chayes, László Lovász, Vera T. Sós, Balázs Szegedy and Katalin Vesztergombi, "Graph Limits and Parameter Testing", Proceedings of the 38th Annual ACM Symposium on the Theory of Computing [STOC 2006], pp. 261--270 [PDF reprint via Dr. Chayes]
- Persi Diaconis and Svante Janson, "Graph Limits and Exchangeable Random Graphs", Rendiconti di Matematica e delle sue Applicazioni 28 (2008): 33--61, arxiv:0712.2749
- Olav Kallenberg, Probabilistic Symmetries and Invariance Principles [Chapter 7 has the best treatment of exchangeable arrays I've seen. The key results are due to Aldous and Hoover in the early 1980s, but their proofs are notoriously hard, and Kallenberg provided the first "natural", probabilistic proofs.]
- László Lovász
- "Very large graphs", arxiv:0902.0132
- Large Networks and Graph Limits
- Steffen L. Lauritzen
- "Exchangeable Rasch Matrices", Rendiconti di Matematica e delle sue Applicazioni 28 (2008): 83--95 [PDF reprint via Prof. Lauritzen]
- "Exchangeable Matrices and Random Networks", [PDF slides; earlier lectures (1, 2) probably useful for context]
- Patrick J. Wolfe, Sofia C. Olhede, "Nonparametric graphon estimation", arxiv:1309.5936
- Recommended, close-ups, the general theory of graph limits and exchangeable random graphs:
- David J. Aldous, "Representations for partially exchangeable arrays of random variables", Journal of Multivariate Analysis 11 (1981): 581--598
- Christian Borgs, Jennifer Chayes and László Lovász, "Moments of Two-Variable Functions and the Uniqueness of Graph Limits", Geometric and Functional Analysis 19 (2010): 1597--1619 [PDF preprint]
- Christian Borgs, Jennifer Chayes, László Lovász, Vera T. Sós and Katalin Vesztergombi, "Convergent Sequences of Dense Graphs I: Subgraph Frequencies, Metric Properties and Testing", Advances in Mathematics 219 (2008): 1801--1851 [PDF reprint via Dr. Chayes]
- Christian Borgs, Jennifer Chayes, László Lovász, Vera T. Sós and Katalin Vesztergombi, "Convergent Sequences of Dense Graphs II: Multiway Cuts and Statistical Physics" [PDF preprint via Dr. Borgs]
- Olav Kallenberg, "On the representation theorem for exchangeable arrays", Journal of Multivariate Analysis 30 (1989): 137--154
- Steffen Lauritzen, "Harmonic Analysis of Symmetric Random Graphs", arxiv:1908.06456 [This is an alternative way of getting to graphons and graph limits, by exploiting a correspondence between exchangeable distributions and "characters" on Abelian semi-groups, i.e., functions which act like exponentials. From this point of view, graphons are a more natural (generalized) exponential family for networks than are exponential-family random graphs. This relates to work Prof. Lauritzen has done on statistical sufficiency and generalized exponential families in other areas of statistics, linked to below.]
- László Lovász, Balázs Szegedy, "Limits of dense graph sequences", arxiv:math/0408173 [The original graph-limits paper. Note especially theorem 2.5, which shows that the probability of \( t(H,G^n) \) being very different from the limiting value is exponentially small in \( n \).]
- Recommended, close-ups, graphon estimation:
- Edoardo M. Airoldi, Thiago B. Costa, Stanley H. Chan, "Stochastic blockmodel approximation of a graphon: Theory and consistent estimation", arxiv:1311.1731
- Peter J. Bickel, Aiyou Chen, and Elizaveta Levina, "The method of moments and degree distributions for network models", Annals of Statistics 39 (2011): 38--59, arxiv:1202.5101
- Stanley H. Chan, Edoardo M. Airoldi, "A Consistent Histogram Estimator for Exchangeable Graph Models", arxiv:1402.1888
- Sourav Chatterjee, "Matrix estimation by Universal Singular Value Thresholding", arxiv:1212.1247
- David S. Choi, Patrick J. Wolfe, "Co-clustering separately exchangeable network data", arxiv:1212.4093
- Olav Kallenberg, "Multivariate Sampling and the Estimation Problem for Exchangeable Arrays", Journal of Theoretical Probability 12 (1999): 859--883
- James Robert Lloyd, Peter Orbanz, Zoubin Ghahramani and Daniel M. Roy, "Random function priors for exchangeable arrays with applications to graphs and relational data", NIPS 2012
- M. E. J. Newman and Tiago P. Peixoto, "Generalized communities in networks", Physical Review Letters 115 (2015): 088701, arxiv:1505.07478
- Recommended, close-ups, the issue of sparsity:
- Christian Borgs, Jennifer T. Chayes, Henry Cohn, and Yufei Zhao, "An \( L^p \) Theory of Sparse Graph Convergence I: Limits, Sparse Random Graph Models, and Power Law Distributions", arxiv:1401.2906
- Christian Borgs, Jennifer Chayes and David Gamarnik, "Convergent sequences of sparse graphs: A large deviations approach", arxiv:1302.4615 [Defining the limit of a sequence of sparse graphs in terms of large deviations of random measures on them]
- Francois Caron, Emily B. Fox, "Bayesian nonparametric models of sparse and exchangeable random graphs", arxiv:1401.1137
- David Gamarnik, "Right-convergence of sparse random graphs", arxiv:1202.3123
- Recommended, close-ups, tangents touched on above:
- J. F. C. Kingman, "Uses of Exchangeability", Annals of Probability 6 (1978): 183--197
- Steffen L. Lauritzen
- "Extreme Point Models in Statistics" (with discussion), Scandinavian Journal of Statistics 11 (1984): 65--91 [JSTOR]
- Extremal Families and Systems of Sufficient Statistics [Mini-review]
- Recommended, close-ups, not otherwise classified but no less valuable on that account:
- Peter J. Bickel and Aiyou Chen, "A nonparametric view of network models and Newman-Girvan and other modularities", Proceedings of the National Academy of Sciences (USA) 106 (2009): 21068--21073 [This is the paper which introduced me, and many others in the network area, to the possibility of using graph-limit and exchangeable-array theory, but in retrospect it is by no means an easy read.]
- Sourav Chatterjee, Persi Diaconis and Allan Sly, "Random graphs with a given degree sequence", Annals of Applied Probability 21 (2011): 1400--1435, arxiv:1005.1136 [Interesting application of the new technology of graph limits to a classic model. May not be terribly practical yet but definitely promising.]
- Sourav Chatterjee and S. R. S. Varadhan, "The large deviation principle for the Erdos-Renyi random graph", arxiv:1008.1946 [Ditto]
- Pride compels me to recommend:
- Lawrence Wang, Network Comparisons using Sample Splitting [Ph.D. thesis, CMU Department of Statistics, 2016]
- Modesty forbids me to recommend:
- Alden Green and CRS, "Bootstrapping Exchangeable Random Graphs", Electronic Journal of Statistics 16 (2022): 1058--1095, arxiv:1711.00813
- CRS, 36-781, Advanced Statistical Network Models, fall 2016
- Neil Spencer and CRS, "Projective, Sparse, and Learnable Latent Position Network Models", Annals of Statistics 51 (2023): 2506--2525, arxiv:1709.09702
- To read:
- Miklós Abért, Tamás Hubai, "Benjamini-Schramm convergence and the distribution of chromatic roots for sparse graphs", arxiv:1201.3861
- David Aldous, Russell Lyons, "Processes on Unimodular Random Networks", arxiv:math/0603062
- Tim Austin, Dmitry Panchenko, "A hierarchical version of the de Finetti and Aldous-Hoover representations", arxiv:1301.1259
- Itai Benjamini, Russell Lyons, Oded Schramm, "Unimodular Random Trees", arxiv:1207.1752
- Béla Bollobás, Svante Janson and Oliver Riordan, "The Phase Transition in Inhomogeneous Random Graphs"
- Béla Bollobás and Oliver Riordan, "Sparse graphs: metrics and random models", arxiv:0812.2656
- Marián Boguñá and Romualdo Pastor-Satorras, "Class of correlated random networks with hidden variables", Physical Review E 68 (2003): 036112, arxiv:cond-mat/0306072
- Christian Borgs, Jennifer T. Chayes, Souvik Dhara, Subhabrata Sen, "Limits of Sparse Configuration Models and Beyond: Graphexes and Multi-Graphexes", arxiv:1907.01605
- Christian Borgs, Jennifer T. Chayes, Henry Cohn, Shirshendu Ganguly, "Consistent nonparametric estimation for heavy-tailed sparse graphs", arxiv:1508.06675
- Christian Borgs, Jennifer T. Chayes, Henry Cohn, László Miklós Lovász, "Identifiability for graphexes and the weak kernel metric", arxiv:1804.03277
- Christian Borgs, Jennifer Chayes, Julia Gaudio, Samantha Petti, Subhabrata Sen, "A large deviation principle for block models", arxiv:2007.14508
- Fan Chung, "From quasirandom graphs to graph limits and graphlets", arxiv:1203.2269
- Harry Crane, "Infinitely exchangeable random graphs generated from a Poisson point process on monotone sets and applications to cluster analysis for networks", arxiv:1110.4088
- Persi Diaconis, Susan Holmes and Svante Janson, "Threshold Graph Limits and Random Threshold Graphs", arxiv:0908.2448
- Mahya Ghandehari, Teddy Mishura, "Robust recovery of Robinson Lp-graphons", arxiv:2303.16598 [We may have been scooped...]
- Jan Grebik, Oleg Pikhurko, "Large deviation principles for graphon sampling", arxiv:2311.06531
- Rajat Subhra Hazra, Frank den Hollander, Maarten Markering, "Large deviation principle for the norm of the Laplacian matrix of inhomogeneous Erdos-Renyi random graphs", arxiv:2307.02324
- Pavol Hell and Jaroslav Nesetril, Graphs and Homomorphisms
- Tue Herlau, Mikkel N. Schmidt, Morten Morup, "Completely random measures for modelling block-structured networks", arxiv:1507.02925
- Brian Karrer, M. E. J. Newman, "Random graphs containing arbitrary distributions of subgraphs", arxiv:1005.1659 [Not sure if this really connects or not...]
- P. Latouche, S. Robin, "Bayesian Model Averaging of Stochastic Block Models to Estimate the Graphon Function and Motif Frequencies in a W-graph Model", arxiv:1310.6150
- Tâm Le Minh, Sophie Donnet, François Massol, Stéphane Robin, "Hoeffding-type decomposition for U-statistics on bipartite networks", arxiv:2308.14518
- A. Martina Neuman, Jason J. Bramburger, "Transferability of Graph Neural Networks using Graphon and Sampling Theories", arxiv:2307.13206
- Terence Tao, "A correspondence principle between (hyper)graph theory and probability theory, and the (hyper)graph removal lemma", arxiv:math/0602037
- Johan Ugander, Lars Backstrom, Jon Kleinberg, "Subgraph Frequencies: Mapping the Empirical and Extremal Geography of Large Graph Collections", arxiv:1304.1548
- To write:
- CRS + co-conspirators to be named later, "Detecting Differences in Network Structure"
- Co-conspirators to be named later + CRS, "Smoothing Networks"