Maximum-entropy random graph model

Maximum-entropy random graph models are random graph models used to study complex networks subject to the principle of maximum entropy under a set of structural constraints,[1] which may be global, distributional, or local.

Overview

Any random graph model (at a fixed set of parameter values) results in a probability distribution on graphs, and those that are maximum entropy within the considered class of distributions have the special property of being maximally unbiased null models for network inference[2] (e.g. biological network inference). Each model defines a family of probability distributions on the set of graphs of size [math]\displaystyle{ n }[/math] (for each [math]\displaystyle{ n\gt n_0 }[/math] for some finite [math]\displaystyle{ n_0 }[/math]), parameterized by a collection of constraints on [math]\displaystyle{ J }[/math] observables [math]\displaystyle{ \{Q_j(G)\}_{j=1}^J }[/math] defined for each graph [math]\displaystyle{ G }[/math] (such as fixed expected average degree, degree distribution of a particular form, or specific degree sequence), enforced in the graph distribution alongside entropy maximization by the method of Lagrange multipliers. Note that in this context "maximum entropy" refers not to the entropy of a single graph, but rather the entropy of the whole probabilistic ensemble of random graphs.

Several commonly studied random network models are in fact maximum entropy, for example the ER graphs [math]\displaystyle{ G(n,m) }[/math] and [math]\displaystyle{ G(n,p) }[/math] (which each have one global constraint on the number of edges), as well as the configuration model (CM)[3] and soft configuration model (SCM) (which each have [math]\displaystyle{ n }[/math] local constraints, one for the degree of each vertex). In the two pairs of models mentioned above, an important distinction[4][5] is in whether the constraint is sharp (i.e. satisfied by every element of the set of size-[math]\displaystyle{ n }[/math] graphs with nonzero probability in the ensemble) or soft (i.e. satisfied on average across the whole ensemble). The former (sharp) case corresponds to a microcanonical ensemble,[6] the condition of maximum entropy yielding all graphs [math]\displaystyle{ G }[/math] satisfying [math]\displaystyle{ Q_j(G)=q_j\ \forall j }[/math] as equiprobable; the latter (soft) case is canonical,[7] producing an exponential random graph model (ERGM).

| Model | Constraint type | Constraint variable | Probability distribution |
|-------|-----------------|---------------------|---------------------------|
| ER, [math]\displaystyle{ G(n,m) }[/math] | Sharp, global | Total edge count [math]\displaystyle{ \vert E(G)\vert }[/math] | [math]\displaystyle{ 1/\binom{\binom{n}{2}}{m}; \ m=\vert E(G)\vert }[/math] |
| ER, [math]\displaystyle{ G(n,p) }[/math] | Soft, global | Expected total edge count [math]\displaystyle{ \langle \vert E(G)\vert \rangle }[/math] | [math]\displaystyle{ p^{\vert E(G)\vert}(1-p)^{\binom{n}{2}-\vert E(G)\vert} }[/math] |
| Configuration model | Sharp, local | Degree of each vertex, [math]\displaystyle{ \{\hat{k}_j\}_{j=1}^n }[/math] | [math]\displaystyle{ 1/\left\vert\Omega(\{\hat{k}_j\}_{j=1}^n)\right\vert; \ \Omega(\{\hat{k}_j\}_{j=1}^n)=\{g\in \mathcal{G}_n:k_j(g)=\hat{k}_j\ \forall j\}\subset \mathcal{G}_n }[/math] |
| Soft configuration model | Soft, local | Expected degree of each vertex, [math]\displaystyle{ \{\hat{k}_j\}_{j=1}^n }[/math] | [math]\displaystyle{ Z^{-1}\exp\left[-\sum_{j=1}^n\psi_j k_j(G)\right]; \ -\frac{\partial\ln Z}{\partial \psi_j} = \hat{k}_j }[/math] |
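
The first three ensembles in the table can be sampled with standard routines from the networkx library; for the soft configuration model, a sampler has to be written by hand from the Bernoulli edge probabilities implied by its exponential form. The sketch below assumes the Lagrange multipliers [math]\displaystyle{ \psi_j }[/math] are already known (in practice they are fitted so that the expected degrees match the targets [math]\displaystyle{ \hat{k}_j }[/math]); all parameter values are illustrative.

```python
# Sampling sketches for the four ensembles in the table above.
import math
import random
import networkx as nx

n, m, p = 20, 40, 0.2

G_nm = nx.gnm_random_graph(n, m)   # ER G(n,m): sharp, global (exactly m edges)
G_np = nx.gnp_random_graph(n, p)   # ER G(n,p): soft, global (m edges on average)

# Configuration model: sharp, local. networkx's variant returns a
# multigraph that may contain self-loops and parallel edges.
G_cm = nx.configuration_model([3] * n)

def soft_configuration_model(psi):
    """Soft configuration model: each edge (i, j) is present independently
    with probability 1 / (1 + exp(psi_i + psi_j)), the Bernoulli marginal
    implied by P(G) proportional to exp(-sum_j psi_j k_j(G))."""
    G = nx.empty_graph(len(psi))
    for i in range(len(psi)):
        for j in range(i + 1, len(psi)):
            if random.random() < 1.0 / (1.0 + math.exp(psi[i] + psi[j])):
                G.add_edge(i, j)
    return G

G_scm = soft_configuration_model([1.0] * n)  # illustrative multipliers
```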

Canonical ensemble of graphs (general framework)

Suppose we are building a random graph model consisting of a probability distribution [math]\displaystyle{ \mathbb{P}(G) }[/math] on the set [math]\displaystyle{ \mathcal{G}_n }[/math] of simple graphs with [math]\displaystyle{ n }[/math] vertices. The Gibbs entropy [math]\displaystyle{ S[G] }[/math] of this ensemble will be given by

[math]\displaystyle{ S[G]=-\sum_{G\in \mathcal{G}_n}\mathbb{P}(G)\log\mathbb{P}(G). }[/math]
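
As a concrete check of this formula, consider the [math]\displaystyle{ G(n,p) }[/math] ensemble from the table above: its [math]\displaystyle{ \binom{n}{2} }[/math] possible edges are independent Bernoulli([math]\displaystyle{ p }[/math]) variables, so the ensemble entropy factorizes into [math]\displaystyle{ \binom{n}{2} }[/math] copies of the binary entropy [math]\displaystyle{ h(p) = -p\log p - (1-p)\log(1-p) }[/math]. A minimal Python sketch verifying this by brute-force enumeration (illustrative only; the enumeration is exponential in the number of vertex pairs):

```python
# Brute-force check that the Gibbs entropy of the G(n, p) ensemble
# equals binom(n, 2) * h(p), with h the binary entropy function.
import math
from itertools import product

n, p = 4, 0.3
slots = math.comb(n, 2)                      # number of possible edges

S = 0.0
for edges in product([0, 1], repeat=slots):  # every graph on n labelled vertices
    m = sum(edges)
    prob = p**m * (1 - p)**(slots - m)       # P(G) for the soft ER model
    S -= prob * math.log(prob)

h = -p * math.log(p) - (1 - p) * math.log(1 - p)
assert math.isclose(S, slots * h)
```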

We would like the ensemble-averaged values [math]\displaystyle{ \{\langle Q_j \rangle\}_{j=1}^J }[/math] of observables [math]\displaystyle{ \{Q_j(G)\}_{j=1}^J }[/math] (such as average degree, average clustering, or average shortest path length) to be tunable, so we impose [math]\displaystyle{ J }[/math] "soft" constraints on the graph distribution:

[math]\displaystyle{ \langle Q_j \rangle = \sum_{G\in \mathcal{G}_n}\mathbb{P}(G)Q_j(G) = q_j, }[/math]

where [math]\displaystyle{ j=1,...,J }[/math] label the constraints. Application of the method of Lagrange multipliers to determine the distribution [math]\displaystyle{ \mathbb{P}(G) }[/math] that maximizes [math]\displaystyle{ S[G] }[/math] while satisfying [math]\displaystyle{ \langle Q_j \rangle=q_j }[/math], and the normalization condition [math]\displaystyle{ \sum_{G\in \mathcal{G}_n}\mathbb{P}(G)=1 }[/math] results in the following:[1]

[math]\displaystyle{ \mathbb{P}(G) = \frac{1}{Z}\exp\left[-\sum_{j=1}^J\psi_j Q_j(G)\right], }[/math]

where [math]\displaystyle{ Z }[/math] is a normalizing constant (the partition function) and [math]\displaystyle{ \{\psi_j\}_{j=1}^J }[/math] are parameters (Lagrange multipliers) coupled to the correspondingly indexed graph observables, which may be tuned to yield graph samples with desired average values of those properties. The result is an exponential family and a canonical ensemble; specifically, it is an exponential random graph model (ERGM).
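
For general observables neither the partition function [math]\displaystyle{ Z }[/math] nor the multipliers [math]\displaystyle{ \{\psi_j\}_{j=1}^J }[/math] have closed forms, so members of the ensemble are usually drawn by Markov chain Monte Carlo. The following is a minimal sketch of the standard single-edge-toggle Metropolis chain for this distribution; the observable and multiplier in the usage line are illustrative assumptions, and no burn-in or convergence diagnostics are shown.

```python
# Single-edge-toggle Metropolis sampler for the canonical ensemble
# P(G) proportional to exp(-sum_j psi_j Q_j(G)) on simple graphs with
# n vertices. Only the change in the exponent enters the acceptance
# ratio, so the intractable constant Z cancels.
import math
import random
import networkx as nx

def ergm_sample(n, observables, psi, steps=10_000):
    """observables: list of functions Q_j(G); psi: matching multipliers.
    Recomputes every Q_j from scratch each step, which is fine for a
    sketch but far too slow for real use."""
    G = nx.empty_graph(n)
    energy = sum(s * Q(G) for s, Q in zip(psi, observables))
    for _ in range(steps):
        i, j = random.sample(range(n), 2)   # propose toggling edge (i, j)
        if G.has_edge(i, j):
            G.remove_edge(i, j)
        else:
            G.add_edge(i, j)
        new_energy = sum(s * Q(G) for s, Q in zip(psi, observables))
        if random.random() < math.exp(min(0.0, energy - new_energy)):
            energy = new_energy             # accept the toggle
        elif G.has_edge(i, j):              # reject: undo the toggle
            G.remove_edge(i, j)
        else:
            G.add_edge(i, j)
    return G

# With the single observable Q(G) = |E(G)| this chain targets G(n, p)
# with p = 1 / (1 + e^psi), matching the table above.
G = ergm_sample(30, [lambda g: g.number_of_edges()], [1.0])
```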

The Erdős–Rényi model [math]\displaystyle{ G(n,m) }[/math]

In the canonical framework above, constraints were imposed on ensemble-averaged quantities [math]\displaystyle{ \langle Q_j \rangle }[/math]. Although these properties will on average take on values specifiable by appropriate setting of [math]\displaystyle{ \{\psi_j\}_{j=1}^J }[/math], any individual sample [math]\displaystyle{ G }[/math] may have [math]\displaystyle{ Q_j(G)\ne q_j }[/math], which may be undesirable. Instead, we may impose a much stricter condition: every graph with nonzero probability must satisfy [math]\displaystyle{ Q_j(G)= q_j }[/math] exactly. Under such "sharp" constraints, the maximum-entropy distribution can still be determined, as we illustrate with the Erdős–Rényi model [math]\displaystyle{ G(n,m) }[/math].

The sharp constraint in [math]\displaystyle{ G(n,m) }[/math] is that of a fixed number of edges [math]\displaystyle{ m }[/math],[8] that is [math]\displaystyle{ |\operatorname E(G)|=m }[/math], for all graphs [math]\displaystyle{ G }[/math] drawn from the ensemble (instantiated with a probability denoted [math]\displaystyle{ \mathbb{P}_{n,m}(G) }[/math]). This restricts the sample space from [math]\displaystyle{ \mathcal{G}_n }[/math] (all graphs on [math]\displaystyle{ n }[/math] vertices) to the subset [math]\displaystyle{ \mathcal{G}_{n,m}=\{g\in\mathcal{G}_n;|\operatorname E(g)|=m\}\subset \mathcal{G}_n }[/math]. This is in direct analogy to the microcanonical ensemble in classical statistical mechanics, wherein the system is restricted to a thin manifold in the phase space of all states of a particular energy value.

Upon restricting our sample space to [math]\displaystyle{ \mathcal{G}_{n,m} }[/math], we have no external constraints (besides normalization) to satisfy, so we select [math]\displaystyle{ \mathbb{P}_{n,m}(G) }[/math] to maximize [math]\displaystyle{ S[G] }[/math] without the use of Lagrange multipliers. It is well known that the entropy-maximizing distribution in the absence of external constraints is the uniform distribution over the sample space (see maximum entropy probability distribution), from which we obtain:

[math]\displaystyle{ \mathbb{P}_{n,m}(G)=\frac{1}{|\mathcal{G}_{n,m}|}=\binom{\binom{n}{2}}{m}^{-1}, }[/math]

where the last expression in terms of binomial coefficients is the number of ways to place [math]\displaystyle{ m }[/math] edges among [math]\displaystyle{ \binom{n}{2} }[/math] possible edges, and thus is the cardinality of [math]\displaystyle{ \mathcal{G}_{n,m} }[/math].
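
The count, and hence the uniform probability, can be checked directly for small parameter values (the values of [math]\displaystyle{ n }[/math] and [math]\displaystyle{ m }[/math] below are arbitrary):

```python
# Verify |G_{n,m}| = binom(binom(n, 2), m) by direct enumeration.
import math
from itertools import combinations

n, m = 5, 4
slots = list(combinations(range(n), 2))         # the binom(n, 2) possible edges
count = sum(1 for _ in combinations(slots, m))  # graphs with exactly m edges
assert count == math.comb(math.comb(n, 2), m)
print(1 / count)                                # P_{n,m}(G) for each such graph
```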

Generalizations

A variety of maximum-entropy ensembles have been studied on generalizations of simple graphs, including ensembles of simplicial complexes[9] and weighted random graphs with a given expected degree sequence.[10]

References

  1. Park, Juyong; Newman, M. E. J. (2004). "The statistical mechanics of networks". Physical Review E 70 (6): 066117. doi:10.1103/PhysRevE.70.066117. 
  2. van der Hoorn, Pim; Gabor Lippner; Dmitri Krioukov (2017-10-10). "Sparse Maximum-Entropy Random Graphs with a Given Power-Law Degree Distribution". 
  3. Newman, Mark (2010). Networks: An Introduction. Oxford University Press. doi:10.1093/acprof:oso/9780199206650.001.0001. ISBN 9780199206650. https://cds.cern.ch/record/1281254. Retrieved 2018-09-13. 
  4. Garlaschelli, Diego; den Hollander, Frank; Roccaverde, Andrea (2018-07-13). "Covariance Structure Behind Breaking of Ensemble Equivalence in Random Graphs". Journal of Statistical Physics 173 (3–4): 644–662. doi:10.1007/s10955-018-2114-x. ISSN 0022-4715. Bibcode: 2018JSP...173..644G. 
  5. Roccaverde, Andrea (August 2018). "Is breaking of ensemble equivalence monotone in the number of constraints?". Indagationes Mathematicae 30: 7–25. doi:10.1016/j.indag.2018.08.001. ISSN 0019-3577. 
  6. Bianconi, G. (2018-08-21). Multilayer Networks: Structure and Function. Oxford University Press. ISBN 9780198753919. https://global.oup.com/academic/product/multilayer-networks-9780198753919?cc=us&lang=en&. Retrieved 2018-09-13. 
  7. Anand, K.; Bianconi, G. (2009). "Entropy measures for networks: Toward an information theory of complex topologies". Physical Review E 80 (4): 045102. doi:10.1103/PhysRevE.80.045102. PMID 19905379. Bibcode: 2009PhRvE..80d5102A. 
  8. Erdős, P.; Rényi, A. (1959). "On Random Graphs. I". Publicationes Mathematicae 6 (3–4): 290–297. doi:10.5486/PMD.1959.6.3-4.12. http://www.renyi.hu/~p_erdos/1959-11.pdf. Retrieved 2018-09-13. 
  9. Zuev, Konstantin; Or Eisenberg; Dmitri Krioukov (2015-10-29). "Exponential Random Simplicial Complexes". 
  10. Hillar, Christopher; Andre Wibisono (2013-08-26). "Maximum entropy distributions on graphs".