Source URL: https://arxiv.org/html/2306.00833
Published Time: Tue, 25 Nov 2025 02:03:01 GMT
When does bottom-up beat top-down in hierarchical community detection?
Maximilien Dreveton, Daichi Kuroda, Matthias Grossglauser, Patrick Thiran
{maximilien.dreveton,daichi.kuroda,matthias.grossglauser,patrick.thiran}@epfl.ch
École Polytechnique Fédérale de Lausanne (EPFL)
(September 15, 2025)
Abstract
Hierarchical community detection consists of finding a tree of communities in which deeper levels of the hierarchy reveal finer-grained structures. There are two main classes of algorithms for this task. Divisive (top-down) algorithms recursively partition the nodes into smaller communities until a stopping criterion indicates that no further splits are necessary. In contrast, agglomerative (bottom-up) algorithms first identify the smallest community structures and then repeatedly merge communities using a linkage method. In this work, we prove that a bottom-up algorithm recovers the hierarchy of a hierarchical stochastic block model (HSBM) whenever the average degree grows unbounded. We also establish the information-theoretic threshold for exact recovery at intermediate depths of the hierarchy and highlight its significance for understanding the limitations of top-down algorithms. Numerical experiments on both synthetic and real datasets demonstrate the superiority of bottom-up methods. In particular, a notable drawback of top-down algorithms is their tendency to produce dendrograms with inversions. These findings contribute to a better understanding of hierarchical clustering techniques and their applications in network analysis.
Keywords: Community detection; hierarchical clustering; stochastic block model; agglomerative clustering.
Contents

1 Introduction
2 Hierarchical Community Detection
  2.1 Divisive (top-down) Algorithms
  2.2 Agglomerative (bottom-up) Algorithms
3 Tree Recovery from the Bottom
  3.1 Hierarchical Stochastic Block Model
  3.2 Tree Recovery with Growing Average Degree
  3.3 Tree Recovery with Bounded Average Degree
4 Exact Recovery at Intermediate Depths
  4.1 Chernoff-Hellinger Divergence
  4.2 Exact Recovery at Intermediate Depths
5 Discussion
  5.1 Previous Work on Exact Recovery in HSBM
  5.2 Top-Down HCD and Exact Recovery at Intermediate Depth
  5.3 Previous Work on Exact Recovery at Intermediate Depth
6 Numerical Results
  6.2 Real Datasets
    6.2.1 High-School Contact Dataset
    6.2.2 Power Grid
A Proofs for Section 3
  A.1 Proof of Theorem 1
  A.2 Proofs for Section 3.3
B Proofs of Sections 4 and 5
  B.1 Proof of Theorem 3
  B.2 Proof of Lemma 1
  B.3 Comparing Top-down versus Bottom-up Conditions
C Additional Numerical Experiments
  C.1 Synthetic Data Sets
  C.2 Real Data Sets
    C.2.1 Military Inter-alliance
    C.2.2 Football Data Set
1 Introduction
A system of pairwise interactions among entities can conveniently be represented by a graph, where the entities are the nodes and the interactions are the edges. Data collected in this form is increasingly abundant in many disciplines, such as sociology, physics, economics, and biology Newman2018_networksbook. Finding community structures, by grouping nodes with similar connection patterns into clusters, is one of the most important statistical analysis tasks on networks fortunato2010community; avrachenkov2022statistical. Community structures are often hierarchical. For example, in a co-authorship network, we can partition the researchers based on their primary discipline (such as mathematics, physics, or computer science); each of these fields can be further split into specific sub-disciplines. The sub-division of larger communities into smaller ones provides a finer division of the network.
Hierarchical communities can naturally be inferred using top-down approaches, where the process begins by identifying the largest communities at the top of the hierarchy. These communities are then recursively decomposed into smaller sub-communities until a stopping rule indicates no further division is necessary. To identify the communities at the top of the hierarchy, one strategy is to progressively remove edges with the highest edge-betweenness centrality girvan2002community or with the lowest edge-clustering coefficient radicchi2004defining. Another strategy is to bi-partition the network by using spectral clustering dasgupta2006spectral; balakrishnan2011noise; li2022hierarchical. Top-down algorithms, however, have several limitations. First, any clustering errors made early on become locked in and propagate to subsequent rounds of subdivision, potentially compromising the accuracy of the predicted hierarchy. Second, the recursive splits can overlook valuable information by ignoring the edges between communities that have already been separated. As a result, these approaches may not fully capture the interconnections between communities across different depths of the hierarchy.
In contrast, agglomerative (or bottom-up) algorithms take a different approach by constructing the hierarchy from the bottom upwards. These algorithms recursively merge smaller communities to form larger ones. Some bottom-up algorithms, such as those proposed in bonald2018hierarchical; pons2005computing; chang2011general, generate a complete dendrogram. A dendrogram is a tree whose leaves are individual nodes and whose internal nodes represent merged clusters; the length of each branch measures the similarity of its children.
Comparing different hierarchical community detection (HCD) algorithms can be done by establishing theoretical guarantees of their performance on random graph models. The hierarchical stochastic block model (HSBM) is a general model of random graphs containing hierarchical communities. This model defines the hierarchical community structure as a rooted binary tree whose leaves correspond to primitive communities. Each node belongs to a primitive community, and the interactions between two nodes belonging to communities $a$ and $b$ depend only on the lowest common ancestor of $a$ and $b$ in the tree.
We first establish that recovering the hierarchy of an HSBM is possible using a bottom-up algorithm when the average degree of the graph is only $\omega(1)$. Earlier studies required stronger degree growth conditions, such as $\Theta(N)$ in balakrishnan2011noise, $\omega(N^{1/2}\log^{p}N)$ (with $p=1/2$ in cohen2017hierarchical and $p=2$ in lyzinski2016community), or $\omega(\log^{1+p}N)$ (with $p=5$ for dasgupta2006spectral and fixed $p>1$ for li2022hierarchical). We further discuss these results in Section 5.1.
In the context of hierarchical communities, it is possible to study community recovery at different depths of the hierarchy. We rigorously establish the information-theoretic threshold for the exact recovery of the communities at any intermediate depth of the hierarchy. Exact recovery is the strongest notion of recovery: it is the ability to fully recover the communities when the number of nodes $N$ tends to infinity. In particular, we show that exact recovery at a larger depth is more challenging than exact recovery at a smaller depth. We further discuss the implication of this information-theoretic threshold for the success of top-down algorithms in Section 5.2.
In our numerical experiments, we employ a bottom-up algorithm that follows a two-step process. To infer the underlying primitive communities, we first apply a spectral algorithm on the graph’s Bethe-Hessian matrix saade2014spectral; dall2021unified. Next, we use a bottom-up approach to build the hierarchy. To provide a comprehensive comparison, we evaluate this bottom-up algorithm against recursive spectral bi-partitioning lei2020unified; lei2021consistency, the most relevant top-down algorithm whose performance has been theoretically established and numerically validated. Our findings on synthetic data sets demonstrate that the bottom-up algorithm achieves exact recovery at intermediate depths, up to the information-theoretic thresholds, whereas recursive spectral bi-partitioning fails to do so. Furthermore, we show that the dendrogram produced by recursive spectral bi-partitioning suffers from inversions. Inversions occur when the algorithm incorrectly places a lower-depth cluster above a higher-depth cluster in the dendrogram, thus distorting the true hierarchical structure. Such inversions lead to misleading interpretations of the hierarchical relationships within the data.
The paper is structured as follows. In Section 2, we describe top-down and bottom-up approaches for hierarchical community detection. The HSBM is defined in Section 3, where we also derive conditions for recovering the hierarchy by the linkage procedure. Next, we investigate exact recovery at intermediate depths of the hierarchy in Section 4. We discuss these results in light of the existing literature in Section 5. Finally, Section 6 is devoted to the numerical experiments.
Notations
We denote by $[N]$ the set $\{1,\cdots,N\}$ and by $\operatorname{Ber}(p)$ a Bernoulli random variable with parameter $p$. The Frobenius norm of a matrix is denoted $\|\cdot\|_{F}$. The Rényi divergence of order $t\in(0,1)$ of a Bernoulli distribution $\operatorname{Ber}(p)$ from another distribution $\operatorname{Ber}(q)$ is defined as $\operatorname{D}_{t}\left(\operatorname{Ber}(p)\,\|\,\operatorname{Ber}(q)\right)=\frac{1}{t-1}\log\left((1-p)^{t}(1-q)^{1-t}+p^{t}q^{1-t}\right)$. For $t=1/2$, we write $\operatorname{D}_{1/2}\left(\operatorname{Ber}(p),\operatorname{Ber}(q)\right)=-2\log\left(\sqrt{(1-p)(1-q)}+\sqrt{pq}\right)$.
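These two expressions can be cross-checked numerically; a minimal sketch (function names are ours, not the paper's) verifying that the order-$1/2$ closed form agrees with the general definition:

```python
import math

def renyi_div_bernoulli(p: float, q: float, t: float) -> float:
    """Renyi divergence of order t in (0, 1) of Ber(p) from Ber(q)."""
    return math.log((1 - p) ** t * (1 - q) ** (1 - t) + p ** t * q ** (1 - t)) / (t - 1)

def hellinger_div_bernoulli(p: float, q: float) -> float:
    """Closed form at t = 1/2: -2 log(sqrt((1-p)(1-q)) + sqrt(pq))."""
    return -2 * math.log(math.sqrt((1 - p) * (1 - q)) + math.sqrt(p * q))

# The two expressions agree at t = 1/2, and the divergence vanishes at p = q.
assert abs(renyi_div_bernoulli(0.3, 0.05, 0.5) - hellinger_div_bernoulli(0.3, 0.05)) < 1e-12
assert abs(renyi_div_bernoulli(0.2, 0.2, 0.5)) < 1e-12
```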
We focus on undirected graphs $G=(V,E)$ whose node set is $V=[N]$, with adjacency matrix $A=(A_{ij})\in\{0,1\}^{N\times N}$. For a subset $V_{1}$ of the node set $V$, we let $G[V_{1}]$ be the subgraph of $G$ induced by $V_{1}$. We denote by $\mathcal{N}_{\mathcal{T}}$ (resp., $\mathcal{L}_{\mathcal{T}}$) the internal nodes (resp., the leaves) of a tree $\mathcal{T}$. For an internal node $u$ of a rooted tree $\mathcal{T}$, we denote by $\mathcal{T}[u]$ the sub-tree rooted at $u$. We denote by $\mathrm{lca}(u,v)$ the lowest common ancestor of two nodes $u,v\in\mathcal{T}$.
2 Hierarchical Community Detection
Many networks present a hierarchical community structure. The primitive communities at the bottom of the hierarchy are a collection of subsets $\mathcal{C}=(C_{1},\cdots,C_{K})$ that partition the original node set $V$ into $K$ disjoint sets. These primitive communities are the leaves of a rooted tree $\mathcal{T}$, which defines the hierarchical relationship between the communities. This section reviews two main strategies for hierarchical community detection (HCD).
2.1 Divisive (top-down) Algorithms
Divisive (top-down) HCD algorithms begin with a single community containing all the nodes. This community is recursively split until a selection rule indicates that no further splits are needed. This can be summarized as follows:

1. apply a selection rule to decide whether the community contains sub-communities; if no, stop; if yes, split it into two communities;
2. recursively repeat step 1 on each of the two sub-communities found.
Different choices for the stopping rule or the bi-partitioning algorithms have been explored dasgupta2006spectral; balakrishnan2011noise; li2022hierarchical.
Each recursive splitting of the graph loses some information. For example, consider the two clusters $C_0$ and $C_1$ obtained after the first split. The next step splits $C_0$ (resp., $C_1$) into two subclusters $C_{00}$ and $C_{01}$ (resp., $C_{10}$ and $C_{11}$), based only on the induced sub-graph $G[C_0]$ (resp., $G[C_1]$). As a result, the clustering of $C_0$ does not take into account the edges from $C_0$ to $C_1$. If the edge densities between $C_{00}$ and $C_1$ are different from the edge densities between $C_{01}$ and $C_1$, then this valuable information is not used by a top-down algorithm.
Furthermore, the resulting tree is unweighted. Although it is possible to compute similarities between pairs of predicted clusters (for example using the edge density between two clusters), these similarities are not guaranteed to increase with the depth of the dendrogram; inversions can occur. These inversions, as we will emphasize in the numerical section, lead to discrepancies in the hierarchical structure conveyed by the dendrogram.
2.2 Agglomerative (bottom-up) Algorithms
Agglomerative (bottom-up) HCD algorithms construct a sequence of clusterings in an ascending manner, where the dendrogram is created from its leaves to its root. The initial clustering corresponds to the leaves of the dendrogram, and the hierarchy is progressively built up by iteratively merging the most similar clusters.
A first approach initially assigns each node to a separate community and then merges the pair of clusters that minimizes a distance metric newman2004fast; pons2005computing; chang2011general; bonald2018hierarchical. Although these methods generate a complete dendrogram, determining the depth in the hierarchy at which the community structure becomes meaningful is a long-standing problem mojena1977hierarchical; gao2022selective.
A second approach directly estimates the bottom clusters by using a flat (non-hierarchical) graph-clustering algorithm. These bottom clusters are then sequentially merged two by two, based on a similarity measure. This can be summarized as follows:
1. apply a graph-clustering algorithm to find the bottom clusters;
2. then:
   (a) compute the similarity between all pairs of bottom clusters;
   (b) merge the two clusters that are the most similar,¹ and update the similarities between this new cluster and the existing ones;
   (c) repeat step (2b) until all clusters have been merged into a single one.

¹ When measuring the similarity between pairs of clusters using edge density, an implicit assumption is that the clustering structure exhibits assortativity, meaning that nodes within the same community are more likely to be connected. If the community structure is disassortative, the clusters with the lowest similarity are merged instead.
The abundant literature on graph clustering provides numerous candidate algorithms for the first stage. We note that, in most practical applications, the true number of bottom clusters $K$ is often unknown and needs to be inferred at this stage as well. The second stage is commonly called linkage. Various linkage variants exist, depending on the chosen similarity measures and update rules. In this paper, we adopt the linkage variant used in cohen2017hierarchical; cohen2019hierarchical, employing edge density as the similarity measure between pairs of clusters. The edge density between two node sets $V_1,V_2\subset V$ is defined by
$$\rho\left(V_{1},V_{2}\right) \;=\; \frac{w\left(V_{1},V_{2}\right)}{|V_{1}|\cdot|V_{2}|} \quad\text{where}\quad w\left(V_{1},V_{2}\right) \;=\; \sum_{i\in V_{1}}\sum_{j\in V_{2}}A_{ij}. \tag{2.1}$$
Recomputing the similarities in step (2b) is done as follows. Let $\widehat{C}_{1},\cdots,\widehat{C}_{\widehat{K}}$ be the clusters initially predicted by the flat graph-clustering algorithm. Suppose that at some step of the algorithm, clusters $\widehat{C}_{k_1}$ and $\widehat{C}_{k_2}$ are the most similar and hence are merged to give a new cluster $\widehat{C}_{k_1\cup k_2}=\widehat{C}_{k_1}\cup\widehat{C}_{k_2}$. We have, for $\ell\notin\{k_1,k_2\}$,
$$\rho\left(\widehat{C}_{k_1\cup k_2},\widehat{C}_{\ell}\right) \;=\; \frac{\bigl|\widehat{C}_{k_1}\bigr|}{\bigl|\widehat{C}_{k_1\cup k_2}\bigr|}\,\rho\left(\widehat{C}_{k_1},\widehat{C}_{\ell}\right)+\frac{\bigl|\widehat{C}_{k_2}\bigr|}{\bigl|\widehat{C}_{k_1\cup k_2}\bigr|}\,\rho\left(\widehat{C}_{k_2},\widehat{C}_{\ell}\right). \tag{2.2}$$
Therefore, the edge density naturally defines an average-linkage procedure for merging the clusters identified through the initial flat clustering. In particular, this procedure guarantees that the resulting dendrogram will be free from inversions, thus ensuring a coherent hierarchical structure murtagh2017algorithms. Algorithm 1 provides a concise summary of this process.
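The update rule (2.2) is an exact identity, not an approximation, because edge counts are additive under merging. A short numerical check on an arbitrary symmetric 0/1 graph (variable names are ours):

```python
import numpy as np

# Check update rule (2.2): the density from a merged cluster to a third
# cluster equals the size-weighted average of the individual densities.
rng = np.random.default_rng(0)
N = 30
A = rng.integers(0, 2, size=(N, N))
A = np.triu(A, 1)
A = A + A.T  # symmetric 0/1 adjacency matrix, zero diagonal

def rho(A, V1, V2):
    """Edge density (2.1) between node sets V1 and V2."""
    return A[np.ix_(V1, V2)].sum() / (len(V1) * len(V2))

C1, C2, C3 = list(range(0, 8)), list(range(8, 20)), list(range(20, 30))
merged = C1 + C2
lhs = rho(A, merged, C3)
rhs = (len(C1) / len(merged)) * rho(A, C1, C3) \
    + (len(C2) / len(merged)) * rho(A, C2, C3)
assert abs(lhs - rhs) < 1e-12
```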
Input: Graph $G=(V,E)$, partition $\widehat{\mathcal{C}}$ of $V$.
1. For all $k,\ell\in[\widehat{K}]$, compute $\rho(\widehat{C}_k,\widehat{C}_\ell)$ as in (2.1).
2. While $|\widehat{\mathcal{C}}|\geq 2$:
   - let $\widehat{\mathcal{C}}\leftarrow\widehat{\mathcal{C}}\setminus\{\widehat{C}_k,\widehat{C}_\ell\}$, where $\widehat{C}_k,\widehat{C}_\ell\in\operatorname{arg\,max}_{C\neq C'\in\widehat{\mathcal{C}}}\rho(C,C')$;
   - for any $C\in\widehat{\mathcal{C}}$, compute $\rho(\widehat{C}_{k\cup\ell},C)$ as defined in (2.2);
   - let $\widehat{\mathcal{C}}\leftarrow\widehat{\mathcal{C}}\cup\{\widehat{C}_{k\cup\ell}\}$.
Return: The tree $\widehat{\mathcal{T}}$ capturing the sequence of nested merges in the while-loop.
Algorithm 1: Average-linkage.
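The steps of Algorithm 1 can be sketched in a few lines of Python. This is a minimal illustration under our own naming, not the authors' implementation; for clarity, densities are recomputed directly from the adjacency matrix, which by (2.2) yields exactly the average-linkage values:

```python
import numpy as np

def average_linkage(A, init_clusters):
    """Sketch of Algorithm 1 (average-linkage). A is a symmetric 0/1
    adjacency matrix; init_clusters is a partition of the nodes given as a
    list of lists of node indices. Returns the merge sequence, where each
    entry is (cluster, cluster, edge density at the merge)."""
    clusters = [list(c) for c in init_clusters]

    def rho(i, j):  # edge density (2.1) between clusters i and j
        return A[np.ix_(clusters[i], clusters[j])].sum() / (
            len(clusters[i]) * len(clusters[j]))

    active = list(range(len(clusters)))
    merges = []
    while len(active) >= 2:
        # Pick the most similar pair of active clusters.
        i, j = max(((a, b) for ii, a in enumerate(active) for b in active[ii + 1:]),
                   key=lambda pair: rho(*pair))
        merges.append((clusters[i], clusters[j], rho(i, j)))
        clusters.append(clusters[i] + clusters[j])  # merged cluster
        active.remove(i)
        active.remove(j)
        active.append(len(clusters) - 1)
    return merges
```

On a graph with two dense pairs of clusters and no edges across, the two dense pairs are merged first (at density 1) and the final merge, joining the two halves, comes last at density 0, so the merge densities are non-increasing: no inversion occurs.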
3 Tree Recovery from the Bottom
We study the asymptotic performance of Algorithm 1 on a class of random graphs with hierarchical community structures. We define the model in Section 3.1. We prove that Algorithm 1 recovers the hierarchical tree when the average degree of the graph grows unbounded in Section 3.2. Finally, we study the bounded degree regime in Section 3.3.
3.1 Hierarchical Stochastic Block Model
The hierarchical stochastic block model (HSBM) is a class of random graphs whose nodes are partitioned into latent hierarchical communities. Before defining this model formally, let us introduce some notations. Each node $u$ of a rooted binary tree $\mathcal{T}$ is represented by a binary string as follows. The root is indexed by the empty string $\emptyset$. Each non-root node $u$ of the tree is labeled by the unique binary string $u=u_1u_2\cdots u_{|u|}$ that records the path from the root (i.e., $u_q=1$ if step $q$ of the path is along the right branch of the split and $u_q=0$ otherwise). The depth of node $u$ is denoted by $|u|$ and coincides with its distance from the root. Finally, with this parametrization, the lowest common ancestor $\mathrm{lca}(u,v)$ of two nodes $u,v\in\mathcal{T}$ is the longest common prefix of the binary strings $u$ and $v$. This is also the common ancestor of $u$ and $v$ with the largest depth.
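Under this binary-string encoding, computing a lowest common ancestor reduces to a longest-common-prefix computation, as this small sketch illustrates:

```python
def lca(u: str, v: str) -> str:
    """Lowest common ancestor of two tree nodes given as binary strings:
    the longest common prefix of u and v."""
    k = 0
    while k < min(len(u), len(v)) and u[k] == v[k]:
        k += 1
    return u[:k]

# Leaves '010' and '011' share the ancestor '01', at depth |'01'| = 2;
# nodes with no common prefix meet at the root (the empty string).
assert lca("010", "011") == "01"
assert len(lca("010", "011")) == 2
assert lca("00", "11") == ""
```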
Here, we denote by $\mathcal{L}_{\mathcal{T}}$ the leaves of the tree $\mathcal{T}$. We will assign each node of the graph $G$ to one leaf of $\mathcal{T}$ and denote by $C_a$ the set of nodes assigned to leaf $a$. This forms the primitive communities $\mathcal{C}=(C_a)_{a\in\mathcal{L}_{\mathcal{T}}}$. Any internal node $u$ of the tree is associated with a super-community $C_u=\cup_{a\in\mathcal{L}_{\mathcal{T}[u]}}C_a$, where $\mathcal{L}_{\mathcal{T}[u]}$ denotes the leaves of the sub-tree of $\mathcal{T}$ rooted at $u$. In particular, we have $C_{\emptyset}=V$ and $C_v\subset C_u$ if $v$ is a child of $u$.
We suppose that the probability $p_{ab}$ of having a link between two nodes belonging to the primitive communities $C_a$ and $C_b$ depends only on the lowest common ancestor of $a$ and $b$. Hence, we denote the probability $p_{ab}$ by $p(\mathrm{lca}(a,b))$. In an assortative setting, we have $p(u)<p(v)$ if $v$ is a child of $u$.
Definition 1.
Let $N$ be a positive integer, $\mathcal{T}$ a rooted binary tree with $|\mathcal{L}_{\mathcal{T}}|=K$ leaves, and $\pi=(\pi_a)_{a\in\mathcal{L}_{\mathcal{T}}}$ a probability vector. Let $p\colon\mathcal{T}\to[0,1]$ be an increasing function, meaning that for any $u,v\in\mathcal{T}$, if $v$ is a child of $u$, then $p(u)<p(v)$. A hierarchical stochastic block model (HSBM) is a graph $G=(V,E)$ such that $V=[N]$ and
1. each node $i\in[N]$ is independently assigned to a community $C_a$, where $a$ is sampled from $[K]$ according to $\pi$;
2. two nodes $i\in C_a$ and $j\in C_b$ are connected with probability $p(\mathrm{lca}(a,b))$.
An important particular case of the HSBM is the binary tree SBM (BTSBM), in which the tree $\mathcal{T}$ is full and balanced, and the probability of a link between two nodes in clusters $C_a$ and $C_b$ depends only on the depth of $\mathrm{lca}(a,b)$, i.e., $p(\mathrm{lca}(a,b))=p_{|\mathrm{lca}(a,b)|}$ for all $a,b\in\mathcal{L}_{\mathcal{T}}$. In particular, the number of communities of a BTSBM is $K=2^{d_{\mathcal{T}}}$, and assortativity implies that $p_0<p_1<\cdots<p_{d_{\mathcal{T}}}$. We illustrate an HSBM and a BTSBM in Figure 1. Observe that the BTSBM defines the same model as in li2022hierarchical, while the HSBM is a natural extension where the tree is still binary but not necessarily full and balanced.
(a)Hierarchical stochastic block model
(b)Binary tree SBM of depth 3
Figure 1: Examples of (a) an HSBM and (b) a BTSBM, with the binary string representation of each node. The link probabilities are $p(u)$ for the HSBM and $p_{|u|}$ for the BTSBM. The grey-colored rectangles represent the super-communities.
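A BTSBM can be sampled in a few lines. The sketch below (our naming, assuming uniform community proportions for simplicity) exploits the fact that, when leaves are encoded as `depth`-bit integers, the depth of the lowest common ancestor of labels $a$ and $b$ is `depth` minus the bit length of `a ^ b`:

```python
import numpy as np

def sample_btsbm(N, depth, p, rng=None):
    """Sketch of a BTSBM sampler with uniform community proportions.
    p[h] is the link probability when |lca(a, b)| = h, so p[depth] is the
    within-community probability and assortativity means p is increasing."""
    rng = np.random.default_rng(rng)
    K = 2 ** depth
    # A label's binary expansion over `depth` bits is the leaf's binary string.
    labels = rng.integers(0, K, size=N)
    A = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(i + 1, N):
            x = int(labels[i] ^ labels[j])
            # Common-prefix length of the two bit strings.
            d_lca = depth if x == 0 else depth - x.bit_length()
            if rng.random() < p[d_lca]:
                A[i, j] = A[j, i] = 1
    return A, labels
```

With the degenerate probabilities `p = [0.0, 0.0, 1.0]` (depth 2), the sampler produces exactly the disjoint union of the four community cliques, which makes the block structure easy to verify.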
In our results, we use the following assumptions.
Assumption 1 (Fixed hierarchy).
$\mathcal{T}$ is a rooted binary tree with $K$ leaves and is independent of $N$. The probability vector $\pi\in(0,1)^K$ is also independent of $N$ and satisfies $\min_{a\in[K]}\pi_a>0$.
Assumption 2 (Asymptotic scaling).
The edge connection probabilities $p$ can be written as $p(t)=a(t)\delta_N$, where $(a(t))_{t\in\mathcal{T}}$ are positive constants (independent of $N$).
Under Assumption 2, the average degree is $\Theta(N\delta_N)$, and we call $\delta_N$ the sparsity factor.
3.2 Tree Recovery with Growing Average Degree
In this section, we study the recovery of the hierarchical tree $\mathcal{T}$ and the bottom communities $\mathcal{C}=\left(C_{\ell_1},\cdots,C_{\ell_K}\right)$ associated with the $K$ leaves of $\mathcal{T}$. For an estimator $\widehat{\mathcal{C}}=\left(\widehat{C}_1,\cdots,\widehat{C}_K\right)$ of $\mathcal{C}$ verifying $|\mathcal{C}|=|\widehat{\mathcal{C}}|=K$, the number of mis-clustered nodes is
$$\mathrm{loss}\left(\mathcal{C},\widehat{\mathcal{C}}\right) \;=\; \min_{\tau\in\mathcal{S}_{[K]}}\sum_{k=1}^{K}\left|C_{\ell_k}\,\Delta\,\widehat{C}_{\tau(k)}\right|, \tag{3.1}$$
where $B\,\Delta\,C=\left(B\cup C\right)\setminus\left(B\cap C\right)$ denotes the symmetric difference between two sets $B,C\subset V$. The minimum is taken over the symmetric group $\mathcal{S}_{[K]}$ of all permutations of $[K]$, because we recover the bottom communities only up to a global permutation of the community labels. We study sequences of networks indexed by the number of nodes $N$, for which the interaction probabilities may depend on $N$. An estimator $\widehat{\mathcal{C}}$ of $\mathcal{C}$ achieves exact recovery if $\mathrm{loss}\left(\mathcal{C},\widehat{\mathcal{C}}\right)\xrightarrow{\mathrm{a.s.}}0$, almost exact recovery if $N^{-1}\,\mathrm{loss}\left(\mathcal{C},\widehat{\mathcal{C}}\right)\xrightarrow{p}0$, and weak recovery if $\mathbb{P}\left(N^{-1}\,\mathrm{loss}\left(\mathcal{C},\widehat{\mathcal{C}}\right)<\|\pi\|_2^2-\varepsilon\right)\to 0$ for all $\varepsilon>0$.²

² The interpretation is as follows. An exact estimator makes no mistakes, while an almost exact estimator makes $o(N)$ mistakes. A weak estimator makes $O(N)$ mistakes but still outperforms naive random guessing, which ignores the graph structure and classifies a node $i$ into community $a$ with probability $\pi_a$, independently of the other nodes.
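For small $K$, the loss (3.1) can be computed by brute force over the $K!$ label permutations; a minimal sketch (the function name is ours):

```python
from itertools import permutations

def loss(C, C_hat):
    """Number of mis-clustered nodes, as in (3.1): minimize the total
    symmetric difference over all K! relabelings (fine for small K)."""
    C = [set(c) for c in C]
    C_hat = [set(c) for c in C_hat]
    K = len(C)
    return min(sum(len(C[k] ^ C_hat[tau[k]]) for k in range(K))
               for tau in permutations(range(K)))

# Relabeling the clusters costs nothing:
assert loss([{0, 1}, {2, 3}], [{2, 3}, {0, 1}]) == 0
# Moving node 3 contributes twice to the symmetric differences
# (missing from its true cluster, extra in the other one):
assert loss([{0, 1}, {2, 3}], [{0, 1, 3}, {2}]) == 2
```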
The number of edges between two communities $C_a$ and $C_b$ is binomially distributed with mean $|C_a|\cdot|C_b|\cdot p_{ab}$. Therefore, if $p_{ab}=\omega(N^{-2})$, this binomial random variable concentrates around its mean and we have $\rho(C_a,C_b)=(1+o(1))p_{ab}$. This suggests that, once the primitive communities are almost exactly recovered, the average-linkage procedure successfully recovers the tree from its leaves if the edge probability between different communities is $\omega(N^{-2})$. The proof is more involved because (i) we compute $\rho(\widehat{C}_a,\widehat{C}_b)$ and not $\rho(C_a,C_b)$, and (ii) the estimator $\widehat{\mathcal{C}}$ is correlated with the graph structure. In particular, to establish that $\rho(\widehat{C}_a,\widehat{C}_b)=(1+o(1))p_{ab}$, we impose $p_{ab}=\omega(N^{-1})$.
Theorem 1.
Consider an assortative HSBM. Suppose that Assumptions 1 and 2 hold, with $N\delta_N=\omega(1)$. Let $\widehat{\mathcal{C}}$ be an estimator of $\mathcal{C}$, possibly correlated with the graph edges and such that $|\widehat{\mathcal{C}}|=|\mathcal{C}|$. If $\widehat{\mathcal{C}}$ is almost exact, then Algorithm 1 recovers $\mathcal{T}$ (starting from $\widehat{\mathcal{C}}$).
We prove Theorem 1 in Appendix A.1. The assumption of the existence of an almost exact estimator $\widehat{\mathcal{C}}$ of $\mathcal{C}$ is not limiting here. Indeed, under the assumptions of Theorem 1, we can obtain such an estimator by the spectral algorithm of yun2016optimal or by the agnostic-degree-profiling algorithm of abbe2015recovering. Because the average degree is of order $\Theta(N\delta_N)$, Theorem 1 ensures that Algorithm 1 recovers the hierarchy when the average degree grows unbounded.³

³ When $N\delta_N=\omega(1)$, the graph may be disconnected; however, only $o(N)$ nodes are excluded from the largest connected component, making almost exact recovery possible (but not exact recovery).
3.3 Tree Recovery with Bounded Average Degree
For simplicity, throughout this section, we assume that $\mathcal{T}$ is a full and balanced tree and that the edge probabilities satisfy $p(t)=a(t)\delta_N$, where $(a(t))_{t\in\mathcal{T}}$ are positive constants (independent of $N$), with $N\delta_N=\Theta(1)$. This bounded average degree regime is more challenging, as no algorithm can achieve almost exact recovery.⁴ However, when $N\delta_N\geq C$, where $C$ is a quantity depending only on $a(t)$ but independent of $N$, some algorithms achieve weak recovery abbe2017community. This leads to the following question: can average-linkage recover the tree when starting from a weak estimator $\widehat{\mathcal{C}}$ of $\mathcal{C}$ instead of an almost exact estimator?

⁴ When $\delta_N=\Theta(1/N)$ and the average degree exceeds 1, there exists a single giant component containing $\Theta(N)$ vertices, while all other components are of size $o(N)$. Importantly, this giant component includes vertices from all $K$ communities; thus, there is no scenario in which two communities are entirely disconnected. Therefore, even in this extremely sparse regime, it remains possible to infer the community structure and estimate inter-community edge densities.
To answer this question, we must handle the fact that a weak estimator misclassifies $\Theta(N)$ nodes, as opposed to $o(N)$ for an almost exact estimator. Recall from the discussion before Theorem 1 that the correlation between $\widehat{\mathcal{C}}$ and the graph $G$ makes it challenging to study the concentration of $\rho(\widehat{C}_a,\widehat{C}_b)$. As a result, we can no longer guarantee that $\rho(\widehat{C}_a,\widehat{C}_b)=(1+o(1))p_{ab}$ when $\widehat{\mathcal{C}}$ is only a weak estimator. To mitigate this issue, we split $G$ into two graphs $G^1$ and $G^2$ and proceed in two steps. We first apply the flat-clustering algorithm on $G^1$ to obtain $\widehat{\mathcal{C}}^1$, and next estimate the edge density by
$$\rho_2\left(\widehat{C}_a^1,\widehat{C}_b^1\right) \;=\; \frac{\sum_{i\in\widehat{C}_a^1,\,j\in\widehat{C}_b^1}A_{ij}^2}{\left|\widehat{C}_a^1\right|\cdot\left|\widehat{C}_b^1\right|}, \tag{3.2}$$
where $A^2$ is the adjacency matrix of $G^2$. We call this two-step technique graph-splitting.
Definition 2(Graph-splitting).
The graph-splitting of a graph $G=(V,E)$ with split-probability $\gamma$ generates two random graphs $G_1=(V,E_1)$ and $G_2=(V,E_2)$ with the same node set as $G$. The graph $G_1$ is formed by independently sampling each edge of $G$ with probability $\gamma$. The graph $G_2$ consists of the remaining edges that were not included in $G_1$, i.e., $E_2 = E \setminus E_1$.
When $G$ is an HSBM with edge connection probabilities $p_{ab}$, the subgraphs $G_1$ and $G_2$ are also HSBMs with the same community structure as $G$, but with respective edge connection probabilities $\gamma p_{ab}$ and $(1-\gamma) p_{ab}$. As a result, graph-splitting provides an estimator $\widehat{\mathcal{C}}^1$ that is independent of $G_2$, which solves the issue of correlation between $\widehat{\mathcal{C}}$ and $G$. However, this estimator still misclassifies $\Theta(N)$ nodes, and we make the following simplifying assumption regarding the misclassified nodes.
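Definition 2 and the density estimator (3.2) can be sketched in a few lines. The following is a minimal illustration of ours (dense-matrix representation and variable names are our choices, not the paper's): each edge is kept in $G_1$ with probability $\gamma$ and sent to $G_2$ otherwise, and $\rho_2$ is then computed on the held-out graph $G_2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_split(A, gamma, rng):
    """Graph-splitting (Definition 2): each edge of G goes to G1
    independently with probability gamma, and to G2 otherwise,
    so that E2 = E \\ E1."""
    n = A.shape[0]
    # one coin flip per undirected edge (upper triangle), then symmetrize
    coin = rng.random((n, n)) < gamma
    coin = np.triu(coin, 1)
    coin = coin + coin.T
    A1 = A * coin
    A2 = A - A1
    return A1, A2

def rho2(A2, Ca, Cb):
    """Inter-block edge density (3.2), computed on the held-out graph G2."""
    return A2[np.ix_(Ca, Cb)].sum() / (len(Ca) * len(Cb))

# small symmetric test graph
n = 40
A = (rng.random((n, n)) < 0.2).astype(int)
A = np.triu(A, 1)
A = A + A.T
A1, A2 = graph_split(A, gamma=0.7, rng=rng)
density = rho2(A2, list(range(20)), list(range(20, 40)))
```

Note that the two subgraphs form an exact partition of the edge set: every edge of $G$ lands in exactly one of $G_1$, $G_2$, and both subgraphs stay symmetric.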
Assumption 3.
Assume that there exists a weakly consistent estimator $\widehat{\mathcal{C}}^1 = (\widehat{C}_1^1, \cdots, \widehat{C}_K^1)$ of $\mathcal{C} = (C_1, \cdots, C_K)$, and let $O_{ab} = C_a \cap \widehat{C}_b^1$ be the set of nodes belonging to cluster $C_a$ but assigned to cluster $\widehat{C}_b^1$. We suppose that
O ab N/K=(1+o(1))ζ(|lca(a,b)|),\frac{O_{ab}}{N/K}\ =\ (1+o(1)),\zeta(|\mathrm{lca}(a,b)|),
where $\zeta$ is a non-decreasing function of the depth satisfying $\zeta(d-1) + \zeta(d-2) < 2\zeta(d)$.
Because all communities of a BTSBM have expected size $N/K$, the quantity $\frac{|O_{ab}|}{N/K}$ represents the fraction of vertices of cluster $a$ that are assigned to cluster $b$. Hence, the assumption that $\zeta$ is non-decreasing implies that a node is more likely to be assigned to a cluster close to its true cluster than to one further away in the tree. Because $a = b$ if and only if $|\mathrm{lca}(a,b)| = d$, the quantity $\zeta(d)$ is the probability that a node is correctly clustered. In contrast, $\zeta(h)$ for $0 \le h \le d-1$ is the probability that a node from cluster $a$ is misclustered into a cluster $b$ whose least common ancestor with $a$ is at depth $h$. The assumption $\zeta(d-1) + \zeta(d-2) < 2\zeta(d)$ ensures that the proportion of misclustered nodes is not too large, and it holds automatically whenever $\zeta$ is strictly increasing. The following theorem establishes that Algorithm 1 successfully recovers the hierarchy, even when starting from an estimator that makes $\Theta(N)$ mistakes.
Theorem 2.
Let $G$ be a BTSBM. Suppose that Assumptions 1 and 2 hold and $N\delta_N = \Theta(1)$. Let $G_1$ and $G_2$ be the two graphs obtained from $G$ by graph-splitting with any $\gamma$ satisfying $\gamma = 1 - o(1)$ and $1-\gamma = \omega(1/N)$. Let $\widehat{\mathcal{C}}^1$ be a clustering obtained from $G_1$ that satisfies Assumption 3. Then Algorithm 1 correctly recovers $\mathcal{T}$ (using the edge density defined in Equation (3.2)).
The assumption $1-\gamma = \omega(1/N)$ ensures that the edge density of $G_2$ is $\omega(N^{-2})$, and hence that $G_2$ contains enough edges for $\rho_2(\widehat{C}_a^1, \widehat{C}_b^1)$, defined in Equation (3.2), to concentrate around its expectation. The condition $\gamma = 1 - o(1)$ ensures that if weak recovery is possible from $G$, then it remains possible from $G_1$: in other words, graph-splitting does not result in a loss of information.
Example 1.
Denote by $\eta \ge 0$ the proportion of misclustered nodes and suppose that $\zeta(0) = \cdots = \zeta(d-1)$. This imposes $\zeta(d) = 1-\eta$ and $\zeta(h) = \eta/(K-1)$ for $0 \le h \le d-1$. The condition $\zeta(d-1) + \zeta(d-2) < 2\zeta(d)$ in Assumption 3 is then equivalent to $\eta < 1 - 1/K$.
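The equivalence in Example 1 is easy to verify numerically. Below is a small check of ours (the function names are illustrative): we sweep $\eta$ around the claimed threshold $1 - 1/K$ and confirm that the condition of Assumption 3 flips exactly there.

```python
def zeta(h, eta, K, d):
    """zeta from Example 1: correct with prob. 1-eta, otherwise uniform
    over the K-1 wrong clusters."""
    return 1 - eta if h == d else eta / (K - 1)

def assumption3_holds(eta, K, d):
    """Condition zeta(d-1) + zeta(d-2) < 2*zeta(d) of Assumption 3."""
    return zeta(d - 1, eta, K, d) + zeta(d - 2, eta, K, d) < 2 * zeta(d, eta, K, d)

K, d = 8, 3  # threshold 1 - 1/K = 0.875
for eta in [0.0, 0.5, 0.874, 0.876, 0.9]:
    assert assumption3_holds(eta, K, d) == (eta < 1 - 1 / K)
```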
4 Exact Recovery at Intermediate Depths
We now focus on exact recovery of communities at intermediate depths of the hierarchy and establish tight information-theoretic conditions for the feasibility of exact recovery at a specific depth. Intuitively, recovering super-communities at smaller depths should be easier than recovery at larger depths. We therefore expect scenarios where exact recovery of the primitive communities is unattainable, but exact recovery of super-communities at intermediate depths remains achievable. To provide context, we first recapitulate key findings on exact recovery in non-hierarchical stochastic block models (SBMs) in Section 4.1. We then present our main results in Section 4.2, which give the precise conditions for exact recovery at intermediate depths. Although the results of this section do not specifically involve hierarchical algorithms, we discuss in Section 5.2 the implications of exact recovery at intermediate depths for the theoretical guarantees of top-down algorithms.
4.1 Chernoff-Hellinger Divergence
The hardness of separating nodes that belong to a primitive community $a \in \mathcal{L}_{\mathcal{T}}$ from nodes in community $b \in \mathcal{L}_{\mathcal{T}}$ is quantified by the Chernoff-Hellinger divergence,[^5] denoted by $\mathrm{CH}(a,b) = \mathrm{CH}(a,b,\pi,p,\mathcal{T})$ and defined by

$$\mathrm{CH}(a,b) \;=\; \sup_{t\in(0,1)} (1-t) \sum_{c\in\mathcal{L}_{\mathcal{T}}} \pi_c \operatorname{D}_t\!\left(\operatorname{Ber}(p_{ac}) \,\big\|\, \operatorname{Ber}(p_{bc})\right), \tag{4.1}$$

where $\operatorname{D}_t$ denotes the Rényi divergence of order $t$. The key quantity assessing the possibility or impossibility of exact recovery in an HSBM is the minimal Chernoff-Hellinger divergence over all pairs of clusters. We denote it by $I = I(\pi,p,\mathcal{T})$, defined by

$$I \;=\; \min_{\substack{a,b\in[K] \\ a \neq b}} \mathrm{CH}(a,b). \tag{4.2}$$

[^5]: abbe2015community originally defined the Chernoff-Hellinger divergence as $\mathrm{CH}_{AS}(a,b) = \sup_{t\in(0,1)} (1-t) \sum_{c\in\mathcal{L}_{\mathcal{T}}} \pi_c \left( t p_{ac} + (1-t) p_{bc} - p_{ac}^t p_{bc}^{1-t} \right)$. When $p_{ab} = o(1)$, we have $(1-t)\operatorname{D}_t\!\left(\operatorname{Ber}(p_{ac}) \,\|\, \operatorname{Ber}(p_{bc})\right) = (1+o(1))\left(t p_{ac} + (1-t) p_{bc} - p_{ac}^t p_{bc}^{1-t}\right)$. Hence, our definition of $\mathrm{CH}(a,b)$ in (4.1) coincides with the original definition $\mathrm{CH}_{AS}(a,b)$ up to second-order terms.
The condition for achieving exact recovery of the communities displays a phase transition. More precisely, when $p_{ab} = \Theta(\log N / N)$, exact recovery of the primitive communities is possible if $\lim \frac{NI}{\log N} > 1$ and impossible if $\lim \frac{NI}{\log N} < 1$ (abbe2015community; abbe2015recovering).
Example 2.
Consider a BTSBM with $K = 2^d$ communities and $\pi_a = 1/K$ for all $a \in [K]$. Suppose that $p_t = a_t \log N / N$ for all $t \in [d]$, with $a_t$ positive constants. Simple computations (see for example (lei2020unified, Lemma 6.6)) yield

$$I \;=\; \frac{1}{K} \operatorname{D}_{1/2}\!\left(\operatorname{Ber}(p_d), \operatorname{Ber}(p_{d-1})\right) \;=\; \frac{1+o(1)}{K} \left( \sqrt{a_d} - \sqrt{a_{d-1}} \right)^2 \frac{\log N}{N}.$$

Exact recovery of $\mathcal{C}$ is hence possible if $\left( \sqrt{a_d} - \sqrt{a_{d-1}} \right)^2 > K$. This condition involves only $a_d$ and $a_{d-1}$, but not $a_{d-2}, a_{d-3}, \cdots, a_0$.
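The closed form of Example 2 can be checked numerically against the exact Rényi divergence. The sketch below (our own sanity check, with illustrative constants $a_d = 25$, $a_{d-1} = 9$) verifies that $K N I / \log N$ approaches $(\sqrt{a_d} - \sqrt{a_{d-1}})^2$ for large $N$.

```python
import math

def renyi_half_bernoulli(p, q):
    """D_{1/2}(Ber(p), Ber(q)) = -2 log( sqrt(pq) + sqrt((1-p)(1-q)) )."""
    return -2.0 * math.log(math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

# Example 2 with K = 8: only a_d and a_{d-1} enter I
N = 10**8
K, a_d, a_d1 = 8, 25.0, 9.0
p_d = a_d * math.log(N) / N
p_d1 = a_d1 * math.log(N) / N
I = renyi_half_bernoulli(p_d, p_d1) / K

# closed form predicts K*N*I/log N -> (sqrt(a_d) - sqrt(a_{d-1}))^2 = 4
ratio = K * N * I / math.log(N)
assert abs(ratio - (math.sqrt(a_d) - math.sqrt(a_d1))**2) < 0.01
```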
4.2 Exact recovery at Intermediate Depths
Although exact recovery of the primitive communities is feasible when the quantity $I$ defined in Equation (4.2) exceeds the threshold $\log N / N$, this insight alone does not provide the requirements for exact recovery of the super-communities at a smaller depth. In this section, we answer this question by determining the information-theoretic threshold for exact recovery of the super-communities at a specific depth.
We say that a node $u$ of $\mathcal{T}$ is at depth $q$ if its distance from the root is $q$, or if $u$ is a leaf whose distance from the root is less than $q$. The second condition is only needed when the binary tree is not perfect, as we highlight in Example 3. We denote by $\mathcal{S}_q$ (resp., $\mathcal{S}_{\le q}$) the set of nodes at depth $q$ (resp., at depth less than or equal to $q$), i.e.,

$$\mathcal{S}_q \;=\; \Big\{ u \in \mathcal{T} \colon \big(|u| = q\big) \text{ or } \big(|u| < q \text{ and } u \text{ is a leaf}\big) \Big\} \quad\text{and}\quad \mathcal{S}_{\le q} \;=\; \bigcup_{r \le q} \mathcal{S}_r.$$
The set of super-communities at depth $q$ is hence

$$\mathrm{sc}(q, \mathcal{C}, \mathcal{T}) \;=\; \left\{ C_u \right\}_{u \in \mathcal{S}_q} \quad\text{with}\quad C_u \;=\; \bigcup_{a \in \mathcal{L}_{\mathcal{T}} \colon a_{1:q} = u} C_a. \tag{4.3}$$
Example 3.
For the tree of Figure 1(a) we have

$$\mathrm{sc}(q, \mathcal{C}, \mathcal{T}) \;=\; \begin{cases} \{C_0, C_1\} & \text{if } q = 1, \\ \{C_{00}, C_{01}, C_{10}, C_{11}\} & \text{if } q = 2, \\ \{C_{00}, C_{010}, C_{011}, C_{10}, C_{11}\} & \text{if } q = 3. \end{cases} \tag{4.4}$$
Because the tree of Figure 1(a) is not full and complete, the community $C_{10}$ is present both at depth 2 and at depth 3. In contrast, for the tree of Figure 1(b), we have

$$\mathrm{sc}(q, \mathcal{C}, \mathcal{T}) \;=\; \begin{cases} \{C_0, C_1\} & \text{if } q = 1, \\ \{C_{00}, C_{01}, C_{10}, C_{11}\} & \text{if } q = 2, \\ \{C_{000}, C_{001}, C_{010}, C_{011}, C_{100}, C_{101}, C_{110}, C_{111}\} & \text{if } q = 3. \end{cases}$$
The recovery of super-communities at depth $q$ can be affected by mistakes made at a depth $q' > q$. For example, consider the super-communities given in (4.4): the recovery at depth 2 of $C_{01}$ can be exact even if nodes belonging to $C_{010}$ are mistakenly classified in $C_{011}$, but not if nodes in $C_{010}$ are mistakenly classified in $C_{00}$. As a result, exact recovery of the super-communities at depth 2 may still be achievable even when the communities at depth 3 cannot be recovered, as long as the errors occur within a super-community. This can happen, for instance, if $C_{010}$ and $C_{011}$ are very hard to separate, whereas $C_{010}$, $C_{00}$, $C_{10}$ and $C_{11}$ are easy to separate. We recall from Section 4.1 that the hardness of distinguishing two communities is expressed in terms of the Chernoff-Hellinger divergence (4.1). The difficulty of separating primitive communities that do not belong to the same super-community at depth $q$ is therefore quantified by the minimum Chernoff-Hellinger divergence over all such pairs. This is the quantity $I_q = I(q, \pi, p, \mathcal{T})$ defined as
$$I_q \;=\; \min_{\substack{a \neq b \in \mathcal{L}_{\mathcal{T}} \\ \mathrm{lca}(a,b) \in \mathcal{S}_{\le q-1}}} \mathrm{CH}(a,b), \tag{4.5}$$
where the condition $\mathrm{lca}(a,b) \in \mathcal{S}_{\le q-1}$ ensures that the lowest common ancestor of $a$ and $b$ has depth at most $q-1$. When $q = d$, the minimum in Equation (4.5) is taken over all pairs of primitive communities, and the divergence $I_d$ equals the divergence $I$ defined in (4.2). Theorem 3 states the condition for recovering the communities at depth $q$.
Theorem 3.
Let $G$ be an HSBM and suppose Assumptions 1 and 2 hold. Let $q \in \{1, \cdots, d_{\mathcal{T}}\}$ and denote by $I_q$ the quantity defined in (4.5). The following holds:
- (i) exact recovery of the super-communities at depth $q$ is impossible if $\limsup \frac{N I_q}{\log N} < 1$;
- (ii) if $\liminf \frac{N I_q}{\log N} > 1$, then exact recovery of the super-communities at depth $q$ is possible.
Because $I_q$ is a minimum over all of $\mathcal{S}_{\le q-1}$, the quantity $I_q$ is non-increasing in $q$. This is the reason why exact recovery at a lower intermediate depth $q'$ is easier than recovery at a higher depth $q > q'$. We prove Theorem 3 in Appendix B.1. Although the quantity $I_q$ defined in (4.5) is generally hard to simplify, the following lemma provides a simple expression for BTSBMs.
Lemma 1.
For a BTSBM with $K = 2^d$ balanced communities ($\pi_a = 1/K$ for all $a \in \mathcal{L}_{\mathcal{T}}$), the minimum Chernoff-Hellinger divergence defined in Equation (4.5) is

$$I_q \;=\; \frac{1}{K} \left( \operatorname{D}_{1/2}\!\left(\operatorname{Ber}(p_{q-1}), \operatorname{Ber}(p_d)\right) + \sum_{k=1}^{d-q} 2^{k-1} \operatorname{D}_{1/2}\!\left(\operatorname{Ber}(p_{q-1}), \operatorname{Ber}(p_{d-k})\right) \right).$$
Lemma 1 shows that when $q = d$, we have $I_d = K^{-1} \operatorname{D}_{1/2}\!\left(\operatorname{Ber}(p_{d-1}), \operatorname{Ber}(p_d)\right)$, and hence we recover the divergence $I$ defined in (4.2) (see also Example 2).
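Lemma 1 can be checked by brute force for a small BTSBM: compute $\mathrm{CH}(a,b)$ from (4.1) for every admissible leaf pair (taking the supremum over $t$ on a grid) and compare the minimum with the closed form. The sketch below is our own verification code (the integer leaf encoding and the grid supremum are implementation choices); for a symmetric BTSBM the supremum in (4.1) is attained at $t = 1/2$, which the grid contains.

```python
import itertools
import math

def renyi_bernoulli(t, p, q):
    """Renyi divergence of order t between Ber(p) and Ber(q)."""
    return math.log(p**t * q**(1 - t) + (1 - p)**t * (1 - q)**(1 - t)) / (t - 1)

def ch(pa, pb, pi):
    """Chernoff-Hellinger divergence (4.1); sup over t taken on a grid."""
    return max(
        (1 - t) * sum(w * renyi_bernoulli(t, x, y) for w, x, y in zip(pi, pa, pb))
        for t in (i / 200 for i in range(1, 200))
    )

def iq_bruteforce(p_levels, d, q):
    """I_q (4.5): min CH over leaf pairs whose lca has depth <= q-1."""
    K = 2**d
    pi = [1.0 / K] * K
    def lca_depth(a, b):
        # leaves 0..K-1; lca depth = length of common binary prefix
        return d if a == b else d - (a ^ b).bit_length()
    def row(a):
        return [p_levels[lca_depth(a, c)] for c in range(K)]
    return min(ch(row(a), row(b), pi)
               for a, b in itertools.combinations(range(K), 2)
               if lca_depth(a, b) <= q - 1)

def iq_lemma1(p_levels, d, q):
    """Closed form of Lemma 1."""
    K = 2**d
    D = lambda p, r: renyi_bernoulli(0.5, p, r)
    return (D(p_levels[q - 1], p_levels[d])
            + sum(2**(k - 1) * D(p_levels[q - 1], p_levels[d - k])
                  for k in range(1, d - q + 1))) / K

d = 3
p_levels = [k * 1e-4 for k in (2, 4, 8, 20)]  # assortative: p_0 < ... < p_3
for q in (1, 2, 3):
    assert abs(iq_bruteforce(p_levels, d, q) - iq_lemma1(p_levels, d, q)) < 1e-9
```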
5 Discussion
5.1 Previous Work on Exact Recovery in HSBM
Early work by Dasgupta et al. (dasgupta2006spectral) establishes conditions for exact recovery by top-down algorithms in an HSBM. Nonetheless, recovery is ensured only in relatively dense regimes (specifically, average degree at least $\log^6 N$), and the algorithm requires an unspecified choice of hyper-parameters. Balakrishnan et al. (balakrishnan2011noise) show that a simple recursive spectral bi-partitioning algorithm recovers the hierarchical communities in a class of hierarchically structured weighted networks, where the weighted interactions are perturbed by sub-Gaussian noise. Nonetheless, the conditions again require a dense regime ((balakrishnan2011noise, Assumption 1) states that all weights are strictly positive, hence the average degree scales as $\Theta(N)$), and the proposed algorithm has no stopping criterion. Recent work by Li et al. (li2022hierarchical) demonstrates that the same recursive spectral bi-partitioning algorithm exactly recovers the communities when the average degree grows as $\Omega(\log^{2+\epsilon} N)$ for some $\epsilon > 0$ (a condition that can be relaxed to $\Omega(\log N)$ by a refined analysis, lei2020unified). The analysis of li2022hierarchical also allows for an unbounded number of communities and provides a consistent stopping criterion.
The linkage++ algorithm proposed by Cohen-Addad et al. (cohen2017hierarchical; cohen2019hierarchical) is a bottom-up algorithm that first estimates $K$ primitive communities by SVD and then successively merges them using a linkage procedure. As SVD requires the number of clusters $K$ as an input, the authors propose to run the algorithm for every $K = 2, \cdots, O(\log N)$ and to choose the $K$ that leads to the hierarchical tree with the smallest Dasgupta cost (dasgupta2016cost). Moreover, their analysis requires a dense setting in which the average degree grows as $\Omega(\sqrt{N \log N})$.
Finally, in a recent paper, Fang and Rohe introduce the T-Stochastic Graph model (fang2023t). This model captures hierarchical structures in networks by defining connection probabilities based on distances within an unrooted, potentially non-binary tree $T$. To reconstruct the latent hierarchy, the authors propose a bottom-up algorithm that combines spectral clustering with the Neighbor-Joining algorithm. Unlike average linkage, which greedily merges the closest clusters, Neighbor-Joining iteratively selects pairs of clusters that minimize a criterion incorporating both pairwise distances and average distances to all other nodes. Their key theoretical result, Theorem 4.3, establishes asymptotic recovery of the latent hierarchy when the average degree of the graph is $\Omega(\log^{11.1} n)$.
5.2 Top-Down HCD and Exact Recovery at Intermediate Depth
Exact recovery of communities at intermediate hierarchical depths is essential to the success of top-down algorithms, as current theoretical analyses (li2022hierarchical; lei2020unified) assume perfect bi-partitioning at each of the first $\ell$ recursive steps. This requires exact recovery of communities from depth 0 to depth $\ell$ in the hierarchy, which is only achievable above the information-theoretic threshold for exact recovery at depth $\ell$. Because exact recovery requires graph connectivity, the average degree must be $\Omega(\log N)$. In the following section, we show that the information-theoretic threshold is less stringent than the conditions of li2022hierarchical; lei2020unified. Although there may be room to improve those conditions, current proof techniques cannot guarantee hierarchical recovery when the average degree is $o(\log N)$.
This raises the following question: can we establish theoretical guarantees for top-down algorithms when the average degree grows slower than $\log N$? Current proof techniques are constrained by the requirement that all of the first $\ell$ recursive steps be error-free. This requirement ensures that the divisive algorithm recurses on HSBMs at each step. Indeed, consider an HSBM $G = (V,E)$ on which a bi-partitioning algorithm produces a partition $\widehat{\mathcal{C}} = (\widehat{C}_1, \widehat{C}_2)$ of $V$, yielding the subgraphs $\hat{G}_1 = G[\widehat{C}_1]$ and $\hat{G}_2 = G[\widehat{C}_2]$. When $\widehat{\mathcal{C}}$ exactly recovers the depth-1 communities, we have either $\widehat{C}_1 = C_1$ and $\widehat{C}_2 = C_2$, or $\widehat{C}_1 = C_2$ and $\widehat{C}_2 = C_1$, ensuring that $\hat{G}_1$ and $\hat{G}_2$ are HSBMs.
This property no longer holds when $\widehat{\mathcal{C}}$ contains errors. Misclustered nodes create dependencies between edges in $\hat{G}_1$ (resp., in $\hat{G}_2$), as their assignments are determined by the bi-partitioning algorithm applied to $G$, so that $\hat{G}_1$ and $\hat{G}_2$ might no longer be HSBMs. These correlations are complex to analyze, but we can bypass this difficulty and reframe the problem by viewing $\hat{G}_1$ and $\hat{G}_2$ as two perturbed HSBMs, where the perturbations result from connectivity modifications made by an adversary over a subset of nodes whose size is upper-bounded by the number of misclustering errors. This formulation aligns with recent work on clustering robustness under adversarial perturbations (cai2015robust; liu2022minimax; ding2023reaching), and could strengthen the theoretical guarantees for top-down algorithms. Furthermore, this adversarial framework explains why top-down algorithms based on spectral clustering, a method known to be sensitive to arbitrary perturbations, are suboptimal.
5.3 Previous Work on Exact Recovery at Intermediate Depth
Sufficient conditions for the exact recovery of super-communities in BTSBMs by recursive spectral bi-partitioning algorithms were established in li2022hierarchical, under the assumption that all connection probabilities share the same scaling factor $\delta_N$ (i.e., $p(t) = \Theta(\delta_N)$ for all $t \in \mathcal{T}$) with $\delta_N = \Omega(\log^{2+\epsilon} N / N)$. The result was later refined in lei2020unified to require only $\delta_N = \Omega(\log N / N)$, and in lei2021consistency to allow different scaling factors at different depths of the hierarchy. To compare these results with Theorem 3, consider a BTSBM with $p(t) = a_{|t|} \log N / N$, where the $a_{|t|}$ are constants independent of $N$. The average connection probability within a super-community of depth $q$ is then $\bar{a}_q \log N / N$, where $\bar{a}_q = \frac{1}{2^{d-q}} \left( a_d + \sum_{k=1}^{d-q} 2^{k-1} a_{d-k} \right)$. Then (lei2020unified, Theorem 6.5) states that the recursive spectral bi-partitioning algorithm exactly recovers all the super-communities up to depth $\ell$ if

$$\left( \sqrt{\bar{a}_q} - \sqrt{a_{q-1}} \right)^2 > 2^q \quad\text{for all } q \in [\ell]. \tag{5.1}$$

The condition $\left( \sqrt{\bar{a}_q} - \sqrt{a_{q-1}} \right)^2 > 2^q$ is the exact recovery threshold of a symmetric SBM with $2^q$ communities and intra-community (resp., inter-community) link probabilities $\bar{a}_q \log N / N$ (resp., $a_{q-1} \log N / N$). Although depth $q$ of the hierarchy is composed of $2^q$ super-communities with average intra-connection probability $\bar{a}_q \log N / N$ and inter-connection probability $a_{q-1} \log N / N$, Condition (5.1) does not match the information-theoretic threshold derived in Theorem 3. Indeed, a super-community at depth $q$ is not an Erdős-Rényi random graph with connection probability $\bar{a}_q \log N / N$, but is instead an SBM with $2^{d-q}$ primitive communities.
For all $q \in [d]$, let us define $J_q^{\mathrm{td}}$ and $J_q^{\mathrm{bu}}$ by

$$J_q^{\mathrm{td}} \;=\; \frac{1}{2^d} \left( \sqrt{a_d + \sum_{k=1}^{d-q} 2^{k-1} a_{d-k}} - \sqrt{2^{d-q}\, a_{q-1}} \right)^2, \tag{5.2}$$

$$J_q^{\mathrm{bu}} \;=\; \frac{1}{2^d} \left( \left( \sqrt{a_{q-1}} - \sqrt{a_d} \right)^2 + \sum_{k=1}^{d-q} 2^{k-1} \left( \sqrt{a_{q-1}} - \sqrt{a_{d-k}} \right)^2 \right). \tag{5.3}$$
Condition (5.1) for exact recovery of the super-communities up to depth $\ell$ is equivalent to

$$\min_{q \in [\ell]} J_q^{\mathrm{td}} > 1, \tag{5.4}$$

whereas the corresponding condition for a bottom-up algorithm, given by Theorem 3(ii) and Lemma 1, can be recast as

$$J_\ell^{\mathrm{bu}} > 1. \tag{5.5}$$
Let us compare (5.4) and (5.5). First, notice that for some choices of $a_0, \cdots, a_d$, the function $q \mapsto J_q^{\mathrm{td}}$ given in (5.2) is not monotonically non-increasing,[^6] which contradicts the intuition that recovery at larger depths is harder than recovery at smaller depths. Moreover, although $J_d^{\mathrm{bu}} = J_d^{\mathrm{td}}$, Lemma 5 (deferred to Appendix B.3) shows that $J_q^{\mathrm{bu}} > J_q^{\mathrm{td}}$ for all $q \in [d-1]$; hence Condition (5.4) is strictly more restrictive than Condition (5.5). It is not clear whether the results of lei2020unified can be further improved, as exact recovery by spectral bi-partitioning requires entry-wise concentration of the eigenvectors, which is established using sophisticated $\ell_{2\to\infty}$ perturbation bounds.

[^6]: Let $d = 3$. For $(a_0, a_1, a_2, a_3) = (2.2, 2.5, 3, 25)$ we have $(J_1^{\mathrm{td}}, J_2^{\mathrm{td}}, J_3^{\mathrm{td}}) = (0.96, 1.17, 1.33)$, but for $(a_0, a_1, a_2, a_3) = (3, 9, 15, 21)$ we have $(J_1^{\mathrm{td}}, J_2^{\mathrm{td}}, J_3^{\mathrm{td}}) = (1.89, 0.39, 0.06)$. Finally, for $(a_0, a_1, a_2, a_3) = (2.2, 2.4, 4, 22)$ we have $(J_1^{\mathrm{td}}, J_2^{\mathrm{td}}, J_3^{\mathrm{td}}) = (0.85, 1.02, 0.90)$. Thus, the quantities $J_1^{\mathrm{td}}, J_2^{\mathrm{td}}, J_3^{\mathrm{td}}$ can be increasing, decreasing, or interlacing.
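Definitions (5.2) and (5.3) are straightforward to evaluate; the following check of ours reproduces the first numerical example of footnote 6 and illustrates the comparison $J_q^{\mathrm{bu}} \ge J_q^{\mathrm{td}}$ with equality at $q = d$.

```python
import math

def j_td(a, q):
    """J_q^td from (5.2); a = [a_0, ..., a_d]."""
    d = len(a) - 1
    inner = a[d] + sum(2**(k - 1) * a[d - k] for k in range(1, d - q + 1))
    return (math.sqrt(inner) - math.sqrt(2**(d - q) * a[q - 1]))**2 / 2**d

def j_bu(a, q):
    """J_q^bu from (5.3)."""
    d = len(a) - 1
    s = (math.sqrt(a[q - 1]) - math.sqrt(a[d]))**2
    s += sum(2**(k - 1) * (math.sqrt(a[q - 1]) - math.sqrt(a[d - k]))**2
             for k in range(1, d - q + 1))
    return s / 2**d

a = [2.2, 2.5, 3, 25]  # first example of footnote 6, d = 3
assert [round(j_td(a, q), 2) for q in (1, 2, 3)] == [0.96, 1.17, 1.33]
# J_d^bu = J_d^td, and J_q^bu > J_q^td for q < d (Lemma 5)
assert math.isclose(j_bu(a, 3), j_td(a, 3))
assert all(j_bu(a, q) > j_td(a, q) for q in (1, 2))
```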
6 Numerical Results
In this section, we investigate the empirical performance of different HCD strategies on synthetic and real networks. More precisely, we compare Algorithm 1 (bottom-up) to the recursive bi-partitioning algorithm of li2022hierarchical (top-down). To obtain the flat clustering input for Algorithm 1, we use spectral clustering with the Bethe-Hessian matrix, as proposed in dall2021unified. This spectral algorithm does not require prior knowledge of the number of communities $K$ and has been shown to perform well on both synthetic and real datasets.[^7] Moreover, recent theoretical work has established that the Bethe-Hessian matrix can be used to recover both the number of communities and the communities themselves (stephan2024community). Our code is available on GitHub at https://github.com/daichikuroda/bottom_up_HCD.

[^7]: The code for this algorithm is available online at https://lorenzodallamico.github.io/codes/.
6.1 Synthetic Data Sets
We compare bottom-up and top-down approaches on synthetic data sets by computing the accuracy at different depths. We define the accuracy at depth $q$ as $1 - \mathrm{loss}\big(\mathrm{sc}(q,\mathcal{C},\mathcal{T}), \mathrm{sc}(q,\widehat{\mathcal{C}},\widehat{\mathcal{T}})\big)/N$, where $\mathrm{loss}$ is given by (3.1) and $\mathrm{sc}(q,\mathcal{C},\mathcal{T})$ (resp., $\mathrm{sc}(q,\widehat{\mathcal{C}},\widehat{\mathcal{T}})$) are the ground-truth (resp., predicted) super-communities defined in (4.3).
6.1.1 Binary Tree SBMs
We first generate synthetic BTSBMs of depth $d$, where the probability of an edge between two nodes whose lowest common ancestor has depth $k$ is $p_k = a_k \log N / N$. We compare the accuracy of top-down and bottom-up at each depth. We show in Figure 2 the results obtained on BTSBMs of depth 3, with 400 nodes in each primitive community (thus $N = 2^3 \times 400 = 3200$ nodes in total). We let $a_0 = 40$, $a_3 = 100$, and the values of $a_1$ and $a_2$ vary in the range $(a_0, a_3)$, with the condition $a_1 < a_2$. Solid lines in each panel show the exact recovery threshold of the given method at different depths. We observe a strong alignment between the theoretical guarantees and the results of the numerical simulations.
Figure 2: Performance of bottom-up and top-down algorithms on BTSBMs of depth 3, $N = 3200$ nodes, and interaction probabilities $p_k = a_k \log N / N$, where $a_0 = 40$ and $a_3 = 100$, as a function of $a_1$ and $a_2$. We vary $a_1 \le a_2$ from $a_0$ to $a_3$. The empirical performance of the algorithms is measured by the accuracy at each depth, given by the color scale (results are averaged over 10 realizations). Large circles represent exact recovery (i.e., perfect accuracy on each of the 10 runs), and small crosses represent non-exact recovery. The colored solid lines delimit the theoretical exact recovery thresholds for each algorithm at the various depths (given by Equations (5.4)-(5.5)); for a given depth $q$, these equations provide a single condition for bottom-up, but $q$ conditions for top-down. At depths 1 and 2, the regimes where exact recovery can be achieved are the areas above the solid line(s). At depth 3, the area lies below the threshold drawn by the red line (and above the blue and green lines for top-down; this area forms a small triangle).
6.1.2 Robustness of Linkage to Misclustering Errors
Finally, we evaluate the performance of Algorithm 1 when the number of misclassified nodes increases. We generate BTSBMs with full and balanced trees of depth 3 with 200 nodes in each community ($N = 1600$ in total), and interaction probabilities given by $p_k = 0.08\, \beta^{3-k}$, where $\beta$ is a parameter varying between 0 and 1. At the two extremes, $\beta = 0$ leads to a graph with $K$ communities disconnected from each other, while $\beta = 1$ leads to an Erdős-Rényi graph without community structure.
Figures 3(a) and 3(b) show two confusion matrices, i.e., $K$-by-$K$ matrices whose element $(i,j)$ is $|C_i \cap \widehat{C}_{\tau^*(j)}|$, for two different average degrees. We observe that mislabeled nodes are typically assigned to a community close to their true one. For instance, a misclassified node from community 0 may be incorrectly clustered into community 1 or 2, but it is highly unlikely to be placed in community 7. To reinforce this observation and further evaluate the validity of Assumption 3, we estimate the probability of misclassification at level $\ell \in \{0, \cdots, d\}$, corresponding to the quantity $\zeta(\ell)$ in Assumption 3, by
$$\hat{\zeta}(\ell) \;=\; \frac{1}{N} \sum_{k=1}^{K} \sum_{m=k}^{K} \mathbb{1}\big\{|\mathrm{lca}(C_k, C_m)| = \ell\big\} \cdot \left| C_k \cap \widehat{C}_{\tau^*(m)} \right|,$$
where $\tau^* = \operatorname{arg\,min}_{\tau \in \mathcal{S}_{[K]}} \sum_{k=1}^{K} \left| C_k \,\Delta\, \widehat{C}_{\tau(k)} \right|$ is the optimal permutation of cluster labels. Figure 3(c) shows that, for any given $\beta$, we have $\hat{\zeta}(0) < \hat{\zeta}(1) < \hat{\zeta}(2) < \hat{\zeta}(3)$, showing that $\hat{\zeta}$ is non-decreasing in $\ell$. This observation aligns with Assumption 3. Finally, Figure 3(d) shows the tree recovery success rates with and without graph-splitting. A tree recovery is declared successful when the following condition is satisfied:
$$\prod_{k=1}^{K} \prod_{m=k+1}^{K} \mathbb{1}\left\{ |\mathrm{lca}(C_k, C_m)| \;=\; |\mathrm{lca}(\widehat{C}_{\tau^*(k)}, \widehat{C}_{\tau^*(m)})| \right\} \;=\; 1.$$
The legend indicates the value of $\gamma$ used for graph-splitting, i.e., the proportion of edges allocated to estimating the bottom community labels via the flat clustering algorithm, while the remaining edges are used to construct the hierarchy with Algorithm 1. We observe that Algorithm 1 maintains a high tree recovery rate even when the estimated bottom clusters contain many misclustered nodes. When the expected degree is 5, Figure 3(c) shows that $\hat{\zeta}(d) = \hat{\zeta}(3) \approx 0.6$, meaning that approximately 40% of the nodes are misclustered. Nevertheless, Figure 3(d) indicates that Algorithm 1 achieves a tree recovery success rate of around 50%. This demonstrates the robustness of Algorithm 1 to a high proportion of misclustered nodes. Furthermore, while we introduced graph-splitting for the theoretical analysis, our empirical results show that it may not be needed in practice.
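The estimator $\hat{\zeta}$ above takes a confusion matrix as input and bins its entries by the lca depth of the corresponding pair of clusters. A minimal sketch of ours (assuming the columns are already aligned by the optimal permutation $\tau^*$, and encoding clusters as integers whose binary expansions are the leaf labels):

```python
import numpy as np

def zeta_hat(confusion, d):
    """Estimate zeta(l) as in the displayed formula: mass of the confusion
    matrix whose (true, assigned) cluster pair has an lca at depth l.

    confusion[k, m] = |C_k ∩ hat{C}_{tau*(m)}|; only the upper triangle
    (m >= k) is summed, as in the displayed double sum."""
    K = confusion.shape[0]
    N = confusion.sum()
    def lca_depth(k, m):
        return d if k == m else d - int(k ^ m).bit_length()
    z = np.zeros(d + 1)
    for k in range(K):
        for m in range(k, K):
            z[lca_depth(k, m)] += confusion[k, m]
    return z / N

# perfect clustering: all mass on the diagonal, so zeta_hat puts
# everything at depth d
perfect = np.eye(8) * 200  # K = 8 clusters of 200 nodes, N = 1600
z = zeta_hat(perfect, d=3)
assert z[3] == 1.0 and z[:3].sum() == 0.0
```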
(a)Average degree 5 5
(b)Average degree 10
(c)Types of errors
(d)Tree recovery rate
Figure 3: BTSBMs with depth 3, $N = 1600$, and $\beta = 0.3$. Figures 3(a) and 3(b) show two confusion matrices when the expected degree equals 5 and 10. Figure 3(c) shows the evolution of $\hat{\zeta}$ as a function of the expected degree. Figure 3(d) shows the tree recovery success rate with and without graph-splitting. Results of Figures 3(c) and 3(d) are averaged over 200 realizations.
6.2 Real Datasets
6.2.1 High-School Contact Dataset
We evaluate HCD algorithms on a dataset of face-to-face interactions among 327 high school students from Lycée Thiers in Marseilles, France Mastrandrea_Fournet_Barrat_2015 (http://www.sociopatterns.org/). The network consists of 9 class-based communities, with weighted edges representing proximity encounters over five days. These classes also correspond to four academic specializations: mathematics & physics (MP), physics & chemistry (PC), engineering (PSI), and biology (BIO). The MP and BIO groups contain three sub-classes each, while PC has two and PSI one. The hierarchical structure should reflect specialties at higher depths and individual classes at lower depths.
The results of both HCD algorithms are shown in Figure 4. The bottom-up algorithm consistently detects 31 communities, while the top-down algorithm predicts an average of 8.93 (8 communities in 7 runs, 9 in 93 runs). Both align well with the ground truth, with adjusted mutual information (AMI) scores of 0.938 for bottom-up (computed after merging the 31 bottom-up communities into 9 super-communities) and 0.945 for top-down. Notably, bottom-up recovers both the class and specialization structures while also detecting smaller subgroups, likely friend groups within a class, revealing a richer hierarchical structure than the ground truth itself.
Figure 4: Bottom-up and top-down algorithms on the high school data set. Nodes correspond to the students, colors to the true classes, and edges of the graph are in grey. The hierarchical tree is drawn in black, and its root is marked by a star symbol.
6.2.2 Power Grid
We next consider the power grid of continental Europe from the map of the Union for the Coordination of Transmission of Electricity (UCTE). We use the same data set previously used for HCD schaub2012markov. Figures 5 and 6 show the outputs of the bottom-up and top-down algorithms, respectively. The output of bottom-up correlates more strongly with geographical positions. Moreover, the dendrogram obtained by top-down shows a significant number of inversions, contrary to the one obtained by bottom-up.
(a) 8 communities.
(b) 2 communities.
(c) Dendrogram.
Figure 5: Bottom-up algorithm on the power-grid network.
(a) 8 communities.
(b) 2 communities.
(c) Dendrogram.
Figure 6: Recursive spectral bi-partitioning algorithm on the power grid network.
7 Conclusion
Top-down approaches need to make partition decisions for large communities, without exploiting side information about the internal structure of these communities. In a sparse regime, finding a bi-partition is as difficult as finding a flat community structure. As a consequence, some nodes are misclassified, and these misclassifications are increasingly locked in as the algorithm progresses down to smaller communities. In contrast, a bottom-up approach inherently exploits lower-level structure, if present. At each step, a bottom-up algorithm needs to classify only the communities at the subsequent higher level, rather than classifying all the nodes individually. This is an easier problem (even if some errors are carried up), because (a) the number of classification decisions is much smaller, and (b) the number of edges available for each decision is much larger (number of edges between two lower-depth communities versus number of incident edges on an individual node).
In this paper, we have quantified this fundamental advantage within a class of random graph models (HSBM). We have proven that the latent tree of an HSBM can be recovered under weaker conditions than in the literature (average degree scaling as $\omega(1)$). Moreover, we have established that super-communities at intermediate depths can be exactly recovered up to the information-theoretic threshold, thus improving upon the previously known conditions for top-down algorithms. Finally, we have shown that the theoretical advantage of bottom-up carries over to relevant scales and real-world data. On both synthetic and real data sets, bottom-up HCD achieves better performance than top-down HCD.
Acknowledgements
This work was supported in part by the Swiss National Science Foundation (SNSF) under grant IZBRZ2_186313.
Supplementary material
When does bottom-up beat top-down in hierarchical community detection?
Appendix A Proofs for Section 3
A.1 Proof of Theorem 1
Proof of Theorem 1.
Under the assumptions of this theorem, $\widehat{\mathcal{C}}$ is an almost exact estimator of the true clusters $\mathcal{C}$. Let $a\neq b\in\mathcal{L}_{\mathcal{T}}$, and recall the definition of the edge density $\rho(\cdot,\cdot)$ in (2.1). Lemma 2 implies that, with high probability, the edge density between the estimated clusters $\widehat{C}_a$ and $\widehat{C}_b$ is $\rho(\widehat{C}_a,\widehat{C}_b)=(1+o(1))p_{ab}$. Since the network is assortative and the hierarchy is non-flat, this implies that the first step of the linkage finds the two closest clusters, say $\widehat{C}_{a_1}$ and $\widehat{C}_{a_2}$. Then we see that for $b\in\mathcal{L}_{\mathcal{T}}\backslash\{a_1,a_2\}$ we have $\mathrm{lca}(a_1,b)=\mathrm{lca}(a_2,b)$ and hence $p_{a_1 b}=p_{a_2 b}$. Therefore, from (2.2),
$$\rho\left(\widehat{C}_{a_1\cup a_2},\widehat{C}_b\right) \;=\; \frac{\left|\widehat{C}_{a_1}\right|}{\left|\widehat{C}_{a_1}\right|+\left|\widehat{C}_{a_2}\right|}\rho\left(\widehat{C}_{a_1},\widehat{C}_b\right)+\frac{\left|\widehat{C}_{a_2}\right|}{\left|\widehat{C}_{a_1}\right|+\left|\widehat{C}_{a_2}\right|}\rho\left(\widehat{C}_{a_2},\widehat{C}_b\right) \;=\; (1+o(1))p_{a_1 b}.$$
Repeating this argument, it follows by induction that the average-linkage procedure correctly recovers the tree. ∎
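The inductive argument above mirrors what a direct implementation of average-linkage on edge densities does. Below is a minimal illustrative sketch (not the paper's code; the names `average_linkage`, `A`, `labels` are ours), assuming a NumPy adjacency matrix and estimated bottom-community labels. Recomputing the block density over the merged index set automatically realizes the weighted-average update (2.2) used in the proof.

```python
import numpy as np

def average_linkage(A, labels):
    """Agglomerative tree from edge densities (illustrative sketch).

    A: symmetric 0/1 adjacency matrix; labels: estimated bottom label per node.
    Returns the merge sequence; each entry is a pair of cluster ids,
    an id being the tuple of bottom labels it contains.
    """
    clusters = {(c,): np.flatnonzero(labels == c) for c in np.unique(labels)}
    merges = []
    while len(clusters) > 1:
        keys = list(clusters)
        # edge density rho(C_a, C_b) = w(C_a, C_b) / (|C_a| |C_b|), cf. (2.1)
        dens = {(a, b): A[np.ix_(clusters[a], clusters[b])].mean()
                for i, a in enumerate(keys) for b in keys[i + 1:]}
        a, b = max(dens, key=dens.get)       # merge the two closest clusters
        clusters[a + b] = np.concatenate([clusters.pop(a), clusters.pop(b)])
        merges.append((a, b))
    return merges
```

On a toy graph with four bottom communities paired two-by-two (dense within communities, medium density inside pairs, no edges across pairs), the merge sequence reproduces the planted binary tree.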
Let us now state and prove the following auxiliary lemma.
Lemma 2.
Consider an HSBM with the same assumptions as in Theorem 1, and let $\widehat{\mathcal{C}}$ be an almost exact estimator of $\mathcal{C}$ ($\widehat{\mathcal{C}}$ is possibly correlated with the graph edges). Then, for any $a,b\in\mathcal{L}_{\mathcal{T}}$ we have $\rho\left(\widehat{C}_a,\widehat{C}_b\right)=(1+o(1))p_{ab}$.
Proof.
We denote by $Z\in\{0,1\}^{N\times K}$ (resp., by $\hat{Z}$) the one-hot representation of the true communities $\mathcal{C}$ (resp., of the predicted communities $\widehat{\mathcal{C}}$), that is, $Z_{ic}=\mathbb{1}(i\in C_c)$ and $\hat{Z}_{ic}=\mathbb{1}(i\in\widehat{C}_c)$ for all $i\in[N]$ and $c\in\mathcal{L}_{\mathcal{T}}$, where $\mathbb{1}(\cdot)$ denotes the indicator function. Let $a\neq b\in\mathcal{L}_{\mathcal{T}}$. We shorten the edge density $\rho\left(\widehat{C}_a,\widehat{C}_b\right)$ to $\hat{p}_{ab}$. From the definition of the edge density in (2.1), we have
$$\hat{p}_{ab} \;=\; \frac{w\left(\widehat{C}_a,\widehat{C}_b\right)}{\left|\widehat{C}_a\right|\cdot\left|\widehat{C}_b\right|} \;=\; \frac{\sum_{i\in\widehat{C}_a,\,j\in\widehat{C}_b}A_{ij}}{\left|\widehat{C}_a\right|\cdot\left|\widehat{C}_b\right|} \;=\; \frac{\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb}A_{ij}}{\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb}}.$$
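The matrix form of $\hat{p}_{ab}$ can be sanity-checked numerically. The snippet below is illustrative (randomly generated data, variable names ours): it compares the set-based definition (2.1) with the one-hot expression.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 30, 3
labels = rng.permutation(np.arange(N) % K)   # balanced labels, no empty cluster
Zhat = np.eye(K, dtype=int)[labels]          # one-hot: Zhat[i, c] = 1(i in C_c)
M = rng.integers(0, 2, size=(N, N))
A = np.triu(M, 1) + np.triu(M, 1).T          # symmetric 0/1 adjacency

a, b = 0, 1
Ca, Cb = np.flatnonzero(labels == a), np.flatnonzero(labels == b)
# definition (2.1): edges between the two clusters over |C_a| * |C_b|
rho = A[np.ix_(Ca, Cb)].sum() / (len(Ca) * len(Cb))
# matrix form used in the proof
rho_onehot = (Zhat[:, a] @ A @ Zhat[:, b]) / (Zhat[:, a].sum() * Zhat[:, b].sum())
assert np.isclose(rho, rho_onehot)
```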
Therefore, a variance-bias decomposition leads to
$$\left|\hat{p}_{ab}-p_{ab}\right| \;\leq\; \underbrace{\left|\frac{\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb}\left(A_{ij}-\mathbb{E}A_{ij}\right)}{\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb}}\right|}_{E_1}+\underbrace{\left|\frac{\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb}\mathbb{E}A_{ij}}{\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb}}-p_{ab}\right|}_{E_2}.\tag{A.1}$$
Moreover, because $\widehat{\mathcal{C}}$ is almost exact, we have $\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb}=(1+o(1))|C_a|\cdot|C_b|$. Thus, using the concentration of multinomial random variables,
$$\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb} \;=\; (1+o(1))\pi_a\pi_b N^2.$$
Let us bound the two terms $E_1$ and $E_2$ on the right-hand side of (A.1) separately. To handle the first term, we will use [guedon2016community, Lemma 4.1]. Let $\bar{p}=\frac{2}{N(N-1)}\sum_{i<j}\operatorname{Var}(A_{ij})$. Because $\operatorname{Var}(A_{ij})=\Theta(\delta_N)$, we have $\bar{p}=\Theta(\delta_N)$. Moreover, because $\delta_N=\omega(N^{-1})$, we have $\bar{p}\geq 9N^{-1}$ for $N$ large enough. Therefore, [guedon2016community, Lemma 4.1] ensures that with probability at least $1-e^3 5^{-N}$ we have
$$\sup_{s,t\in\{-1,1\}^N}\left|\sum_{i,j}\left(A_{ij}-\mathbb{E}A_{ij}\right)s_i t_j\right| \;\leq\; 3N\sqrt{N\bar{p}}.$$
Applying Grothendieck's inequality [guedon2016community, Theorem 3.1], we obtain
$$\sup_{\substack{X_1,\cdots,X_N,\ Y_1,\cdots,Y_N\\ \forall i\in[N]\colon \|X_i\|_2\leq 1,\ \forall j\in[N]\colon \|Y_j\|_2\leq 1}}\left|\sum_{i,j}\left(A_{ij}-\mathbb{E}A_{ij}\right)X_i^T Y_j\right| \;\leq\; 3cN\sqrt{N\bar{p}},\tag{A.2}$$
where $c$ is Grothendieck's constant, which verifies $0<c<2$. Using (A.2) with $X_i=\hat{Z}_{ia}$ and $Y_j=\hat{Z}_{jb}$ leads to
$$E_1 \;=\; O\left(\sqrt{\frac{\bar{p}}{N}}\right) \;=\; o(\delta_N),$$
because $\bar{p}=\Theta(\delta_N)$ and $N^{-1}=o(\delta_N)$.
To handle the second term $E_2$ on the right-hand side of (A.1), we first notice that
$$E_2 \;=\; \frac{\left|\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb}\left(\mathbb{E}A_{ij}-p_{ab}\right)\right|}{\sum_{i,j}\hat{Z}_{ia}\hat{Z}_{jb}}.\tag{A.3}$$
Moreover, for all $i,j\in[N]$ we have
$$\mathbb{E}A_{ij}\;=\;\sum_{a',b'\in\mathcal{L}_{\mathcal{T}}}p_{a'b'}Z_{ia'}Z_{jb'},$$
because $\mathbb{E}A_{ij}=p_{a'b'}$ if $i\in C_{a'}$ and $j\in C_{b'}$. Thus,
$$\mathbb{E}A_{ij}-p_{ab} \;=\; \sum_{\substack{a',b'\in\mathcal{L}_{\mathcal{T}}\\ (a',b')\neq(a,b)}}\left(p_{a'b'}-p_{ab}\right)Z_{ia'}Z_{jb'}.$$
Let $c=\max_{a',b'}\frac{p_{a'b'}}{p_{ab}}$. Under our assumptions, $c=\Theta(1)$. We bound the numerator appearing in Equation (A.3) as follows:
$$\left|\sum_{i<j}\hat{Z}_{ia}\hat{Z}_{jb}\left(\mathbb{E}A_{ij}-p_{ab}\right)\right| \;\leq\; p_{ab}\left(c+1\right)\sum_{\substack{a',b'\in\mathcal{L}_{\mathcal{T}}\\ (a',b')\neq(a,b)}}\sum_{i<j}\hat{Z}_{ia}\hat{Z}_{jb}Z_{ia'}Z_{jb'} \;\leq\; 2cp_{ab}\sum_{\substack{a',b'\in\mathcal{L}_{\mathcal{T}}\\ (a',b')\neq(a,b)}}\left|\widehat{C}_a\cap C_{a'}\right|\cdot\left|\widehat{C}_b\cap C_{b'}\right|,$$
since $c\geq 1$. Let us denote $V\backslash C_a=\cup_{a'\neq a}C_{a'}$ by $C_a^c$. We have
$$\sum_{\substack{a',b'\in\mathcal{L}_{\mathcal{T}}\\ (a',b')\neq(a,b)}}\left|\widehat{C}_a\cap C_{a'}\right|\cdot\left|\widehat{C}_b\cap C_{b'}\right| \;=\; \sum_{a'\neq a}\sum_{b'\in\mathcal{L}_{\mathcal{T}}}\left|\widehat{C}_a\cap C_{a'}\right|\cdot\left|\widehat{C}_b\cap C_{b'}\right|+\left|\widehat{C}_a\cap C_a\right|\sum_{b'\neq b}\left|\widehat{C}_b\cap C_{b'}\right|$$
$$=\; \left|\widehat{C}_a\cap C_a^c\right|\cdot\left|\widehat{C}_b\right|+\left|\widehat{C}_a\cap C_a\right|\cdot\left|\widehat{C}_b\cap C_b^c\right| \;\leq\; \left(\left|\widehat{C}_b\right|+\left|\widehat{C}_a\right|\right)\mathrm{loss}\left(\mathcal{C},\widehat{\mathcal{C}}\right),$$
where the last inequality uses $\left|\widehat{C}_a\cap C_a\right|\leq\left|\widehat{C}_a\right|$ and $\mathrm{loss}\left(\mathcal{C},\widehat{\mathcal{C}}\right)=\sum_{c=1}^K\left|\widehat{C}_c\cap C_c^c\right|+\left|\widehat{C}_c^c\cap C_c\right|$. Hence,
$$E_2 \;\leq\; 2cp_{ab}\left(\frac{1}{\left|\widehat{C}_a\right|}+\frac{1}{\left|\widehat{C}_b\right|}\right)\mathrm{loss}\left(\mathcal{C},\widehat{\mathcal{C}}\right).$$
The assumption that $\widehat{\mathcal{C}}$ is almost exact ensures that $\mathrm{loss}\left(\mathcal{C},\widehat{\mathcal{C}}\right)=o(N)$ and $\left|\widehat{C}_a\right|,\left|\widehat{C}_b\right|=\Theta(N)$. Therefore, $E_2=o(p_{ab})$. This concludes the proof. ∎
A.2 Proofs for Section 3.3
In this section, we prove Theorem 2 and Proposition 1. We start in Section A.2.1 by stating a lemma that gives a general condition for the robustness of average-linkage with respect to errors present in an estimator $\widetilde{\mathcal{C}}$ of $\mathcal{C}$, where we assume that $\widetilde{\mathcal{C}}$ is independent of the graph $G$. We then show in Sections A.2.2 and A.2.3 how to derive Theorem 2 and Proposition 1 from this Lemma 3. The main ingredients of the proof of Lemma 3 are given in Section A.2.4, whereas the most tedious computations are detailed in Section A.2.5.
A.2.1 A General Lemma
Let us denote by $B(h)$ the number of bottom communities that have similarity $h$ with a given bottom community $a$. By symmetry of BTSBMs, $B(h)$ does not depend on $a$, and we have
$$B(h) \;=\; \begin{cases}2^{d-h-1}&h\in\{0,1,\cdots,d-1\},\\ 1&h=d.\end{cases}\tag{A.4}$$
We notice that for $0\leq h_1\leq d$, $B(h)$ satisfies
$$\sum_{h=h_1}^d B(h) \;=\; B(h_1-1),\tag{A.5}$$
where by a slight abuse of notation we define $B(-1)=2^d$. The value of $B(h)$ is also equal to the number of bottom communities whose similarity with the bottom community $a$ is no less than $h+1$. Denote by $\bar{p}_h$ the expected edge density inside the super-community whose similarity with the bottom community $a$ is no less than $h$. By symmetry of BTSBMs, $\bar{p}_h$ does not depend on $a$, and we have
$$\bar{p}_h \;=\; \frac{\sum_{s=h}^d B(s)p(s)}{B(h-1)}.\tag{A.6}$$
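These counting identities are easy to check numerically. The sketch below (illustrative only; the probability profile `p` is an assumed example, not from the paper) verifies (A.5) and the closed form of $\bar{p}_1$ stated later in Proposition 1.

```python
def B(h, d):
    """Number of bottom communities at similarity h from a fixed leaf, cf. (A.4).
    The convention B(-1) = 2^d of (A.5) is covered by the same formula."""
    return 1 if h == d else 2 ** (d - h - 1)

def p_bar(h, d, p):
    """Expected edge density inside the super-community at similarity >= h, cf. (A.6).
    p[s] is the connection probability at similarity s (assumed input profile)."""
    return sum(B(s, d) * p[s] for s in range(h, d + 1)) / B(h - 1, d)

d = 3
p = [0.01, 0.02, 0.04, 0.08]   # illustrative assortative profile, increasing in s
# identity (A.5): sum_{h=h1}^{d} B(h) = B(h1 - 1)
assert all(sum(B(h, d) for h in range(h1, d + 1)) == B(h1 - 1, d)
           for h1 in range(0, d + 1))
# closed form of p_bar(1) used in Proposition 1
closed = (p[d] + sum(2 ** (k - 1) * p[d - k] for k in range(1, d))) / 2 ** (d - 1)
assert abs(p_bar(1, d, p) - closed) < 1e-12
```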
Let us now state the following lemma.
Lemma 3.
Let $G$ be a BTSBM whose latent binary tree $\mathcal{T}$ has depth $d_{\mathcal{T}}\geq 2$. Suppose that $\pi_a=1/K$ for all $a\in\mathcal{L}_{\mathcal{T}}$ and that $\min_{u\in\mathcal{T}}p(u)=\omega(N^{-2})$. Let $\widetilde{\mathcal{C}}$ be a clustering obtained from $\mathcal{C}$ as described in Section 3.2, and let $\widehat{\mathcal{T}}$ be the hierarchical tree obtained by average-linkage from $\widetilde{\mathcal{C}}$. Then $\widehat{\mathcal{T}}=\mathcal{T}$ if and only if for all $h_{ac}\leq d-2$ the following condition is satisfied:
$$\sum_{h_1=h_{ac}+1}^{d-1}\sum_{h_2=h_1+1}^{d}2B(h_1)B(h_2)\left(\zeta(h_1)-\zeta(h_{ac})\right)\left(\zeta(h_2)-\zeta(h_{ac})\right)\left(p_{h_1}-p_{h_{ac}}\right)$$
$$+\;\sum_{h_1=h_{ac}+1}^{d-1}B^2(h_1)\left(\zeta(h_1)-\zeta(h_{ac})\right)^2\left(\bar{p}_{h_1+1}-p_{h_{ac}}\right)+\left(p_{d-1}-p_{h_{ac}}\right)\left(\zeta(d)-\zeta(h_{ac})\right)^2$$
$$+\;2\left(p_d-p_{d-1}\right)\left(\zeta(d-1)-\zeta(h_{ac})\right)\left(\zeta(d)-\frac{\zeta(d-1)+\zeta(h_{ac})}{2}\right) \;>\; 0.\tag{A.7}$$
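Condition (A.7) can be transcribed directly into code, which makes it easy to probe for given error profiles $\zeta$ and probability profiles $p$. The sketch below is illustrative (inputs are assumed examples, names ours); for an assortative profile and a non-decreasing $\zeta$ with $\zeta(d)>\frac{\zeta(d-1)+\zeta(h_{ac})}{2}$, the left-hand side is positive for every $h_{ac}\leq d-2$, as Theorem 2 asserts.

```python
def B(h, d):
    """Number of bottom communities at similarity h, cf. (A.4)."""
    return 1 if h == d else 2 ** (d - h - 1)

def condition_A7_lhs(zeta, p, d, h_ac):
    """Left-hand side of Condition (A.7). zeta[h] and p[h] are indexed by
    similarity h; p_bar follows (A.6). Illustrative transcription."""
    p_bar = {h: sum(B(s, d) * p[s] for s in range(h, d + 1)) / B(h - 1, d)
             for h in range(1, d + 1)}
    lhs = 0.0
    for h1 in range(h_ac + 1, d):
        for h2 in range(h1 + 1, d + 1):
            lhs += (2 * B(h1, d) * B(h2, d) * (zeta[h1] - zeta[h_ac])
                    * (zeta[h2] - zeta[h_ac]) * (p[h1] - p[h_ac]))
        lhs += B(h1, d) ** 2 * (zeta[h1] - zeta[h_ac]) ** 2 * (p_bar[h1 + 1] - p[h_ac])
    lhs += (p[d - 1] - p[h_ac]) * (zeta[d] - zeta[h_ac]) ** 2
    lhs += (2 * (p[d] - p[d - 1]) * (zeta[d - 1] - zeta[h_ac])
            * (zeta[d] - (zeta[d - 1] + zeta[h_ac]) / 2))
    return lhs

d = 3
p = [0.01, 0.02, 0.04, 0.08]   # assortative: increasing in similarity
zeta = [0.0, 0.05, 0.1, 0.8]   # non-decreasing error profile
assert all(condition_A7_lhs(zeta, p, d, h_ac) > 0 for h_ac in range(0, d - 1))
```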
The proof of Lemma 3 is tedious, and we defer it to Section A.2.4. Let us first establish Theorem 2 and Proposition 1 from Lemma 3.
A.2.2 Proof of Theorem 2
Proof of Theorem 2 as a corollary of Lemma 3.
By assortativity, $p_{h_2}-p_{h_1}>0$ for all $h_2>h_1$ and $\bar{p}_h>p_{h_{ac}}$ for all $h>h_{ac}$. Moreover, we have $\zeta(h_1)\leq\zeta(h_2)$ for all $h_1<h_2$ and $\zeta(h)-\zeta(h_{ac})\geq 0$ for all $h>h_{ac}$. Therefore, for all $h_{ac}\leq d-2$, we have
$$\sum_{h_1=h_{ac}+1}^{d-1}\sum_{h_2=h_1+1}^{d}2B(h_1)B(h_2)\left(\zeta(h_1)-\zeta(h_{ac})\right)\left(\zeta(h_2)-\zeta(h_{ac})\right)\left(p_{h_1}-p_{h_{ac}}\right)$$
$$+\;\sum_{h_1=h_{ac}+1}^{d-1}B^2(h_1)\left(\zeta(h_1)-\zeta(h_{ac})\right)^2\left(\bar{p}_{h_1+1}-p_{h_{ac}}\right)+\left(p_{d-1}-p_{h_{ac}}\right)\left(\zeta(d)-\zeta(h_{ac})\right)^2 \;\geq\; 0,$$
because each term on the left-hand side is non-negative. Moreover, because $\zeta(d)>\frac{\zeta(d-1)+\zeta(h_{ac})}{2}$,
$$\left(p_d-p_{d-1}\right)\left(\zeta(d-1)-\zeta(h_{ac})\right)\left(\zeta(d)-\frac{\zeta(d-1)+\zeta(h_{ac})}{2}\right)>0,$$
and thus Condition (3) is satisfied. ∎
A.2.3 An Adversarial Setting where $\zeta$ is not Non-decreasing
In an adversarial setting, $\zeta$ might not be non-decreasing. For instance, consider a scenario where misclustered nodes are deliberately assigned to a cluster chosen uniformly at random among those farthest away from their true cluster. Define $\widetilde{\mathcal{C}}_{\mathrm{adversarial}}=\widetilde{\mathcal{C}}(\mathcal{C},\zeta_{\mathrm{adversarial}})$, where
$$\zeta_{\mathrm{adversarial}}(h) \;=\; \begin{cases}\frac{\eta}{2^{d-1}}&\text{if }h=0,\\ 0&\text{if }h\in\{1,\cdots,d-1\}.\end{cases}\tag{A.8}$$
Under the additional conditions given in Proposition 1, the average-linkage procedure recovers the tree from $\widetilde{\mathcal{C}}_{\mathrm{adversarial}}$.
Proposition 1.
Let $G$ be a BTSBM whose latent binary tree $\mathcal{T}$ has $K=\Theta(1)$ leaves. Suppose that $\pi_a=1/K$ for all $a\in\mathcal{L}_{\mathcal{T}}$ and that $\min_{u\in\mathcal{T}}p(u)=\omega(N^{-2})$. Suppose that $\eta<1/2$. Then the average-linkage procedure correctly recovers $\mathcal{T}$ (starting from $\widetilde{\mathcal{C}}_{\mathrm{adversarial}}$) if one of the following conditions is verified:
$$(i)\ \bar{p}_1<p_{d-1}\qquad\text{or}\qquad(ii)\ \bar{p}_1\geq p_{d-1}\text{ and }\eta<\eta_-,$$
where $\bar{p}_1=\frac{1}{2^{d-1}}\left(p_d+\sum_{k=1}^{d-1}2^{k-1}p_{d-k}\right)$ and $\eta_-=\frac{p_{d-1}+\bar{p}_1-2p_0-\sqrt{(\bar{p}_1-p_{d-1})(\bar{p}_1-p_0)}}{p_{d-1}+3\bar{p}_1-4p_0}$.
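The threshold $\eta_-$ is straightforward to evaluate. The sketch below (illustrative densities chosen by us with $p_0<p_{d-1}\leq\bar{p}_1$, i.e., case (ii)) computes $\eta_-$ and checks its role as the smaller root of the quadratic (A.13) derived in the proof: the quadratic is positive just below $\eta_-$ and negative just above it.

```python
import math

def eta_minus(p0, pd1, pbar1):
    """Threshold eta_- of Proposition 1 (assumes pbar1 >= pd1 > p0)."""
    num = pd1 + pbar1 - 2 * p0 - math.sqrt((pbar1 - pd1) * (pbar1 - p0))
    return num / (pd1 + 3 * pbar1 - 4 * p0)

def quad_A13(eta, p0, pd1, pbar1):
    """Left-hand side of (A.13):
    (eta^2 - 2 eta (1 - eta)) (pbar1 - p0) + (pd1 - p0) (1 - eta)^2."""
    return (eta ** 2 - 2 * eta * (1 - eta)) * (pbar1 - p0) \
        + (pd1 - p0) * (1 - eta) ** 2

p0, pd1, pbar1 = 0.01, 0.03, 0.05        # illustrative case (ii): pbar1 > pd1
em = eta_minus(p0, pd1, pbar1)
assert 0 < em < 0.5                      # threshold lies in the admissible range
assert quad_A13(0.9 * em, p0, pd1, pbar1) > 0   # tree recovered below eta_-
assert quad_A13(1.1 * em, p0, pd1, pbar1) < 0   # condition fails above eta_-
```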
Proof of Proposition 1 as a corollary of Lemma 3.
When $h_{ac}>0$, substituting $\zeta(h)$ by Equation (A.8), the left-hand side of Condition (3) becomes
$$\left(p_{d-1}-p_{h_{ac}}\right)\left(1-\eta\right)^2>0,$$
so Condition (3) is satisfied.
When $h_{ac}=0$, we again replace $\zeta(h)$ by Equation (A.8), and denote $\frac{\eta}{2^{d-1}}$ by $\zeta$ for convenience. The left-hand side of Condition (3) then reads:
$$\underbrace{\sum_{h_1=1}^{d-1}\sum_{h_2=h_1+1}^{d-1}2B(h_1)B(h_2)\zeta^2\left(p_{h_1}-p_0\right)}_{A_1}-\underbrace{\sum_{h_1=1}^{d-1}2B(h_1)\zeta\left(1-\eta-\zeta\right)\left(p_{h_1}-p_0\right)}_{A_2}$$
$$+\;\underbrace{\sum_{h_1=1}^{d-1}B^2(h_1)\zeta^2\left(\bar{p}_{h_1+1}-p_0\right)}_{A_3}+\left(p_{d-1}-p_0\right)\left(1-\eta-\zeta\right)^2-2\left(p_d-p_{d-1}\right)\zeta\left(1-\eta-\frac{\zeta}{2}\right).\tag{A.9}$$
We notice that
$$A_1 \;=\; \sum_{h_1=1}^{d-1}\sum_{h_2=h_1+1}^{d}2B(h_1)B(h_2)\zeta^2\left(p_{h_1}-p_0\right)-\sum_{h_1=1}^{d-1}2B(h_1)B(d)\zeta^2\left(p_{h_1}-p_0\right)$$
$$\overset{(a)}{=}\; \sum_{h_1=1}^{d-1}2B^2(h_1)\zeta^2\left(p_{h_1}-p_0\right)-\sum_{h_1=1}^{d-1}2B(h_1)B(d)\zeta^2\left(p_{h_1}-p_0\right),\tag{A.10}$$
where in (a) we use Equation (A.5). Expanding $\bar{p}_{h_1+1}$ with Equations (A.5) and (A.6), we also have
$$A_3 \;=\; \sum_{h_1=1}^{d-1}\sum_{h_2=h_1+1}^{d}B(h_1)B(h_2)\zeta^2\left(p_{h_2}-p_0\right) \;\overset{(a)}{=}\; \sum_{h_2=1}^{d}\sum_{h_1=1}^{h_2-1}B(h_1)B(h_2)\zeta^2\left(p_{h_2}-p_0\right)$$
$$=\; \sum_{h_2=1}^{d}\left(\sum_{h_1=1}^{d}B(h_1)-\sum_{h_1=h_2}^{d}B(h_1)\right)B(h_2)\zeta^2\left(p_{h_2}-p_0\right) \;\overset{(b)}{=}\; \sum_{h_2=1}^{d}\left(B(0)-B(h_2-1)\right)B(h_2)\zeta^2\left(p_{h_2}-p_0\right)$$
$$\overset{(c)}{=}\; \sum_{h_1=1}^{d}B(0)B(h_1)\zeta^2\left(p_{h_1}-p_0\right)-\left(\sum_{h_1=1}^{d-1}2B^2(h_1)\zeta^2\left(p_{h_1}-p_0\right)+\zeta^2\left(p_d-p_0\right)\right),\tag{A.11}$$
where in (a) we swap the order of summation from $\sum_{h_1=1}^{d-1}\sum_{h_2=h_1+1}^{d}$ to $\sum_{h_2=1}^{d}\sum_{h_1=1}^{h_2-1}$, and in (b) we use Equation (A.5). Finally, to obtain the last equality (c), we re-index $h_2$ to $h_1$ and use
$$B(h-1) \;=\; \begin{cases}2B(h)&h\in\{0,1,\cdots,d-1\},\\ B(d)&h=d.\end{cases}\tag{A.12}$$
(Recall also that $B(d)=1$.) By combining Equations (A.10) and (A.11), we obtain
$$A_1-A_2+A_3 \;=\; \sum_{h_1=1}^{d-1}2B^2(h_1)\zeta^2\left(p_{h_1}-p_0\right)-\sum_{h_1=1}^{d-1}2B(h_1)\zeta\left(1-\eta\right)\left(p_{h_1}-p_0\right)$$
$$\qquad+\sum_{h_1=1}^{d}B(0)B(h_1)\zeta^2\left(p_{h_1}-p_0\right)-\left(\sum_{h_1=1}^{d-1}2B^2(h_1)\zeta^2\left(p_{h_1}-p_0\right)+\zeta^2\left(p_d-p_0\right)\right)$$
$$=\; -\sum_{h_1=1}^{d-1}2B(h_1)\zeta\left(1-\eta\right)\left(p_{h_1}-p_0\right)+\sum_{h_1=1}^{d}B(0)B(h_1)\zeta^2\left(p_{h_1}-p_0\right)-\zeta^2\left(p_d-p_0\right)$$
$$=\; -\sum_{h_1=1}^{d}2B(h_1)\zeta\left(1-\eta\right)\left(p_{h_1}-p_0\right)+2\zeta\left(1-\eta\right)\left(p_d-p_0\right)+\sum_{h_1=1}^{d}B(0)B(h_1)\zeta^2\left(p_{h_1}-p_0\right)-\zeta^2\left(p_d-p_0\right)$$
$$\overset{(a)}{=}\; -2B(0)\zeta\left(1-\eta\right)\left(\bar{p}_1-p_0\right)+2\zeta\left(1-\eta\right)\left(p_d-p_0\right)+B^2(0)\zeta^2\left(\bar{p}_1-p_0\right)-\zeta^2\left(p_d-p_0\right)$$
$$=\; -2\eta\left(1-\eta\right)\left(\bar{p}_1-p_0\right)+2\zeta\left(1-\eta\right)\left(p_d-p_0\right)+\eta^2\left(\bar{p}_1-p_0\right)-\zeta^2\left(p_d-p_0\right),$$
where in (a) we use Equations (A.6) and (A.12), and in the last equality we use $B(0)\zeta=\eta$.
By substituting $A_1-A_2+A_3$, we can rewrite Equation (A.9) as
$$\left(\eta^2-2\eta\left(1-\eta\right)\right)\left(\bar{p}_1-p_0\right)+2\zeta\left(1-\eta\right)\left(p_d-p_0\right)-\zeta^2\left(p_d-p_0\right)$$
$$\qquad+\left(p_{d-1}-p_0\right)\left(\left(1-\eta\right)^2+\zeta^2-2\zeta\left(1-\eta\right)\right)-\left(p_d-p_{d-1}\right)\zeta\left(2\left(1-\eta\right)-\zeta\right)$$
$$=\; \left(\eta^2-2\eta\left(1-\eta\right)\right)\left(\bar{p}_1-p_0\right)+\left(p_{d-1}-p_0\right)\left(1-\eta\right)^2,$$
since the terms carrying the common factor $2\zeta(1-\eta)-\zeta^2$ cancel. Therefore, Condition (3) is equivalent to
$$\left(\eta^2-2\eta\left(1-\eta\right)\right)\left(\bar{p}_1-p_0\right)+\left(p_{d-1}-p_0\right)\left(1-\eta\right)^2>0.\tag{A.13}$$
If $p_{d-1}>\bar{p}_1$, we have
$$\left(\eta^2-2\eta\left(1-\eta\right)\right)\left(\bar{p}_1-p_0\right)+\left(p_{d-1}-p_0\right)\left(1-\eta\right)^2 \;>\; \left(\bar{p}_1-p_0\right)\left(\left(1-\eta\right)^2+\eta^2-2\eta\left(1-\eta\right)\right) \;=\; \left(\bar{p}_1-p_0\right)\left(1-2\eta\right)^2,$$
where the inequality is strict because $p_{d-1}>\bar{p}_1$ and $(1-\eta)^2>0$; the last quantity is positive since $\bar{p}_1>p_0$ and $\eta<1/2$. Therefore, $p_{d-1}>\bar{p}_1$ is a sufficient condition to recover the tree, which proves point (i) of Proposition 1.
If $p_{d-1}\leq\bar{p}_1$, we can rewrite the left-hand side of Equation (A.13) (by "completing the square") as
$$\left(p_{d-1}+3\bar{p}_1-4p_0\right)\eta^2-2\left(p_{d-1}+\bar{p}_1-2p_0\right)\eta+\left(p_{d-1}-p_0\right)$$
$$=\; \left(p_{d-1}+3\bar{p}_1-4p_0\right)\left(\eta-\frac{p_{d-1}+\bar{p}_1-2p_0}{p_{d-1}+3\bar{p}_1-4p_0}\right)^2+\left(p_{d-1}-p_0\right)-\frac{\left(p_{d-1}+\bar{p}_1-2p_0\right)^2}{p_{d-1}+3\bar{p}_1-4p_0}$$
$$=\; \left(p_{d-1}+3\bar{p}_1-4p_0\right)\left(\eta-\frac{p_{d-1}+\bar{p}_1-2p_0}{p_{d-1}+3\bar{p}_1-4p_0}\right)^2-\frac{\left(\bar{p}_1-p_{d-1}\right)\left(\bar{p}_1-p_0\right)}{p_{d-1}+3\bar{p}_1-4p_0}.$$
Therefore, Equation (A.13) is equivalent to
$$\left(p_{d-1}+3\bar{p}_1-4p_0\right)\left(\eta-\frac{p_{d-1}+\bar{p}_1-2p_0}{p_{d-1}+3\bar{p}_1-4p_0}\right)^2-\frac{\left(\bar{p}_1-p_{d-1}\right)\left(\bar{p}_1-p_0\right)}{p_{d-1}+3\bar{p}_1-4p_0}>0.$$
Because $p_{d-1}+3\bar{p}_1-4p_0>0$, this inequality is satisfied if $\eta<\eta_-$ or if $\eta>\eta_+$, where
$$\eta_- \;=\; \frac{p_{d-1}+\bar{p}_1-2p_0-\sqrt{\left(\bar{p}_1-p_{d-1}\right)\left(\bar{p}_1-p_0\right)}}{p_{d-1}+3\bar{p}_1-4p_0},\qquad \eta_+ \;=\; \frac{p_{d-1}+\bar{p}_1-2p_0+\sqrt{\left(\bar{p}_1-p_{d-1}\right)\left(\bar{p}_1-p_0\right)}}{p_{d-1}+3\bar{p}_1-4p_0}.$$
Notice that
$$\eta_+ \;=\; \frac{\sqrt{\bar{p}_1-p_{d-1}}\left(2\sqrt{\bar{p}_1-p_0}-\sqrt{\bar{p}_1-p_{d-1}}\right)}{2\left(p_{d-1}+3\bar{p}_1-4p_0\right)}+\frac{1}{2} \;>\; \frac{1}{2},$$
and hence the condition $\eta>\eta_+$ cannot hold (recall that $0\leq\eta<1/2$). In contrast, we have
$$\frac{p_{d-1}+\bar{p}_1-2p_0}{p_{d-1}+3\bar{p}_1-4p_0}-\frac{1}{2} \;=\; \frac{p_{d-1}-\bar{p}_1}{2\left(p_{d-1}+3\bar{p}_1-4p_0\right)} \;\leq\; 0,$$
and hence $\eta_-\leq\frac{1}{2}$. Moreover, using
$$\left(p_{d-1}+\bar{p}_1-2p_0\right)^2-\left(\bar{p}_1-p_{d-1}\right)\left(\bar{p}_1-p_0\right) \;=\; p_{d-1}^2+4p_0^2+3\bar{p}_1 p_{d-1}-5p_{d-1}p_0-3p_0\bar{p}_1 \;=\; 3\left(p_{d-1}-p_0\right)\left(\bar{p}_1-p_0\right)+\left(p_{d-1}-p_0\right)^2,$$
together with $p_{d-1}+\bar{p}_1-2p_0>0$ and $\left(\bar{p}_1-p_{d-1}\right)\left(\bar{p}_1-p_0\right)>0$, it follows that $\eta_->0$. Therefore, when $p_{d-1}\leq\bar{p}_1$, Condition (3) is satisfied if $0\leq\eta<\eta_-$. ∎
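The algebra behind (A.13) — the quadratic expansion in $\eta$ and the discriminant simplification yielding $\eta_\pm$ — can be spot-checked numerically. The sketch below is illustrative only (random admissible inputs with $p_0<p_{d-1}\leq\bar{p}_1$, names ours), not part of the proof.

```python
import random

def lhs_A13(eta, p0, pd1, pb1):
    """Left-hand side of (A.13)."""
    return (eta ** 2 - 2 * eta * (1 - eta)) * (pb1 - p0) \
        + (pd1 - p0) * (1 - eta) ** 2

def quadratic(eta, p0, pd1, pb1):
    """Completed-square form: quadratic in eta with the stated coefficients."""
    return (pd1 + 3 * pb1 - 4 * p0) * eta ** 2 \
        - 2 * (pd1 + pb1 - 2 * p0) * eta + (pd1 - p0)

random.seed(0)
for _ in range(1000):
    p0 = random.random()
    pd1 = p0 + random.random()       # p0 < p_{d-1}
    pb1 = pd1 + random.random()      # p_{d-1} <= pbar_1 (case (ii))
    eta = random.random() / 2        # admissible range 0 <= eta < 1/2
    # the two forms of (A.13) agree
    assert abs(lhs_A13(eta, p0, pd1, pb1) - quadratic(eta, p0, pd1, pb1)) < 1e-9
    # quarter-discriminant identity behind eta_- and eta_+
    disc = (pd1 + pb1 - 2 * p0) ** 2 - (pd1 + 3 * pb1 - 4 * p0) * (pd1 - p0)
    assert abs(disc - (pb1 - pd1) * (pb1 - p0)) < 1e-9
```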
A.2.4 Proof of Lemma 3
Proof of Lemma 3.
We proceed by showing that the edge density $\rho\left(\widetilde{C}_a,\widetilde{C}_b\right)$ is concentrated around its mean. We denote by $O_{ab}=C_a\cap\widetilde{C}_b$ the set of nodes in cluster $C_a$ that are assigned to cluster $\widetilde{C}_b$. In particular, the set of all misclassified nodes is $O=\cup_{a\neq b\in\mathcal{L}_{\mathcal{T}}}O_{ab}$ and is independent of the edges. Therefore, $w\left(O_{ab},O_{cd}\right)\sim\operatorname{Bin}\left(|O_{ab}|\cdot|O_{cd}|,p_{ac}\right)$. From the definition of the edge density (2.1) and the fact that $\widetilde{C}_a=\cup_{k\in\mathcal{L}_{\mathcal{T}}}O_{ka}$, we note that
$$\rho\left(\widetilde{C}_a,\widetilde{C}_b\right) \;=\; \frac{w\left(\widetilde{C}_a,\widetilde{C}_b\right)}{\left|\widetilde{C}_a\right|\cdot\left|\widetilde{C}_b\right|}\quad\text{with}\quad w\left(\widetilde{C}_a,\widetilde{C}_b\right) \;=\; \sum_{k,\ell\in\mathcal{L}_{\mathcal{T}}}w\left(O_{ka},O_{\ell b}\right).\tag{A.14}$$
We will express $\mathbb{E}w\left(O_{ka},O_{\ell b}\right)$ in each scenario. The concentration of $\rho\left(\widetilde{C}_a,\widetilde{C}_b\right)$ around its mean then follows from Chernoff's bound.
In the remainder of the proof, we let $a,b,c\in\mathcal{L}_{\mathcal{T}}$ be three distinct leaves such that $b$ is the closest leaf to $a$, i.e., $b=\operatorname*{arg\,max}_{k\in\mathcal{L}_{\mathcal{T}}}|\mathrm{lca}(a,k)|$, and $c\not\in\{a,b\}$. To conclude that the linkage procedure outputs the correct tree, we have to verify that
$$\forall c\not\in\{a,b\}\colon\ \rho\left(\widetilde{C}_a,\widetilde{C}_b\right)>\rho\left(\widetilde{C}_a,\widetilde{C}_c\right).$$
Since $\rho\left(\widetilde{C}_a,\widetilde{C}_c\right)=(1+o(1))\rho\left(\widetilde{C}_b,\widetilde{C}_c\right)$, Equation (2.2) ensures that $\rho\left(\widetilde{C}_{a\cup b},\widetilde{C}_c\right)=(1+o(1))\rho\left(\widetilde{C}_a,\widetilde{C}_c\right)$ after the merge, which in turn guarantees that the new super-community $a\cup b$ does not merge with any other community before all the bottom communities are merged. After all the bottom communities are merged, we again obtain balanced (super-)communities, and we repeat these steps for the super-communities as we did for the bottom communities.
The random variables $w\left(O_{ka},O_{\ell b}\right)$ follow a Binomial distribution such that $\mathbb{E}w\left(O_{ka},O_{\ell b}\right)=\left(\frac{N}{K}\right)^2\zeta(|\mathrm{lca}(k,a)|)\,\zeta(|\mathrm{lca}(\ell,b)|)\,p_{|\mathrm{lca}(k,\ell)|}$, where $\zeta(h)$ is the fraction of nodes mislabeled from a bottom community with similarity $h$ to the bottom community $a$. We define $h_1=|\mathrm{lca}(k,a)|$, $h_2=|\mathrm{lca}(\ell,b)|$, and $h_{ac}=|\mathrm{lca}(a,c)|$.
When $h_{1}=h_{ac}<h_{2}\leq d$, each node of the fraction $\zeta(h_{1})=\zeta(h_{ac})$ of the bottom cluster $c$ is mislabeled as a node inside a uniformly randomly chosen bottom cluster whose similarity with the bottom cluster $a$ is no less than $h_{1}+1=h_{ac}+1$. Furthermore, each node of the fraction $\zeta(h_{2})$ of the bottom cluster $a$ is mislabeled as a node inside a uniformly randomly chosen bottom cluster whose similarity with the bottom cluster $a$ is $h_{2}\,(>h_{1}=h_{ac})$. Therefore, in this case, the similarity between a node from $a$ and a node from $c$ is exactly the same as when the two nodes are randomly picked from the super-community $C_{u}=\bigcup_{b\in\mathcal{L}_{\mathcal{T}}\colon b_{1:(h_{ac}+1)}=a_{1:(h_{ac}+1)}}C_{b}$. Hence, the expected edge probability between these two nodes is $\bar{p}_{h_{1}+1}=\bar{p}_{h_{ac}+1}$. Similarly, when $h_{1}=h_{2}<h_{ac}$, each node of the fraction $\zeta(h_{1})=\zeta(h_{2})$ of both bottom clusters $a$ and $c$ is mislabeled as a node inside a uniformly randomly chosen bottom cluster whose similarity with the bottom cluster $a$ (and also with $c$) is $h_{1}=h_{2}$. Therefore, in this case, the expected edge probability between a node from $a$ and a node from $c$ is $\bar{p}_{h_{1}+1}=\bar{p}_{h_{2}+1}$. In the other cases, the expected edge probability between the bottom clusters $a$ and $c$ is $\min(p_{h_{1}},p_{h_{2}},p_{h_{ac}})=\min(p_{h_{1}},p_{h_{ac}})$.
Now we obtain whp that, for any $c\neq a$,
$$\mathbb{E}w\left(O_{ka},O_{\ell c}\right)\ =\ \left(\frac{N}{K}\right)^{2}\times\begin{cases}\zeta(h_{1})\zeta(h_{2})\,p_{h_{1}}&\text{if }0\leq h_{1}<h_{ac}\text{ and }h_{1}<h_{2}\leq d,\\ \zeta(h_{ac})\zeta(h_{2})\,\bar{p}_{h_{ac}+1}&\text{if }h_{1}=h_{ac}\text{ and }h_{ac}<h_{2}\leq d,\\ \zeta(h_{1})\zeta(h_{2})\,p_{h_{ac}}&\text{if }h_{ac}<h_{1}<h_{2}\leq d,\\ \zeta^{2}(h_{1})\,\bar{p}_{h_{1}+1}&\text{if }0\leq h_{1}=h_{2}<h_{ac},\\ \zeta^{2}(h_{1})\,p_{h_{ac}}&\text{if }h_{ac}\leq h_{1}=h_{2}\leq d.\end{cases}$$
The special case $h_{ac}=d-1$ occurs when $c=b$. In that case, we have
$$\mathbb{E}w\left(O_{ka},O_{\ell b}\right)\ =\ \left(\frac{N}{K}\right)^{2}\times\begin{cases}\zeta(h_{1})\zeta(h_{2})\,p_{h_{1}}&\text{if }0\leq h_{1}<d-1\text{ and }h_{1}<h_{2}\leq d,\\ \zeta(d-1)\zeta(d)\,p_{d}&\text{if }(h_{1},h_{2})=(d-1,d),\\ \zeta^{2}(h_{1})\,\bar{p}_{h_{1}+1}&\text{if }h_{1}=h_{2}<d-1,\\ \zeta^{2}(h_{1})\,p_{d-1}&\text{if }d-1\leq h_{1}=h_{2}\leq d.\end{cases}$$
Because $c\notin\{a,b\}$, we have $h_{ac}<d-1$. Thus, $\mathbb{E}w\left(O_{ka},O_{\ell b}\right)-\mathbb{E}w\left(O_{ka},O_{\ell c}\right)$ is equal to
$$\left(\frac{N}{K}\right)^{2}\times\begin{cases}0&\text{if }0\leq h_{1}<h_{ac}\text{ and }h_{1}<h_{2}\leq d,\\ \zeta(h_{ac})\zeta(h_{2})\left(p_{h_{ac}}-\bar{p}_{h_{ac}+1}\right)&\text{if }h_{1}=h_{ac}<h_{2}\leq d,\\ \zeta(h_{1})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)&\text{if }h_{ac}<h_{1}<h_{2}\leq d\text{ and }h_{1}<d-1,\\ \zeta(d-1)\zeta(d)\left(p_{d}-p_{h_{ac}}\right)&\text{if }(h_{1},h_{2})=(d-1,d),\\ 0&\text{if }0\leq h_{1}=h_{2}<h_{ac},\\ \zeta^{2}(h_{1})\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)&\text{if }h_{ac}\leq h_{1}=h_{2}<d-1,\\ \zeta^{2}(h_{1})\left(p_{d-1}-p_{h_{ac}}\right)&\text{if }d-1\leq h_{1}=h_{2}\leq d.\end{cases}$$
Because $B(h)$ is the number of bottom communities having similarity $h$ with the bottom community $a$, we obtain
$$\begin{aligned}\rho\left(\tilde{C}_{a},\tilde{C}_{b}\right)-\rho\left(\tilde{C}_{a},\tilde{C}_{c}\right)\ =\ (1+o(1))\Bigg(&-\sum_{h_{2}=h_{ac}+1}^{d}2B(h_{ac})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(\bar{p}_{h_{ac}+1}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d-2}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{1})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)+2\zeta(d-1)\zeta(d)\left(p_{d}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}}^{d-1}B^{2}(h_{1})\zeta^{2}(h_{1})\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)+\zeta^{2}(d)\left(p_{d-1}-p_{h_{ac}}\right)\Bigg).\end{aligned}\tag{A.15}$$
We show in Section A.2.5 that we can further express $\rho\left(\tilde{C}_{a},\tilde{C}_{b}\right)-\rho\left(\tilde{C}_{a},\tilde{C}_{c}\right)$ as
$$\begin{aligned}(1+o(1))\Bigg(&\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\left(\zeta(h_{1})-\zeta(h_{ac})\right)\left(\zeta(h_{2})-\zeta(h_{ac})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)\\ &+2\left(\zeta(d-1)-\zeta(h_{ac})\right)\left(\zeta(d)-\frac{\zeta(d-1)+\zeta(h_{ac})}{2}\right)\left(p_{d}-p_{d-1}\right)\\ &+\left(\zeta(d)-\zeta(h_{ac})\right)^{2}\left(p_{d-1}-p_{h_{ac}}\right)\Bigg).\end{aligned}\tag{A.16}$$
Therefore, the condition $\rho\left(\tilde{C}_{a},\tilde{C}_{b}\right)-\rho\left(\tilde{C}_{a},\tilde{C}_{c}\right)>0$ is equivalent to condition (3). ∎
A.2.5 Additional Computation for the Proof of Lemma 3
In this subsection, we detail the tedious computations that transform Equation (A.15) into Equation (A.16) in the proof of Lemma 3. To highlight the differences between consecutive lines, we sometimes use bold characters. Let $\Delta\rho=\rho\left(\tilde{C}_{a},\tilde{C}_{b}\right)-\rho\left(\tilde{C}_{a},\tilde{C}_{c}\right)$. From Equation (A.15), we have
$$\begin{aligned}\Delta\rho\ =\ &(1+o(1))\Bigg(-\sum_{h_{2}=h_{ac}+1}^{d}2B(h_{ac})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(\bar{p}_{h_{ac}+1}-p_{h_{ac}}\right)\\ &\quad+\sum_{h_{1}=h_{ac}+1}^{d-2}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{1})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\quad+2B(d-1)B(d)\zeta(d-1)\zeta(d)\left(p_{d}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}}^{d-2}B^{2}(h_{1})\zeta^{2}(h_{1})\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)\\ &\quad+\sum_{h_{1}=d-1}^{d}B^{2}(h_{1})\zeta^{2}(h_{1})\left(p_{d-1}-p_{h_{ac}}\right)\Bigg)\\ =\ &(1+o(1))\Bigg(-\sum_{h_{1}=h_{ac}+1}^{d}\boldsymbol{\sum_{h_{2}=h_{ac}+1}^{d}}2B(h_{1})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(\boldsymbol{p_{h_{1}}}-p_{h_{ac}}\right)\\ &\quad+\sum_{h_{1}=h_{ac}+1}^{d-2}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{1})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\quad+2\zeta(d-1)\zeta(d)\left(p_{d}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}}^{d-2}\sum_{h=h_{1}+1}^{d}B(h_{1})\boldsymbol{B(h)}\zeta^{2}(h_{1})\left(\boldsymbol{p_{h}}-p_{h_{ac}}\right)\\ &\quad+\left(\zeta^{2}(d-1)+\zeta^{2}(d)\right)\left(p_{d-1}-p_{h_{ac}}\right)\Bigg).\end{aligned}$$
Thus,
$$\begin{aligned}\Delta\rho\ =\ &(1+o(1))\Bigg(-\sum_{h_{1}=h_{ac}+1}^{d}\sum_{h_{2}=h_{ac}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\quad+\sum_{h_{1}=h_{ac}+1}^{\boldsymbol{d-1}}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{1})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\quad\boldsymbol{-\,2\zeta(d-1)\zeta(d)\left(p_{d-1}-p_{h_{ac}}\right)}+2\zeta(d-1)\zeta(d)\left(p_{d}-p_{h_{ac}}\right)\\ &\quad+\sum_{h_{1}=h_{ac}}^{\boldsymbol{d-1}}\sum_{h=h_{1}+1}^{d}B(h_{1})B(h)\zeta^{2}(h_{1})\left(p_{h}-p_{h_{ac}}\right)\boldsymbol{-\zeta^{2}(d-1)\left(p_{d}-p_{h_{ac}}\right)}\\ &\quad+\left(\zeta^{2}(d-1)+\zeta^{2}(d)\right)\left(p_{d-1}-p_{h_{ac}}\right)\Bigg)\\ =\ &(1+o(1))\Bigg(-\underbrace{\sum_{h_{1}=h_{ac}+1}^{d}\sum_{h_{2}=h_{ac}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)}_{T_{1}}\\ &\quad+\underbrace{\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{1})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)}_{T_{2}}\\ &\quad+\underbrace{\sum_{h_{1}=h_{ac}}^{d-1}\sum_{h=h_{1}+1}^{d}B(h_{1})B(h)\zeta^{2}(h_{1})\left(p_{h}-p_{h_{ac}}\right)}_{T_{3}}+\boldsymbol{\zeta^{2}(d)\left(p_{d-1}-p_{h_{ac}}\right)}\\ &\quad+\boldsymbol{\left(2\zeta(d-1)\zeta(d)-\zeta^{2}(d-1)\right)\left(p_{d}-p_{d-1}\right)}\Bigg).\end{aligned}\tag{A.17}$$
We compute $T_{1}$ and $T_{3}$ separately. We have
$$\begin{aligned}T_{1}\ =\ &\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d}2B^{2}(h_{1})\zeta(h_{ac})\zeta(h_{1})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d}\sum_{h_{2}=h_{ac}+1}^{h_{1}-1}2B(h_{1})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ =\ &\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d}2B^{2}(h_{1})\zeta(h_{ac})\zeta(h_{1})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\boldsymbol{\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}}2B(h_{1})\zeta(h_{ac})\zeta(h_{1})\left(\boldsymbol{B(h_{2})}p_{h_{2}}-\boldsymbol{B(h_{2})}p_{h_{ac}}\right)\\ =\ &\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d}2B^{2}(h_{1})\zeta(h_{ac})\zeta(h_{1})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d-1}2\boldsymbol{B^{2}(h_{1})}\zeta(h_{ac})\zeta(h_{1})\left(\boldsymbol{\bar{p}_{h_{1}+1}}-p_{h_{ac}}\right),\end{aligned}$$
and
$$\begin{aligned}T_{3}\ =\ &\sum_{h_{1}=h_{ac}+1}^{d}\sum_{h_{2}=h_{ac}}^{h_{1}-1}B(h_{2})B(h_{1})\zeta^{2}(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ =\ &\sum_{h_{1}=h_{ac}+1}^{d}\sum_{h_{2}=\boldsymbol{h_{ac}+1}}^{h_{1}-1}B(h_{1})B(h_{2})\zeta^{2}(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)\boldsymbol{+\sum_{h_{1}=h_{ac}+1}^{d}B(h_{ac})B(h_{1})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)}\\ =\ &\sum_{h_{1}=h_{ac}+1}^{d}\sum_{h_{2}=h_{ac}+1}^{h_{1}-1}B(h_{1})B(h_{2})\zeta^{2}(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d}\boldsymbol{\sum_{h_{2}=h_{ac}+1}^{d}}B(h_{1})\boldsymbol{B(h_{2})}\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ =\ &\sum_{h_{1}=h_{ac}+1}^{d}\sum_{h_{2}=h_{ac}+1}^{h_{1}-1}B(h_{1})B(h_{2})\zeta^{2}(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d}\sum_{h_{2}=h_{ac}+1}^{\boldsymbol{h_{1}-1}}B(h_{1})B(h_{2})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d}\boldsymbol{B^{2}(h_{1})}\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{\boldsymbol{h_{2}=h_{1}+1}}^{d}B(h_{1})B(h_{2})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right).\end{aligned}$$
We can further manipulate $T_{3}$ to obtain
$$\begin{aligned}T_{3}\ =\ &\sum_{h_{1}=h_{ac}+1}^{d}\sum_{h_{2}=h_{ac}+1}^{h_{1}-1}B(h_{1})B(h_{2})\left(\zeta^{2}(h_{2})+\zeta^{2}(h_{ac})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d}B^{2}(h_{1})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}B(h_{1})B(h_{2})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ =\ &\boldsymbol{\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}}B(h_{1})\left(\zeta^{2}(h_{1})+\zeta^{2}(h_{ac})\right)\left(\boldsymbol{B(h_{2})}p_{h_{2}}-\boldsymbol{B(h_{2})}p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d}B^{2}(h_{1})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}B(h_{1})B(h_{2})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ =\ &\sum_{h_{1}=h_{ac}+1}^{d-1}\boldsymbol{B^{2}(h_{1})}\left(\zeta^{2}(h_{1})+\zeta^{2}(h_{ac})\right)\left(\boldsymbol{\bar{p}_{h_{1}+1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d}B^{2}(h_{1})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}B(h_{1})B(h_{2})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right).\end{aligned}$$
Combining the expressions for $T_{1}$ and $T_{3}$, we obtain
$$\begin{aligned}-T_{1}+T_{3}\ =\ &-\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{ac})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)-\sum_{h_{1}=h_{ac}+1}^{d}2B^{2}(h_{1})\zeta(h_{ac})\zeta(h_{1})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &-\sum_{h_{1}=h_{ac}+1}^{d-1}2B^{2}(h_{1})\zeta(h_{ac})\zeta(h_{1})\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta^{2}(h_{1})+\zeta^{2}(h_{ac})\right)\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d}B^{2}(h_{1})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}B(h_{1})B(h_{2})\zeta^{2}(h_{ac})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ =\ &\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}B(h_{1})B(h_{2})\left(\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(h_{2})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d}B^{2}(h_{1})\left(\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(h_{1})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right).\end{aligned}$$
Furthermore,
$$\begin{aligned}-T_{1}+T_{3}\ =\ &\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}B(h_{1})B(h_{2})\left(\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(h_{2})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\sum_{h_{1}=h_{ac}+1}^{\boldsymbol{d-1}}\boldsymbol{\sum_{h_{2}=h_{1}+1}^{d}}B(h_{1})\boldsymbol{B(h_{2})}\left(\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(h_{1})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\boldsymbol{B^{2}(d)\left(\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(d)\right)\left(p_{d}-p_{h_{ac}}\right)}+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)\\ =\ &\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}B(h_{1})B(h_{2})\left(2\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(h_{2})-2\zeta(h_{ac})\zeta(h_{1})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &+\left(\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(d)\right)\left(p_{d}-p_{h_{ac}}\right)+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right),\end{aligned}$$
and
$$\begin{aligned}&-T_{1}+T_{3}-\left(p_{d}-p_{h_{ac}}\right)\left(\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(d)\right)+T_{2}\\ &\quad=\ \sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}B(h_{1})B(h_{2})\left(2\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(h_{2})-2\zeta(h_{ac})\zeta(h_{1})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\quad\qquad+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)\\ &\quad\qquad+\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\zeta(h_{1})\zeta(h_{2})\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\quad=\ \sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\left(\zeta(h_{1})\zeta(h_{2})+\zeta^{2}(h_{ac})-\zeta(h_{ac})\zeta(h_{2})-\zeta(h_{ac})\zeta(h_{1})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\quad\qquad+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)\\ &\quad=\ \sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\boldsymbol{\left(\zeta(h_{1})-\zeta(h_{ac})\right)\left(\zeta(h_{2})-\zeta(h_{ac})\right)}\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\quad\qquad+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right).\end{aligned}$$
Finally, we come back to Equation (A.17) and obtain
$$\begin{aligned}\Delta\rho\ =\ &(1+o(1))\Big(-T_{1}+T_{2}+T_{3}+\zeta^{2}(d)\left(p_{d-1}-p_{h_{ac}}\right)+\left(2\zeta(d-1)\zeta(d)-\zeta^{2}(d-1)\right)\left(p_{d}-p_{d-1}\right)\Big)\\ =\ &(1+o(1))\Big(\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\left(\zeta(h_{1})-\zeta(h_{ac})\right)\left(\zeta(h_{2})-\zeta(h_{ac})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\qquad+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)+\zeta^{2}(d)\left(p_{d-1}-p_{h_{ac}}\right)\\ &\qquad+\left(\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(d)\right)\left(p_{d}-p_{h_{ac}}\right)+\left(2\zeta(d-1)\zeta(d)-\zeta^{2}(d-1)\right)\left(p_{d}-p_{d-1}\right)\Big)\\ =\ &(1+o(1))\Big(\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\left(\zeta(h_{1})-\zeta(h_{ac})\right)\left(\zeta(h_{2})-\zeta(h_{ac})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\qquad+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)+\zeta^{2}(d)\left(p_{d-1}-p_{h_{ac}}\right)\\ &\qquad+\left(\zeta^{2}(h_{ac})-2\zeta(h_{ac})\zeta(d)\right)\left(\left(p_{d}\boldsymbol{-p_{d-1}}\right)+\left(\boldsymbol{p_{d-1}}-p_{h_{ac}}\right)\right)\\ &\qquad+\left(2\zeta(d-1)\zeta(d)-\zeta^{2}(d-1)\right)\left(p_{d}-p_{d-1}\right)\Big)\\ =\ &(1+o(1))\Bigg(\sum_{h_{1}=h_{ac}+1}^{d-1}\sum_{h_{2}=h_{1}+1}^{d}2B(h_{1})B(h_{2})\left(\zeta(h_{1})-\zeta(h_{ac})\right)\left(\zeta(h_{2})-\zeta(h_{ac})\right)\left(p_{h_{1}}-p_{h_{ac}}\right)\\ &\qquad+\sum_{h_{1}=h_{ac}+1}^{d-1}B^{2}(h_{1})\left(\zeta(h_{1})-\zeta(h_{ac})\right)^{2}\left(\bar{p}_{h_{1}+1}-p_{h_{ac}}\right)\\ &\qquad+2\left(\zeta(d-1)-\zeta(h_{ac})\right)\left(\zeta(d)-\frac{\zeta(d-1)+\zeta(h_{ac})}{2}\right)\left(p_{d}-p_{d-1}\right)\\ &\qquad+\left(\zeta(d)-\zeta(h_{ac})\right)^{2}\left(p_{d-1}-p_{h_{ac}}\right)\Bigg),\end{aligned}$$
and this last expression establishes Equation (A.16).
Appendix B Proofs of Sections 4 and 5
B.1 Proof of Theorem 3
Let us first recall some notation and results for the general SBM. Let $\pi\in(0,1)^{K}$ be a probability vector and $p\in(0,1)^{K\times K}$ a symmetric matrix. We write $G\sim{\sf SBM}(N,\pi,p)$ if
- each node $i\in[N]$ of $G$ is assigned to a unique community $C_{k}$, with $k\sim\mathrm{Multi}(1,[K],\pi)$;
- two nodes $i\in C_{k}$ and $j\in C_{\ell}$ are connected with probability $p_{k\ell}$.
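As a concrete illustration of this generative model, here is a minimal sampler. This sketch is ours, not the paper's: the function name `sample_sbm`, its signature, and the fixed seed are illustrative choices.

```python
import random

def sample_sbm(n, pi, p, seed=0):
    """Sample G ~ SBM(n, pi, p): draw each node's community from
    Multinomial(pi), then connect nodes in communities k and l
    independently with probability p[k][l]."""
    rng = random.Random(seed)
    labels = rng.choices(range(len(pi)), weights=pi, k=n)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p[labels[i]][labels[j]]:
                adj[i][j] = adj[j][i] = 1
    return labels, adj

# Two balanced communities, assortative: dense within, sparse across.
labels, adj = sample_sbm(200, [0.5, 0.5], [[0.5, 0.05], [0.05, 0.5]])
```

On such an assortative instance, the empirical within-community edge density exceeds the across-community density, which is the signal the recovery algorithms below exploit.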
In the following, we denote by $\mathcal{A}=\{A_{1},\cdots,A_{t}\}$ a collection of $t$ non-empty, non-overlapping subsets of $[K]$ such that $\cup_{r=1}^{t}A_{r}=[K]$. An algorithm exactly recovers $\mathcal{A}$ if, with probability $1-o(1)$, it assigns each node $i$ of $G$ to the element of $\{A_{1},\cdots,A_{t}\}$ that contains its true community (up to a relabelling of the $A_{r}$'s). For two non-overlapping subsets $A,B\subset[K]$, we denote by $\mathrm{CH}\left(A,B\right)=\mathrm{CH}(A,B,\pi,p)$ the quantity
$$\mathrm{CH}\left(A,B\right)\ =\ \min_{\substack{a\in A\\ b\in B}}\ \sup_{t\in(0,1)}(1-t)\sum_{c=1}^{K}\pi_{c}\operatorname{D}_{t}\left(\operatorname{Ber}(p_{ac})\,\|\,\operatorname{Ber}(p_{bc})\right),$$
and by $I(\mathcal{A})=I(\mathcal{A},\pi,p)$ the quantity
$$I(\mathcal{A})\ =\ \min_{A_{r}\neq A_{r'}\in\mathcal{A}}\mathrm{CH}\left(A_{r},A_{r'}\right).$$
We define the finest partition of $[K]$ with threshold $\tau$ as the partition $\mathcal{A}^{*}_{\tau}$ of $[K]$ given by
$$\mathcal{A}^{*}_{\tau}\ =\ \operatorname*{arg\,max}_{\mathcal{A}}\left\{\left|\mathcal{A}\right|\colon I(\mathcal{A})>\tau\right\}.$$
It is the partition of $[K]$ into the largest number of subsets, among all partitions satisfying $I(\mathcal{A})>\tau$. Note that if $\tau$ is too large, such a partition might not exist. The following theorem holds.
Theorem 4.
Let $G\sim{\sf SBM}(N,\pi,p)$ and let $\mathcal{A}$ be a partition of $[K]$ into $t$ non-empty, non-overlapping elements. Suppose that no two rows of $p$ are equal. The following holds.
- (i) if $I(\mathcal{A})<(1-\Omega(1))\,N^{-1}\log N$, then no algorithm can exactly recover $\mathcal{A}$;
- (ii) the agnostic-degree-profiling algorithm of [abbe2015recovering] exactly recovers the finest partition with threshold $\tau=(1+\Omega(1))\,N^{-1}\log N$.
Proof.
The proof of point (i) can be found in [abbe2015community, Theorem 1], while point (ii) corresponds to [abbe2015recovering, Theorem 4]. ∎
Finally, we have the following lemma for the estimation of the link probabilities $p_{ab}$.
Lemma 4.
Let $\widehat{\mathcal{C}}$ be an exact estimator of $\mathcal{C}$ and suppose that $\min_{u\in\mathcal{T}}p(u)=\omega(N^{-2})$. Then with high probability $\rho(\widehat{C}_{a},\widehat{C}_{b})=(1+o(1))\,p_{ab}$.
Proof.
We assume that the permutation $\tau$ in the definition of the loss function (Equation (3.1)) is the identity. Furthermore, we shorten $\rho(\widehat{C}_{a},\widehat{C}_{b})$ to $\hat{p}_{ab}$. Since $\widehat{\mathcal{C}}$ is an exact estimator of $\mathcal{C}$, we have for $N$ large enough that $C_{a}=\widehat{C}_{a}$ for all $a\in\mathcal{L}_{\mathcal{T}}$. Hence,
$$\hat{p}_{ab}\ =\ \frac{\sum_{i\in C_{a},\,j\in C_{b}}A_{ij}}{|C_{a}|\cdot|C_{b}|}.$$
Since $\sum_{i\in C_{a},j\in C_{b}}A_{ij}\sim\operatorname{Bin}(|C_{a}|\cdot|C_{b}|,p_{ab})$, with $p_{ab}=\omega(N^{-2})$ and $|C_{a}|,|C_{b}|=\Theta(N)$, the concentration of the binomial distribution yields $\hat{p}_{ab}=(1+o(1))\,p_{ab}$. ∎
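The plug-in estimator and its binomial concentration are easy to check numerically. The sketch below is our own illustration (the names `estimate_p`, the block sizes, and the tolerance are arbitrary choices, not part of the paper):

```python
import random

def estimate_p(adj, ca, cb):
    """Plug-in estimator rho(C_a, C_b) of Lemma 4: observed edge count
    between C_a and C_b divided by |C_a| * |C_b|."""
    edges = sum(adj[i][j] for i in ca for j in cb)
    return edges / (len(ca) * len(cb))

rng = random.Random(0)
n, p_ab = 300, 0.2
ca = list(range(n))
cb = list(range(n, 2 * n))
adj = [[0] * (2 * n) for _ in range(2 * n)]
for i in ca:
    for j in cb:
        if rng.random() < p_ab:
            adj[i][j] = adj[j][i] = 1

p_hat = estimate_p(adj, ca, cb)
# Binomial concentration: p_hat = (1 + o(1)) * p_ab with high probability;
# here the standard deviation is sqrt(p_ab * (1 - p_ab) / n**2) ~ 1.3e-3.
```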
We can now prove Theorem 3.
Proof of Theorem 3.
Since the HSBM is a special instance of the general SBM, in which the communities are indexed by elements of $\mathcal{L}_{\mathcal{T}}$ instead of elements of $[K]$, we can directly apply Theorem 4. The set $\mathcal{S}_{q}$ of nodes at depth $q$ naturally forms a partition $\mathcal{A}=(A_{t})_{t\in\mathcal{S}_{q}}$ of $\mathcal{L}_{\mathcal{T}}$ as follows: $\ell\in A_{t}$ iff $\mathrm{lca}(t,\ell)=t$. Exactly recovering this partition is equivalent to exactly recovering $\mathrm{sc}(q,\mathcal{C},\mathcal{T})$, and therefore point (i) of Theorem 3 follows from point (i) of Theorem 4.
Conversely, point (ii) of Theorem 4 implies that we can recover the finest partition, which in the case of an HSBM corresponds to the largest depth $q^{*}$ such that $I_{q^{*}}>(1+\Omega(1))\,N^{-1}\log N$. Since $q\mapsto I_{q}$ is non-increasing, every depth $q\leq q^{*}$ also verifies this condition. Moreover, using Lemma 4, we proceed as in the proof of Theorem 1 to show that we recover the tree $\mathcal{T}[\mathcal{S}_{\leq q^{*}}]$. In particular, we can recover the super-communities at every depth $q\leq q^{*}$, which are exactly the depths verifying $I_{q}>(1+\Omega(1))\,N^{-1}\log N$. ∎
B.2 Proof of Lemma 1
Proof of Lemma 1.
Let $a,b$ be two leaves of $\mathcal{T}$ whose least common ancestor is at a depth $s$ strictly less than $q$, i.e., $|\mathrm{lca}(a,b)|=s<q$. For any leaf $c\in\mathcal{L}_{\mathcal{T}}$ we have $p_{ac}=p_{bc}$ if $c\not\in\mathcal{L}_{\mathcal{T}[u]}$, where $u=\mathrm{lca}(a,b)$. Therefore, the sum in (4.1) can be restricted to $c\in\mathcal{L}_{\mathcal{T}[u]}$, so that
$$\mathrm{CH}\left(a,b\right)\ =\ \frac{1}{K}\sup_{t\in(0,1)}(1-t)\sum_{c\in\mathcal{L}_{\mathcal{T}[u]}}\operatorname{D}_{t}\left(\operatorname{Ber}(p_{ac})\,\|\,\operatorname{Ber}(p_{bc})\right).$$
In the following, we let $u=\mathrm{lca}(a,b)$ and $s=|u|$. For any two nodes $v,w\in\mathcal{T}$, we denote by $\mathrm{sim}(v,w)$ the depth of their least common ancestor, that is, $\mathrm{sim}(v,w)=|\mathrm{lca}(v,w)|$. Finally, we denote by $\Gamma_{a,b}(v,w)$ the set of leaves of $\mathcal{T}[u]$ whose common ancestor with $a$ is $v$ and whose common ancestor with $b$ is $w$, i.e.,
$$\Gamma_{a,b}(v,w)\ =\ \left\{c\in\mathcal{L}_{\mathcal{T}[u]}\colon\mathrm{lca}(a,c)=v\text{ and }\mathrm{lca}(b,c)=w\right\}.$$
We have
$$\left|\Gamma_{a,b}(v,w)\right|\ =\ \begin{cases}1&\text{if }\mathrm{sim}(a,v)=d\text{ and }\mathrm{sim}(b,w)=s\ \text{(equivalently, }v=a\text{ and }w=u\text{)},\\ 1&\text{if }\mathrm{sim}(a,v)=s\text{ and }\mathrm{sim}(b,w)=d\ \text{(equivalently, }v=u\text{ and }w=b\text{)},\\ 2^{k-1}&\text{if }\mathrm{sim}(a,v)=d-k\text{ and }\mathrm{sim}(b,w)=s\text{ for some }k\in[d-s-1],\\ 2^{k-1}&\text{if }\mathrm{sim}(a,v)=s\text{ and }\mathrm{sim}(b,w)=d-k\text{ for some }k\in[d-s-1],\\ 0&\text{otherwise.}\end{cases}$$
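This case analysis can be verified directly on a small complete binary tree. In the sketch below (our own encoding, chosen for illustration), each leaf is a bit string and the similarity of two leaves is the length of their longest common prefix:

```python
from collections import Counter
from itertools import product

def sim(x, y):
    """Depth of the least common ancestor of two leaves encoded as bit
    strings: the length of their longest common prefix."""
    k = 0
    while k < len(x) and x[k] == y[k]:
        k += 1
    return k

d, s = 4, 1                                 # tree depth d, lca(a, b) at depth s < d
a = (0,) * d                                # leaf a
b = (0,) * s + (1,) + (0,) * (d - s - 1)    # leaf b, with sim(a, b) = s

# Leaves of the subtree T[u], u = lca(a, b): common prefix of length >= s with a.
subtree = [c for c in product((0, 1), repeat=d) if sim(c, a) >= s]

# |Gamma_{a,b}(v, w)| grouped by the pair (sim(a, c), sim(b, c)).
counts = Counter((sim(a, c), sim(b, c)) for c in subtree)
# e.g. 2**(k-1) leaves have sim(a, c) = d - k and sim(b, c) = s,
# for each k in [d - s - 1], matching the display above.
```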
Moreover, for any $c\in\Gamma_{a,b}(v,w)$ we have $p_{ac}=p(\mathrm{lca}(a,c))=p_{\mathrm{sim}(a,v)}$ and similarly $p_{bc}=p_{\mathrm{sim}(b,w)}$. Thus,

$$\mathrm{CH}(a,b) \ = \ \frac{1}{K}\sup_{t\in(0,1)}(1-t)\left\{\operatorname{D}_{t}(p_{d}\,\|\,p_{s})+\operatorname{D}_{t}(p_{s}\,\|\,p_{d})+\sum_{k=1}^{d-s-1}2^{k-1}\left[\operatorname{D}_{t}(p_{d-k}\,\|\,p_{s})+\operatorname{D}_{t}(p_{s}\,\|\,p_{d-k})\right]\right\},\tag{B.1}$$

where we shortened $\operatorname{D}_{t}(\operatorname{Ber}(p)\,\|\,\operatorname{Ber}(q))$ to $\operatorname{D}_{t}(p\,\|\,q)$.
Finally, let $P,Q$ be two probability distributions. By the concavity of $t\mapsto(1-t)\operatorname{D}_{t}(P\,\|\,Q)$ and the relation $(1-t)\operatorname{D}_{t}(P\,\|\,Q)=t\operatorname{D}_{1-t}(Q\,\|\,P)$ (see for example [vanErven2014renyi]), the function inside the $\sup$ in Equation (B.1) is concave and symmetric around $t=1/2$. Therefore the $\sup$ in Equation (B.1) is achieved at $t=1/2$. Hence,

$$\mathrm{CH}(a,b) \ = \ \frac{1}{K}\left[\operatorname{D}_{1/2}(p_{s}\,\|\,p_{d})+\sum_{k=1}^{d-s-1}2^{k-1}\operatorname{D}_{1/2}(p_{s}\,\|\,p_{d-k})\right] \ = \ H_{s}.$$
This last quantity $H_{s}$ depends on $a,b$ only through their similarity $\mathrm{sim}(a,b)=s$. Moreover, since the network is assortative we have $p_{s}<p_{s+1}$, and thus $\operatorname{D}_{1/2}(p_{s}\,\|\,p_{\ell})>\operatorname{D}_{1/2}(p_{s+1}\,\|\,p_{\ell})$ for $\ell\geq s+1$. Therefore $s\mapsto H_{s}$ is decreasing, and consequently

$$I_{q} \ = \ \min_{\substack{a\neq b\in\mathcal{L}_{\mathcal{T}}\\ \mathrm{sim}(a,b)<q}}\mathrm{CH}(a,b) \ = \ \min_{s<q}H_{s} \ = \ H_{q-1}.$$
∎
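Both ingredients of the proof above, namely that the supremum defining $\mathrm{CH}(a,b)$ is attained at $t=1/2$ and that $s\mapsto H_{s}$ is decreasing, are easy to verify numerically. A minimal sketch (the probability values below are illustrative choices of ours, not parameters from the paper):

```python
import numpy as np

def renyi(t, p, q):
    # Renyi divergence of order t between Ber(p) and Ber(q)
    return np.log(p**t * q**(1 - t) + (1 - p)**t * (1 - q)**(1 - t)) / (t - 1)

def d_half(p, q):
    # Renyi divergence of order 1/2, which is symmetric in (p, q)
    return -2.0 * np.log(np.sqrt(p * q) + np.sqrt((1 - p) * (1 - q)))

# (1-t)[D_t(p||q) + D_t(q||p)] is concave and symmetric around t = 1/2,
# so its supremum is attained at t = 1/2, where it equals D_{1/2}(p, q).
p, q = 0.3, 0.6
ts = np.linspace(0.01, 0.99, 99)
g = (1 - ts) * np.array([renyi(t, p, q) + renyi(t, q, p) for t in ts])
t_star = ts[np.argmax(g)]

# H_s for an illustrative assortative depth-3 tree (p_0 < p_1 < p_2 < p_3)
d, K = 3, 2**3
pr = [0.1, 0.2, 0.3, 0.4]
def H(s):
    return (d_half(pr[s], pr[d])
            + sum(2**(k - 1) * d_half(pr[s], pr[d - k])
                  for k in range(1, d - s))) / K
```

On this grid the maximizer sits at $t=1/2$ and $H_0 > H_1 > H_2$, matching the monotonicity argument.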
B.3 Comparing Top-down versus Bottom-up Conditions
Lemma 5.
For any $q\in\{1,\cdots,d-1\}$ we have $J_{q}^{\mathrm{bu}}>J_{q}^{\mathrm{td}}$. When $q=d$, we have $J_{d}^{\mathrm{bu}}=J_{d}^{\mathrm{td}}$.
Proof of Lemma 5.
We have

$$2^{d}J_{q}^{\mathrm{bu}} \ = \ a_{d}+2^{d-q}a_{q-1}+\sum_{k=1}^{d-q}2^{k-1}a_{d-k}-2\sqrt{a_{q-1}}\left(\sqrt{a_{d}}+\sum_{k=1}^{d-q}2^{k-1}\sqrt{a_{d-k}}\right),$$

$$2^{d}J_{q}^{\mathrm{td}} \ = \ a_{d}+2^{d-q}a_{q-1}+\sum_{k=1}^{d-q}2^{k-1}a_{d-k}-2\sqrt{2^{d-q}a_{q-1}}\,\sqrt{a_{d}+\sum_{k=1}^{d-q}2^{k-1}a_{d-k}}.$$
Hence,

$$2^{d}\left(J_{q}^{\mathrm{bu}}-J_{q}^{\mathrm{td}}\right) \ = \ 2\sqrt{a_{q-1}}\left(2^{\frac{d-q}{2}}\sqrt{a_{d}+\sum_{k=1}^{d-q}2^{k-1}a_{d-k}}-\left(\sqrt{a_{d}}+\sum_{k=1}^{d-q}2^{k-1}\sqrt{a_{d-k}}\right)\right) \ = \ 2\sqrt{a_{q-1}}\,(D-E) \ = \ \frac{2\sqrt{a_{q-1}}}{D+E}\left(D^{2}-E^{2}\right),$$
where

$$D \ = \ 2^{\frac{d-q}{2}}\sqrt{a_{d}+\sum_{k=1}^{d-q}2^{k-1}a_{d-k}}\qquad\text{and}\qquad E \ = \ \sqrt{a_{d}}+\sum_{k=1}^{d-q}2^{k-1}\sqrt{a_{d-k}}.$$
Since $\frac{2\sqrt{a_{q-1}}}{D+E}>0$, we focus on showing $D\geq E$.

First, we notice that when $q=d$ we have $D=E=\sqrt{a_{d}}$, and hence $J_{d}^{\mathrm{td}}=J_{d}^{\mathrm{bu}}$.
Next, when $d-q\geq 1$, we have

$$\begin{aligned}D^{2}-E^{2} \ &= \ 2^{d-q}\left(a_{d}+\sum_{k=1}^{d-q}2^{k-1}a_{d-k}\right)-\left(\sqrt{a_{d}}+\sum_{k=1}^{d-q}2^{k-1}\sqrt{a_{d-k}}\right)^{2}\\ &= \ \left(2^{d-q}-1\right)a_{d}+\sum_{k=1}^{d-q}2^{k-1}\left(2^{d-q}-2^{k-1}\right)a_{d-k}-\sum_{k=1}^{d-q}2^{k}\sqrt{a_{d}}\sqrt{a_{d-k}}-\sum_{k=1}^{d-q-1}\sum_{l>k}^{d-q}2^{k+l-1}\sqrt{a_{d-k}}\sqrt{a_{d-l}}.\end{aligned}$$
Noticing that $\sum_{k=1}^{d-q}2^{k-1}=2^{d-q}-1$ and that $\sum_{l=1,\,l\neq k}^{d-q}2^{k+l-2}=2^{k-1}\left(\sum_{l=1}^{d-q}2^{l-1}-2^{k-1}\right)=2^{k-1}\left(2^{d-q}-2^{k-1}-1\right)$ leads to

$$\begin{aligned}D^{2}-E^{2} \ &= \ \left(\sum_{k=1}^{d-q}2^{k-1}a_{d}+\sum_{k=1}^{d-q}2^{k-1}a_{d-k}-2\sum_{k=1}^{d-q}2^{k-1}\sqrt{a_{d}}\sqrt{a_{d-k}}\right)+\left(\sum_{k=1}^{d-q}\sum_{\substack{l=1\\ l\neq k}}^{d-q}2^{k+l-2}a_{d-k}-2\sum_{k=1}^{d-q-1}\sum_{l>k}^{d-q}2^{k+l-2}\sqrt{a_{d-k}}\sqrt{a_{d-l}}\right)\\ &= \ \sum_{k=1}^{d-q}2^{k-1}\left(\sqrt{a_{d}}-\sqrt{a_{d-k}}\right)^{2}+\sum_{k=1}^{d-q-1}\sum_{l>k}^{d-q}2^{k+l-2}\left(\sqrt{a_{d-k}}-\sqrt{a_{d-l}}\right)^{2}.\end{aligned}$$
Using the network’s assortativity, we conclude that this last quantity is strictly greater than zero, and hence J q bu>J q td J_{q}^{\mathrm{bu}}>J_{q}^{\mathrm{td}} for all q≤d−1 q\leq d-1. ∎
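The comparison in Lemma 5 can also be checked numerically from the two displayed expressions for $2^{d}J_{q}^{\mathrm{bu}}$ and $2^{d}J_{q}^{\mathrm{td}}$. A minimal sketch, with an illustrative assortative sequence $a_{0}<\cdots<a_{d}$ of our choosing:

```python
import math

def J(a, q, d, flavor):
    # Evaluate J_q^{bu} or J_q^{td} from the closed-form expressions,
    # where a = [a_0, ..., a_d] is the assortative rate sequence.
    s1 = sum(2**(k - 1) * a[d - k] for k in range(1, d - q + 1))
    s2 = sum(2**(k - 1) * math.sqrt(a[d - k]) for k in range(1, d - q + 1))
    common = a[d] + 2**(d - q) * a[q - 1] + s1
    if flavor == "bu":
        cross = 2 * math.sqrt(a[q - 1]) * (math.sqrt(a[d]) + s2)
    else:  # "td"
        cross = 2 * math.sqrt(2**(d - q) * a[q - 1]) * math.sqrt(a[d] + s1)
    return (common - cross) / 2**d

a = [1, 2, 4, 8, 16]   # illustrative assortative values: a_0 < a_1 < ... < a_4
d = 4
```

For every intermediate depth $q<d$ the bottom-up quantity strictly dominates the top-down one, and the two coincide at $q=d$, as the lemma states.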
Appendix C Additional Numerical Experiments
C.1 Synthetic Data Sets
C.1.1 Ternary Tree SBMs
As the hierarchical community structure cannot always be represented by a binary tree, we also perform experiments on ternary-tree stochastic block models of depth 3 (the ternary tree is drawn in Figure 7(a)), with 100 nodes in each bottom cluster, where the probability of an edge between two nodes whose lowest common ancestor has depth $k$ is $p_{k}=a_{k}\log N/N$.
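A graph from this model can be sampled directly from the definition. The following sketch (our own illustration, using the $a_{k}$ values from Figure 7) indexes the 27 bottom clusters by base-3 labels, builds the cluster-level probability matrix from lowest-common-ancestor depths, and draws a symmetric adjacency matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
depth, arity, n_per = 3, 3, 100
K = arity ** depth                         # 27 bottom clusters
N = K * n_per                              # 2700 nodes
a = [10, 30, 40, 130]                      # a_0, ..., a_3 (Figure 7 values)
p = [min(1.0, ak * np.log(N) / N) for ak in a]

def lca_depth(x, y):
    # depth of the lowest common ancestor of bottom clusters x and y,
    # i.e. the length of the common prefix of their base-3 labels
    for s in range(depth):
        if x // arity**(depth - 1 - s) != y // arity**(depth - 1 - s):
            return s
    return depth

labels = np.repeat(np.arange(K), n_per)            # cluster of each node
P = np.array([[p[lca_depth(x, y)] for y in range(K)] for x in range(K)])
probs = P[np.ix_(labels, labels)]                  # node-level edge probabilities
upper = np.triu(rng.random((N, N)) < probs, k=1)   # sample each pair once
A = (upper | upper.T).astype(np.int8)              # symmetric, no self-loops
```

With these assortative rates, within-cluster blocks of `A` are visibly denser than blocks whose clusters only share the root.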
We first show in Figure 7 the dendrograms and trees obtained by the top-down and bottom-up algorithms. Although both algorithms generate binary trees, ternary structures appear in Figures 7(b) and 7(c). Nevertheless, we observe in Figure 7(c) that the dendrogram obtained by the top-down algorithm shows some inversions.
Next, we proceed as in Section 6.1.1 by fixing $a_{0}$ to 40 and $a_{3}$ to 100, and by varying the values of $a_{1}$ and $a_{2}$ from $a_{0}$ to $a_{3}$ (under the constraint $a_{1}<a_{2}$). We observe in Figure 8 differences in the performance of top-down and bottom-up. Indeed, in this setting, the bottom-up algorithm recovers communities exactly up to the theoretical thresholds. Moreover, the accuracy obtained by top-down is lower than that obtained by bottom-up.
(a)Ground truth tree
(b)bottom-up
(c)top-down
Figure 7: (a) A ternary tree of depth 3 used as ground truth. (b)-(c) Dendrograms obtained by the bottom-up and top-down algorithms on a ternary-tree SBM of depth 3 with $N=2700$ and interaction probabilities $p_{k}=a_{k}\log N/N$, where $(a_{0},a_{1},a_{2},a_{3})=(10,30,40,130)$.
Figure 8: Performance of the bottom-up and top-down algorithms on ternary-tree SBMs of depth 3 with $N=2700$ nodes and interaction probabilities $p_{k}=a_{k}\log N/N$, where $a_{0}=10$ and $a_{3}=130$, as a function of $a_{1}$ and $a_{2}$. We vary $a_{1}\leq a_{2}$ from $a_{0}$ to $a_{3}$. The performance of the algorithms is measured by the accuracy at each depth (averaged over 10 realizations), and the exact recovery thresholds at different depths are shown as coloured solid lines. Exact recovery is shown with large circles and non-exact recovery with small crosses.
C.1.2 Unbalanced HSBM
We evaluate the performance of HCD algorithms on HSBMs whose binary tree is not necessarily full and balanced. As in the BTSBM, we assume that the depth within the tree $\mathcal{T}$ determines the link probabilities, i.e., $p(u)=p_{|u|}$ for all $u\in\mathcal{T}$. Because the tree is unbalanced, the bottom communities no longer all have the same depth. In our experiments, a bottom community at depth $k$ has $100\cdot 2^{5-k}$ nodes, so that the total number of nodes is $N=3200$.
To assess the accuracy of tree recovery, we define the similarity matrix $S(\mathcal{T},\mathcal{C})$ as the $N\times N$ matrix such that for $i\in C_{a}$ and $j\in C_{b}$ (with $a,b\in\mathcal{L}_{\mathcal{T}}$) we have

$$\Big(S\left(\mathcal{T},\mathcal{C}\right)\Big)_{ij} \ = \ |\mathrm{lca}_{\mathcal{T}}(a,b)|,$$

where $|\mathrm{lca}_{\mathcal{T}}(a,b)|$ is the depth of the lowest common ancestor of $C_{a}$ and $C_{b}$ in the tree $\mathcal{T}$. The tree recovery error is then defined as

$$\frac{\left\|S(\widehat{\mathcal{T}},\widehat{\mathcal{C}})-S(\mathcal{T},\mathcal{C})\right\|_{F}^{2}}{\left\|S(\mathcal{T},\mathcal{C})\right\|_{F}^{2}}.$$
This metric quantifies the discrepancy between the estimated and true similarity matrices.
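Assuming each bottom community is identified by its root-to-leaf label string, so that the depth of a lowest common ancestor is a common-prefix length, the similarity matrix and the tree recovery error can be sketched as follows (the function and variable names are ours):

```python
import numpy as np

def lca_depth(u, v):
    # depth of the lowest common ancestor = common-prefix length of the labels
    d = 0
    while d < min(len(u), len(v)) and u[d] == v[d]:
        d += 1
    return d

def similarity_matrix(leaves, labels):
    # leaves[a]: root-to-leaf label string of bottom community a
    # labels[i]: bottom community of node i
    K = len(leaves)
    M = np.array([[lca_depth(leaves[a], leaves[b]) for b in range(K)]
                  for a in range(K)])
    return M[np.ix_(labels, labels)]       # the N x N matrix S(T, C)

def tree_error(S_hat, S_true):
    # squared Frobenius discrepancy, relative to the true similarity matrix
    return np.linalg.norm(S_hat - S_true) ** 2 / np.linalg.norm(S_true) ** 2
```

For instance, an unbalanced tree with leaves `['00', '01', '1']` has one bottom community at depth 1 and two at depth 2; a perfect estimate yields error 0, while a flat three-community estimate does not.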
Figures 9 and 10 compare different HCD algorithms on HSBMs whose corresponding unbalanced trees are shown in Figures 9(a) and 10(a). We observe that the bottom-up and top-down approaches perform similarly well, though one may outperform the other depending on $\beta$ and the chosen evaluation metric.
(a)Unbalanced tree
(b)AMI
(c)Tree similarity
(d)Estimated K K
Figure 9: Performance of the top-down and bottom-up algorithms on HSBMs, where the tree is given in Figure 9(a), with $N=3200$ nodes and $p_{k}=64\,\beta^{5-k}\,\frac{\log N}{N}$. The results are averaged over 100 realizations; error bars show the standard error but are typically smaller than the symbols.
(a)Unbalanced tree
(b)AMI
(c)Tree similarity
(d)Estimated K K
Figure 10: Performance of the top-down and bottom-up algorithms on HSBMs, where the tree is given in Figure 10(a), with $N=3200$ nodes and $p_{k}=144\,\beta^{5-k}\,\frac{\log N}{N}$. The results are averaged over 100 realizations; error bars show the standard error but are typically smaller than the symbols.
C.1.3 Increased Number of Bottom Communities
We evaluate the accuracy of HCD algorithms on a BTSBM of depth 6, resulting in $K=2^{6}=64$ bottom communities. Figure 11 compares the performance of different HCD algorithms on this BTSBM. We observe that the bottom-up and top-down approaches perform similarly well, though one may outperform the other depending on $\beta$ and the chosen evaluation metric.
(a)AMI
(b)Tree similarity
(c)Estimated K K
Figure 11: Performance of the top-down and bottom-up algorithms on BTSBMs of depth 6, with $N=6400$ nodes ($K=64$ and 100 nodes per community) and $p_{k}=81\,\beta^{6-k}\,\frac{\log N}{N}$. The results are averaged over 10 realizations; error bars show the standard error but are typically smaller than the symbols.
C.1.4 Another Bottom-up Approach
In this subsection, we consider another type of hierarchical community detection algorithm that first identifies bottom communities and then aggregates them. The method, called synthesis [fang2023t], differs from linkage-based approaches by employing the (sparse) neighbor-joining algorithm for aggregation. As a result, synthesis can handle more general tree structures than the binary rooted trees produced by the bottom-up and top-down methods, and it outputs unrooted, potentially non-binary trees.
Figure 12 compares the performance of our focal bottom-up algorithm with synthesis. We evaluate performance using three metrics: bottom community accuracy, the Robinson-Foulds distance from the ground-truth tree, and the recovery rate of the true hierarchical structure. The Robinson-Foulds distance measures the distance between two (potentially unrooted) trees and is normalized to range from 0 to 1. The tree recovery rate is defined as the proportion of outputs with a Robinson-Foulds distance of zero. Although Fang and Rohe [fang2023t] use sparse neighbor-joining to accommodate non-binary trees, we apply the standard neighbor-joining algorithm for a fair comparison, since our ground-truth tree structures are binary.
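For rooted trees, a simplified clade-based variant of this normalized Robinson-Foulds distance (the experiments use the standard unrooted version) can be sketched as follows, with trees encoded as nested tuples:

```python
def clades(tree):
    # collect the leaf set of every internal node except the root
    out = set()
    def rec(t):
        if not isinstance(t, tuple):          # a leaf
            return frozenset([t])
        s = frozenset().union(*(rec(c) for c in t))
        out.add(s)
        return s
    root = rec(tree)
    out.discard(root)                         # the full leaf set is uninformative
    return out

def rf_distance(t1, t2):
    # symmetric difference of the clade sets, normalized to [0, 1]
    c1, c2 = clades(t1), clades(t2)
    denom = len(c1) + len(c2)
    return len(c1 ^ c2) / denom if denom else 0.0

def recovery_rate(estimates, truth):
    # proportion of estimated trees at Robinson-Foulds distance zero
    return sum(rf_distance(t, truth) == 0.0 for t in estimates) / len(estimates)
```

For example, `(('a','b'),('c','d'))` and `(('a','c'),('b','d'))` share no non-trivial clade, so their normalized distance is 1, while identical trees are at distance 0.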
(a)Bottom accuracy
(b)Robinson-Foulds distance
(c)Recovery rate
Figure 12: Performance of the synthesis and bottom-up algorithms on BTSBMs of depth 5, with $N=6400$ nodes ($K=32$ and 200 nodes per community) and $p_{k-1}/p_{k}=0.36$. The results are averaged over 10 realizations; error bars show the standard error but are typically smaller than the symbols.
C.2 Real Data Sets
C.2.1 Military Inter-alliance
We next consider the network of military alliances between countries. The data is provided by the Alliance Treaty Obligations and Provisions (ATOP) project [leeds2002alliance]. We select the year 2018, as this is the most recent year available. We define two countries as allied if they share a defensive alliance (we do not consider non-aggression pacts, as those are more numerous and historically not always well respected). This leads to a network of 133 countries and 1391 alliances. Some important countries, such as India and Switzerland, are missing, as they do not share a defensive alliance with any other country. Moreover, the graph is not connected: a small component made of three countries (China, North Korea and Cuba) is disconnected from the rest of the world.
Figures 13 and 14 show the output of the bottom-up and top-down algorithms. Bottom-up predicts 7 bottom communities, which represent geopolitical alliances based on political affiliation and geography (among them European countries, Eurasian countries, Arabic countries, Western African countries, and Central/Southern African countries). The top level splits the graph's largest connected component into 3 clusters: Western countries, Eurasian countries, and African and Middle Eastern countries. While some of these clusters are also recovered by top-down HCD, the separation of African countries by the top-down algorithm appears worse.
(a)Highest depth.
(b)Middle depth (after 2 merges).
(c)Lowest depth (after 3 merges).
(d)Dendrogram.
Figure 13: Output of bottom-up algorithm on the military alliance network. The dendrogram does not show the disconnected component (China, Cuba, North Korea).
(a)Highest depth (8 clusters).
(b)Middle depth (6 clusters).
(c)Lowest depth (3 clusters).
(d)Dendrogram.
Figure 14: Output of top-down algorithm on the military alliance network. The dendrogram does not show the disconnected component (China, Cuba, North Korea).
C.2.2 Football Data Set
We also test HCD algorithms on the United States college (American) football dataset [girvan2002community]. This network represents the schedule of Division I games for the 2000 season. Each node in the network corresponds to a college football team, and edges represent the regular-season games between the teams. The teams are categorized into 11 conferences, within which games are more frequent; each conference has 8 to 12 teams. We exclude the "independent" teams, which do not belong to any conference. Since the original community labels appear to be based on the 2001 season, while the edges represent the games played during the 2000 season, we apply the same correction as in [evans2010clique].
The results obtained by the different HCD algorithms are given in Figure 15. First, we observe that all the algorithms perform well (the AMI for bottom-up, top-down, Paris, and Bayesian is respectively 0.962, 0.892, 0.965, and 0.976). However, top-down makes more errors than the other algorithms. Interestingly, the bottom-up, top-down, and Bayesian algorithms predict 10 clusters (more precisely, Bayesian detects 10.1 communities on average over 100 runs), as they tend to infer the Big West and Mountain West conferences as a single cluster. Finally, we can recover some geographical proximity among conferences from the hierarchy inferred by bottom-up. For example, Conference USA is composed of teams located in the Southern United States, while the Southeastern Conference's member institutions are located primarily in the South Central and Southeastern United States. Another example is that teams belonging to the Pacific Ten, Big West, and Mountain West are all located in the West, and these conferences are also close in the bottom-up dendrogram.
(a)bottom-up
(b)top-down
Figure 15: Output of the bottom-up and top-down algorithms on the football data set. The colours correspond to conferences, and grey edges indicate regular-season games between two teams. The hierarchical tree is drawn in black, and its root is marked by a star symbol.