Exact Matching in Correlated Networks with Node Attributes for Improved Community Recovery

X Proof of Theorem 6: Impossibility of Exact Matching in Correlated Contextual Stochastic Block Models

Proof: We consider the posterior distribution of $\pi \in S_n$ given $G_1, G_2$, along with the additional side i…
\[
\mathbb{P}\left(\pi_* = \pi \mid G_1, G_2, \pi_*\left\{[n] \backslash \mathcal{H}_*\right\}, \boldsymbol{\sigma}^1, \boldsymbol{\sigma}^2\right) \cdots
\]
… $\mathds{1}\left(\pi \in \mathcal{T}_*\right)$, where step $(b)$ rewrites the posterior as a ratio whose numerator is $\mathbb{P}(\pi_* = \pi \mid A, B)\,\mathbb{P}(\pi_* = \cdots)$ …
where $C_1$ and $C_2$ are constants that do not depend on $\pi$. The inequality $(a)$ holds from Definition 7, and the inequality $(b)$ holds since, given $\pi_* = \pi$ …
\[
\frac{d}{4}\log\frac{1}{1-\rho^{2}} \;\leq\; \log n - \frac{n s^{2}(p+q)}{2} - \log d - \omega(1) \;\leq\; \log\left|\mathcal{H}^{+}_{*}\right| - \log d.
\]
X-A Proof of Lemma 9

Proof: Let $n_1 = |V^{+}|$ and $n_2 = |V^{-}|$, and l…
\[
\operatorname{Var}\left(\left|\mathcal{H}^{+}_{*}\right|\right) = \mathbb{E}\left(\left|\mathcal{H}^{+}_{*}\right|^{2}\right) - \mathbb{E}\left(\left|\mathcal{H}^{+}_{*}\right|\right)^{2} \cdots
\]
\[
\mathbb{E}\left(\left|\mathcal{H}^{+}_{*}\right|\right) = n_{1}\left(1 - p s^{2}\right)^{n_{1}-1}\left(1 - q s^{2}\right)^{n_{2}} \cdots
\]
XI Proof of Theorem 7: Achievability of Exact Community Recovery in Correlated Contextual Stochastic Block Models

Recall that the graph $G_1$ consists of a database $X$ and an adjacency matrix $A$, while the graph $G_2$ consists of a database $Y$ and an adjacency matrix $B$ …
Theorem 14 (Theorem 4.1 in [5]). Assume that (27) holds. Let $G \sim \textnormal{CSBM}(n, p, q; R, d)$ with community labels $\boldsymbol{\sigma} : [n] \to \{+, -\}$ …
\[
\mathbb{P}\left(\mathbf{ov}\left(\hat{\boldsymbol{\sigma}}\left(G_{1} +_{\pi_{*}} G_{2}\right), \boldsymbol{\sigma}\right) \neq 1\right) = o(1).
\]
XII Proof of Theorem 8: Impossibility of Exact Community Recovery in Correlated Contextual Stochastic Block Models

Proof of Theorem 8. Suppose that (27) holds. First, we generate a graph $H$ with the following distributions for the edges and node attributes. Let $\boldsymbol{\sigma}^{H}$ be the community labels of the graph $H$. The probability of an ed…
Lemma 11. Suppose that (27) holds. If (29) holds, then for any estimator $\tilde{\boldsymbol{\sigma}}$, we have
\[
\mathbb{P}\left(\mathbf{ov}\left(\tilde{\boldsymbol{\sigma}}(H), \boldsymbol{\sigma}^{H}\right) = 1\right) = o(1).
\]
where
\[
\left(s_{01}, s_{10}, s_{11}\right) := \left(\frac{s(1-s)}{1-(1-s)^{2}},\ \frac{s(1-s)}{1-(1-s)^{2}},\ \frac{s^{2}}{1-(1-s)^{2}}\right) \cdots
\]
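Though not needed for the argument, a quick check (ours, not from the paper) confirms that the three ratios above always sum to one, consistent with reading them as the conditional probabilities of the subsampling patterns $(0,1)$, $(1,0)$, $(1,1)$ given that at least one of two independent Bernoulli($s$) draws succeeds:

```python
from fractions import Fraction

def split_probs(s):
    """(s01, s10, s11) as defined above, computed exactly with Fractions."""
    denom = 1 - (1 - s) ** 2
    return (s * (1 - s) / denom, s * (1 - s) / denom, s ** 2 / denom)

for s in (Fraction(1, 10), Fraction(1, 2), Fraction(9, 10)):
    s01, s10, s11 = split_probs(s)
    # Algebraically, 2s(1-s) + s^2 = 1 - (1-s)^2, so the ratios sum to one.
    assert s01 + s10 + s11 == 1
```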
XII-A Proof of Lemma 11

For two sequences $\{X_n\}$ and $\{Y_n\}$ of random variables and a deterministic sequence $\{r_n\}_{n=1}^{\infty} \subseteq (0, +\infty)$ …
Proof of Lemma 11. When (27) holds, Abbe et al. [5] demonstrated that the condition $\frac{(\sqrt{a} - \sqrt{b})^{2} + c}{2} < 1$ …
Lemma 12. For $a, b, c > 0$, we have
\[
I^{*}(a, b, c) = I(-1/2, a, b, c) = \sup_{t \in \mathbb{R}} I(t, a, b, c).
\]
Let the adjacency matrix of the graph $H$ be denoted by $J$, and let $Z := \left[(\boldsymbol{f}_{1}, \boldsymbol{g}_{1}), (\boldsymbol{f}_{2}, \boldsymbol{g}_{2}), \ldots, (\boldsymbol{f}_{n}, \boldsymbol{g}_{n})\right]^{\top} \in \mathbb{R}^{n \times 2d}$ …
Lemma 13 (Lemma F.3 in [5]). Suppose that $\mathcal{S}$ is a Borel space and $(\boldsymbol{\sigma}, \boldsymbol{X})$ is a random element in $\{\pm 1\}^{n} \times \mathcal{S}$. Let $\mathcal{F}$ …
Let $\mathcal{U} := \mathcal{H}(UU^{\top})$, where $\mathcal{H}(\cdot)$ represents the hollowing operator, which sets all diagonal entries of a square matrix to zero …
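As a minimal illustration (the helper name `hollow` is ours), the hollowing operator $\mathcal{H}(\cdot)$ simply zeroes the diagonal of a square matrix:

```python
def hollow(M):
    """Hollowing operator H(M): zero the diagonal of a square matrix,
    given as a list of equal-length rows; off-diagonal entries are kept."""
    n = len(M)
    assert all(len(row) == n for row in M), "M must be square"
    return [[0 if i == j else M[i][j] for j in range(n)] for i in range(n)]

U = [[1, 2], [3, 4]]
# Form U U^T, then hollow it, as in the definition of calligraphic U above.
UUt = [[sum(a * b for a, b in zip(r1, r2)) for r2 in U] for r1 in U]
print(hollow(UUt))  # UUt = [[5, 11], [11, 25]] -> [[0, 11], [11, 0]]
```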
In contrast, in our graph $H$, each node $i$ is assigned two correlated attributes $\boldsymbol{f}_{i}$ and $\boldsymbol{g}_{i}$, so given $\boldsymbol{\sigma}^{H}$ …
… $\frac{1}{1+\rho}\left\langle \sum_{i=1}^{n}\left(\boldsymbol{f}_{i} + \boldsymbol{g}_{i}\right)\sigma_{i},\ \boldsymbol{\mu}'\right\rangle$ …
Lemma 14. Suppose that (27) holds. For each given $i$, we have
\[
\left|\log\left(\frac{\mathbb{P}\left(\sigma^{H}_{i} = 1 \mid J, Z, \sigma^{H}_{-i}\right)}{\mathbb{P}\left(\sigma^{H}_{i} = -1 \mid J, Z, \sigma^{H}_{-i}\right)}\right) - \left[\left(\log(a'/b')\,J + \frac{2}{n + d/R'^{2}}\,\mathcal{U}\right)\boldsymbol{\sigma}^{H}\right]_{i}\right| = o_{\mathbb{P}}(\log n;\ \log n) \cdots
\]
Lemma 15. Suppose that (27) holds. Let $\boldsymbol{u}_{i} := \frac{1}{\sqrt{2+2\rho}}\left(\boldsymbol{f}_{i} + \boldsymbol{g}_{i}\right)$ …
\[
\mathbb{E}\,r\left(\hat{\boldsymbol{\sigma}}, \boldsymbol{\sigma}^{H}\right) \geq \frac{n-1}{3n-1}\,\mathbb{P}\left(f\left(\sigma^{H}_{1} \mid J, Z, \sigma^{H}_{-1}\right) < f\left(-\sigma^{H}_{1} \mid J, Z, \sigma^{H}_{-1}\right)\right).
\]
If $\mathcal{A}_{\varepsilon} \cap \mathcal{B}_{\varepsilon}$ holds, then it is easy to verify that $f\left(\sigma^{H}_{1} \mid J, Z, \sigma^{H}_{-1}\right) < f\left(-\sigma^{H}_{1} \mid J, Z, \sigma^{H}_{-1}\right)$ …
\[
\liminf_{n \to \infty} \mathbb{E}\,r\left(\hat{\boldsymbol{\sigma}}, \boldsymbol{\sigma}^{H}\right) \geq n^{-I^{*}(a', b', c')}.
\]
XIII Proof of Lemma 1

Recall that
\[
\mathcal{F}_{t} = \left\{\sum_{k=1}^{t} Z_{i_{k} i_{k}} \geq \sum_{k=1}^{t} Z_{i_{k} i_{k+1}}\right\} \cdots
\]
\begin{align*}
Z_{1,1} &= \|\boldsymbol{x}_{1} - \boldsymbol{y}_{1}\|^{2} = \left\|(\boldsymbol{\mu}\sigma_{1} + \boldsymbol{z}_{1}) - \left(\boldsymbol{\mu}\sigma_{1} + \rho\boldsymbol{z}_{1} + \sqrt{1-\rho^{2}}\,\boldsymbol{w}_{1}\right)\right\|^{2} = \left\|(1-\rho)\boldsymbol{z}_{1} - \sqrt{1-\rho^{2}}\,\boldsymbol{w}_{1}\right\|^{2};\\
Z_{2,2} &= \|\boldsymbol{x}_{2} - \boldsymbol{y}_{2}\|^{2} = \left\|(1-\rho)\boldsymbol{z}_{2} - \sqrt{1-\rho^{2}}\,\boldsymbol{w}_{2}\right\|^{2};\\
Z_{1,2} &= \|\boldsymbol{x}_{1} - \boldsymbol{y}_{2}\|^{2} = \left\|(\boldsymbol{\mu}\sigma_{1} + \boldsymbol{z}_{1}) - \left(\boldsymbol{\mu}\sigma_{2} + \rho\boldsymbol{z}_{2} + \sqrt{1-\rho^{2}}\,\boldsymbol{w}_{2}\right)\right\|^{2} = \left\|\boldsymbol{\mu}(\sigma_{1} - \sigma_{2}) + \boldsymbol{z}_{1} - \rho\boldsymbol{z}_{2} - \sqrt{1-\rho^{2}}\,\boldsymbol{w}_{2}\right\|^{2};\\
Z_{2,1} &= \|\boldsymbol{x}_{2} - \boldsymbol{y}_{1}\|^{2} = \left\|\boldsymbol{\mu}(\sigma_{2} - \sigma_{1}) + \boldsymbol{z}_{2} - \rho\boldsymbol{z}_{1} - \sqrt{1-\rho^{2}}\,\boldsymbol{w}_{1}\right\|^{2}.
\end{align*}
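Each identity above follows by substituting $\boldsymbol{y}_i = \boldsymbol{\mu}\sigma_i + \rho\boldsymbol{z}_i + \sqrt{1-\rho^2}\,\boldsymbol{w}_i$ and cancelling. A small numerical sketch (dimension, seed, and variable names are ours, chosen only for illustration) confirms two of them:

```python
import math, random

random.seed(0)
d, rho = 4, 0.6
mu = [random.gauss(0, 1) for _ in range(d)]
z = {i: [random.gauss(0, 1) for _ in range(d)] for i in (1, 2)}
w = {i: [random.gauss(0, 1) for _ in range(d)] for i in (1, 2)}
sigma = {1: +1, 2: -1}

def sq(v):  # squared Euclidean norm
    return sum(c * c for c in v)

# x_i = mu*sigma_i + z_i,  y_i = mu*sigma_i + rho*z_i + sqrt(1-rho^2)*w_i
r = math.sqrt(1 - rho ** 2)
x = {i: [mu[k] * sigma[i] + z[i][k] for k in range(d)] for i in (1, 2)}
y = {i: [mu[k] * sigma[i] + rho * z[i][k] + r * w[i][k] for k in range(d)] for i in (1, 2)}

# Z_{1,1} = ||x_1 - y_1||^2 should equal ||(1-rho) z_1 - sqrt(1-rho^2) w_1||^2
Z11 = sq([x[1][k] - y[1][k] for k in range(d)])
rhs11 = sq([(1 - rho) * z[1][k] - r * w[1][k] for k in range(d)])
assert math.isclose(Z11, rhs11)

# Z_{1,2} = ||x_1 - y_2||^2 should equal
# ||mu(sigma_1 - sigma_2) + z_1 - rho z_2 - sqrt(1-rho^2) w_2||^2
Z12 = sq([x[1][k] - y[2][k] for k in range(d)])
rhs12 = sq([mu[k] * (sigma[1] - sigma[2]) + z[1][k] - rho * z[2][k] - r * w[2][k]
            for k in range(d)])
assert math.isclose(Z12, rhs12)
```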
\[
\mathbb{P}(\mathcal{F}_{2}) = \mathbb{P}\left(Z_{1,1} + Z_{2,2} \geq Z_{1,2} + Z_{2,1}\right) \cdots
\]
… $\stackrel{(a)}{\leq} \mathbb{E}_{\boldsymbol{z}_{1}, \boldsymbol{z}_{2}} \exp\left(-\cdots \left\|\boldsymbol{z}_{1} - \boldsymbol{z}_{2}\right\|^{2}\right)$ …
\[
\mathbb{P}(\mathcal{F}_{2}) = \mathbb{P}\left(Z_{1,1} + Z_{2,2} \geq Z_{1,2} + Z_{2,1}\right) \cdots
\]
and we have $-\|\boldsymbol{\mu}\|^{2} \leq -\langle\boldsymbol{\mu}, \boldsymbol{z}_{i}\rangle \leq \|\boldsymbol{\mu}\|^{2}$ …
\[
\mathbb{P}(\mathcal{F}_{2}) \leq \mathbb{P}\left(\sqrt{1-\rho^{2}}\,\left\langle 2\boldsymbol{\mu} + \boldsymbol{z}_{1} - \boldsymbol{z}_{2},\ \boldsymbol{w}_{2} - \boldsymbol{w}_{1}\right\rangle \geq \rho\left\|2\boldsymbol{\mu} + \boldsymbol{z}_{1} - \boldsymbol{z}_{2}\right\|^{2}\right) \cdots
\]
… $\stackrel{(a)}{\leq} \mathbb{E}_{\boldsymbol{\zeta}_{1}, \boldsymbol{\zeta}_{2}} \exp\left(-\cdots \left\|\boldsymbol{\zeta}_{1} - \boldsymbol{\zeta}_{2}\right\|^{2}\right)$ …
\[
\sqrt{1-\rho^{2}}\sum_{i=1}^{t}\left\langle \boldsymbol{x}_{i} - \boldsymbol{x}_{i+1},\ \boldsymbol{w}_{i+1}\right\rangle \geq \sum_{i=1}^{t}\left[\frac{\|\boldsymbol{\mu}\|^{2}}{2}\left(\sigma_{i} - \sigma_{i+1}\right)^{2} + \rho\left\langle\boldsymbol{\mu}, \boldsymbol{z}_{i}\right\rangle\left(\sigma_{i} - \sigma_{i-1}\right) + \left\langle\boldsymbol{\mu}, \boldsymbol{z}_{i}\right\rangle\left(\sigma_{i} - \sigma_{i+1}\right) + \frac{\rho}{2}\left\|\boldsymbol{z}_{i} - \boldsymbol{z}_{i+1}\right\|^{2}\right] \cdots
\]
\[
\text{Right-hand side of (143)} \geq \sum_{i=1}^{t}\frac{\rho}{2}\left\|\boldsymbol{x}_{i} - \boldsymbol{x}_{i+1}\right\|^{2} = \frac{\rho}{2}\,\boldsymbol{x}^{\top}\left(\boldsymbol{L}^{C_{t}} \otimes \boldsymbol{I}_{d}\right)\boldsymbol{x}, \cdots
\]
\[
\mathbb{P}(\mathcal{F}_{t}) \leq \mathbb{P}\left(\sqrt{1-\rho^{2}}\sum_{i=1}^{t}\left\langle \boldsymbol{x}_{i} - \boldsymbol{x}_{i+1},\ \boldsymbol{w}_{i+1}\right\rangle \geq \frac{\rho}{2}\,\boldsymbol{x}^{\top}\left(\boldsymbol{L}^{C_{t}} \otimes \boldsymbol{I}_{d}\right)\boldsymbol{x}\right) \cdots
\]
… $\stackrel{(a)}{\leq} \mathbb{E}_{\boldsymbol{x}_{1}, \ldots, \boldsymbol{x}_{t}} \exp\left(-\frac{\rho\,\cdots}{\cdots}\right)$ …
where $\boldsymbol{v}$ is the concatenation of the vectors $\boldsymbol{\mu}\sigma_{i}$. The inequality $(a)$ holds by Lemma 19, the equality $(b)$ holds by Lemma 18, and the inequality $(c)$ …
both for the cases where $(\sigma_{1}, \sigma_{2}) = (+1, -1)$ and $(\sigma_{1}, \sigma_{2}) = (-1, +1)$ …
\[
\mathbb{P}(\mathcal{F}_{2}) \leq \mathbb{P}\left(\sqrt{1-\rho^{2}}\,\left\langle 2\boldsymbol{\mu} + \boldsymbol{z}_{1} - \boldsymbol{z}_{2},\ \boldsymbol{w}_{2} - \boldsymbol{w}_{1}\right\rangle \geq \lambda\left\|2\boldsymbol{\mu} + \boldsymbol{z}_{1} - \boldsymbol{z}_{2}\right\|^{2}\right) \cdots
\]
… $\stackrel{(a)}{\leq} \mathbb{E}_{\boldsymbol{\zeta}_{1}, \boldsymbol{\zeta}_{2}} \exp\left(-\cdots \left\|\boldsymbol{\zeta}_{1} - \boldsymbol{\zeta}_{2}\right\|^{2}\right)$ …
\[
\sqrt{1-\rho^{2}}\sum_{i=1}^{t}\left\langle \boldsymbol{x}_{i} - \boldsymbol{x}_{i+1},\ \boldsymbol{w}_{i+1}\right\rangle \geq \sum_{i=1}^{t}\left[\frac{\|\boldsymbol{\mu}\|^{2}}{2}\left(\sigma_{i} - \sigma_{i+1}\right)^{2} + \rho\left\langle\boldsymbol{\mu}, \boldsymbol{z}_{i}\right\rangle\left(\sigma_{i} - \sigma_{i-1}\right) + \left\langle\boldsymbol{\mu}, \boldsymbol{z}_{i}\right\rangle\left(\sigma_{i} - \sigma_{i+1}\right) + \frac{\rho}{2}\left\|\boldsymbol{z}_{i} - \boldsymbol{z}_{i+1}\right\|^{2}\right] \cdots
\]
\begin{align*}
\text{Right-hand side of (150)} &\geq \sum_{i=1}^{t}\frac{\lambda}{2}\left\|\boldsymbol{x}_{i} - \boldsymbol{x}_{i+1}\right\|^{2} = \frac{\lambda}{2}\,\boldsymbol{x}^{\top}\left(\boldsymbol{L}^{C_{t}} \otimes \boldsymbol{I}_{d}\right)\boldsymbol{x}\ \text{ if and only if}\\
\frac{\rho-\lambda}{2}\sum_{i=1}^{t}\left\|\boldsymbol{z}_{i} - \boldsymbol{z}_{i+1} + \boldsymbol{\mu}\sigma_{i} - \frac{1-\lambda}{\rho-\lambda}\boldsymbol{\mu}\sigma_{i+1}\right\|^{2} &\geq \frac{\rho-\lambda}{2}\|\boldsymbol{\mu}\|^{2}\sum_{i=1}^{t}\left(\sigma_{i} - \frac{1-\lambda}{\rho-\lambda}\sigma_{i+1}\right)^{2} - \frac{1-\lambda}{2}\|\boldsymbol{\mu}\|^{2}\sum_{i=1}^{t}\left(\sigma_{i} - \sigma_{i+1}\right)^{2}.
\end{align*}
\[
\mathbb{P}(\mathcal{F}_{t}) \leq \mathbb{P}\left(\sqrt{1-\rho^{2}}\sum_{i=1}^{t}\left\langle \boldsymbol{x}_{i} - \boldsymbol{x}_{i+1},\ \boldsymbol{w}_{i+1}\right\rangle \geq \frac{\lambda}{2}\,\boldsymbol{x}^{\top}\left(\boldsymbol{L}^{C_{t}} \otimes \boldsymbol{I}_{d}\right)\boldsymbol{x}\right) \cdots
\]
… $\stackrel{(a)}{\leq} \mathbb{E}_{\boldsymbol{x}_{1}, \ldots, \boldsymbol{x}_{t}} \exp\left(-\frac{\lambda\,\cdots}{\cdots}\right)$ …
where $\boldsymbol{v}$ is the concatenation of the vectors $\boldsymbol{\mu}\sigma_{i}$. The inequality $(a)$ holds by Lemma 19, the equality $(b)$ holds by Lemma 18, and the inequality $(c)$ …
XIV Technical Tools

Lemma 16 (Proposition 2.2 in [10]). For all $\alpha > 0$,
\[
I\left(\alpha\right) = 2\log\left(\frac{1 + \sqrt{1 + \alpha^{-1}}}{2}\right).
\]
Lemma 17 (Lemma 2.2 in [10]). For $t\ge 2$ and $\alpha>0$, $S(\alpha,t)<t\,I(\alpha)$.
Lemma 18 (Moment generating function of quadratic form). Let $X\sim\mathcal{N}(\vec{\mu},\Sigma)$ and let $A$ be symmetric. We have
$$M_{X^{\top}AX}(t):=\mathbb{E}\left(e^{tX^{\top}AX}\right)=\frac{1}{\lvert I-2tA\Sigma\rvert^{1/2}}\,e^{-\frac{1}{2}\mu^{\top}\left[I-(I-2tA\Sigma)^{-1}\right]\Sigma^{-1}\mu}.$$
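Lemma 18 is easy to sanity-check numerically: the closed form should agree with a Monte Carlo average of $e^{tX^{\top}AX}$. The specific $\mu$, $\Sigma$, $A$, and $t$ below are arbitrary illustrative choices, with $t$ small enough that $I-2tA\Sigma$ stays positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ N(mu, Sigma) in 2-d, A symmetric (all values are arbitrary examples)
mu = np.array([0.5, -0.3])
Sigma = np.array([[1.0, 0.2], [0.2, 0.8]])
A = np.array([[0.4, 0.1], [0.1, 0.3]])
t = 0.2

# Closed form: |I - 2tA Sigma|^{-1/2} exp(-1/2 mu^T [I - (I - 2tA Sigma)^{-1}] Sigma^{-1} mu)
M = np.eye(2) - 2 * t * A @ Sigma
closed = np.linalg.det(M) ** -0.5 * np.exp(
    -0.5 * mu @ (np.eye(2) - np.linalg.inv(M)) @ np.linalg.inv(Sigma) @ mu
)

# Monte Carlo estimate of E[exp(t X^T A X)]
X = rng.multivariate_normal(mu, Sigma, size=1_000_000)
mc = np.exp(t * np.einsum("ni,ij,nj->n", X, A, X)).mean()

print(closed, mc)
```

The two numbers agree to within Monte Carlo error.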
Lemma 19 (Tail bound of normal distribution). Let $X\sim\mathcal{N}(0,1)$. For $t>0$, we have
$$\mathbb{P}(X\ge t)\le\exp(-t^{2}/2).$$
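As a quick check of Lemma 19, the exact standard-normal tail $\mathbb{P}(X\ge t)=\tfrac{1}{2}\operatorname{erfc}(t/\sqrt{2})$ indeed sits below the bound $\exp(-t^2/2)$ for a few sample values of $t$:

```python
import math

def gauss_tail(t: float) -> float:
    # Exact P(X >= t) for X ~ N(0,1), via the complementary error function
    return 0.5 * math.erfc(t / math.sqrt(2.0))

for t in [0.5, 1.0, 2.0, 3.0, 5.0]:
    exact, bound = gauss_tail(t), math.exp(-t * t / 2.0)
    assert exact <= bound
    print(f"t={t}: exact={exact:.3e}  bound={bound:.3e}")
```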
Lemma 20 (Hoeffding's inequality). Let $X_1,\ldots,X_n$ be independent random variables such that $a_i\le X_i\le b_i$ almost surely, and let $S_n=\sum_{i=1}^{n}X_i$. Then for all $t>0$,
$$\mathbb{P}\left(\lvert S_n-\mathbb{E}[S_n]\rvert\ge t\right)\le 2\exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}(b_i-a_i)^{2}}\right).$$
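A small simulation illustrates Hoeffding's bound. Uniform variables on $[0,1]$ (so that $\sum_i(b_i-a_i)^2=n$) and the deviation level $t$ are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, t = 100, 100_000, 10.0

# X_i ~ Uniform[0, 1], hence a_i = 0, b_i = 1 and sum_i (b_i - a_i)^2 = n
X = rng.random((trials, n))
dev = np.abs(X.sum(axis=1) - n / 2)  # |S_n - E[S_n]|, since E[S_n] = n/2

empirical = np.mean(dev >= t)
bound = 2 * np.exp(-2 * t**2 / n)

print(empirical, bound)
```

The empirical tail probability is far below the bound, as expected (Hoeffding is not tight for sums of uniforms).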
Lemma 21 (Proposition 2.1 in [10]). Let $L^{C_t}$ be the Laplacian matrix of a cycle graph consisting of $t$ nodes. The eigenvalues of $L^{C_t}$ are $2-2\cos\left(\frac{2\pi k}{t}\right)$ for $k=0,1,\ldots,t-1$.
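The stated spectrum is the standard circulant result and can be verified directly by building the cycle Laplacian and comparing its eigenvalues with $2-2\cos(2\pi k/t)$; the choice $t=8$ below is arbitrary:

```python
import numpy as np

t = 8  # cycle C_t on t nodes (arbitrary example size)

# Laplacian of the cycle: degree 2 on the diagonal, -1 to both neighbours,
# plus the wrap-around edge between node 0 and node t-1
L = 2 * np.eye(t) - np.eye(t, k=1) - np.eye(t, k=-1)
L[0, -1] = L[-1, 0] = -1

computed = np.sort(np.linalg.eigvalsh(L))
expected = np.sort(2 - 2 * np.cos(2 * np.pi * np.arange(t) / t))

print(computed)
```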
Lemma 22 (Lemma 8.1 in [40]: Tail bound for noncentral chi-squared distribution). Let $X$ be a noncentral $\chi^{2}$ variable with $D$ degrees of freedom and noncentrality parameter $B$. Then for all $x>0$,
$$\mathbb{P}\left[X\ge(D+B)+2\sqrt{(D+2B)x}+2x\right]\le\exp(-x).$$
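The tail bound of Lemma 22 can be checked by Monte Carlo, representing the noncentral $\chi^2$ as a sum of squared shifted Gaussians; $D$, $B$, and $x$ below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
D, B, x = 5, 3.0, 2.0
trials = 400_000

# Noncentral chi^2 with D degrees of freedom and noncentrality B:
# sum of squares of N(delta_j, 1) with sum_j delta_j^2 = B
delta = np.zeros(D)
delta[0] = np.sqrt(B)
samples = ((rng.standard_normal((trials, D)) + delta) ** 2).sum(axis=1)

threshold = (D + B) + 2 * np.sqrt((D + 2 * B) * x) + 2 * x
empirical = np.mean(samples >= threshold)
bound = np.exp(-x)

print(empirical, bound)
```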
Acknowledgment

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00408003 and No. 2021R1C1C11008539).
Graph Knowledge Distillation to Mixture of Experts

Abstract

In terms of accuracy, Graph Neural Networks (GNNs) are the best architectural choice for the node classification task. Their drawback in real-world deployment is the latency that emerges from the neighbourhood processing operation. One solution to the latency issue is to perform knowledge distillation from a trained GNN …
1 Introduction

Graphs can be used to encode the dependencies between data samples. The impressive performance of Graph Neural Networks (GNNs) shows that taking into account the structural information increases the quality of prediction on tasks like product prediction on co-purchasing graphs or paper category prediction on citation graphs …
2 Related work

2.1 GNN-to-MLP Knowledge Distillation

Knowledge distillation from a Graph Neural Network (GNN) into a Multi-Layer Perceptron (MLP) promotes inference efficiency by avoiding the aggregation over neighbourhood nodes. Yang et al. (2021) presented one of the first distillation attempts, employing a student model that combines label propagation with an MLP …
2.2 Mixture-of-Experts

A Sparse Mixture-of-Experts (MoE) model is a weighted combination of similarly structured models with dynamically computed weights (Shazeer et al., 2016; Gross et al., 2017; Zhang et al., 2021a; Li et al., 2022; Dryden & Hoefler, 2022; Chi et al., 2022; Komatsuzaki et al., 2022; Pavlitska et al., 2023). For any sample, …
3 Background

We denote a graph by $\mathcal{G}=(\mathcal{V},\mathcal{E},X)$, where $\mathcal{V}$ is a set of $N$ nodes, $\mathcal{E}$ is a set of edges between nodes, and $X\in\mathbb{R}^{N\times d}$ represents a matrix with each row being a vector of $d$ node features associated …
4 Methodology

We now introduce our distillation approach, which uses a Mixture-of-Experts (MoE) model. The method starts with the training of a teacher GNN. The teacher model is used to produce soft-labels for the knowledge distillation (see Section 4.2). The knowledge distillation setup uses a combination of reliable sampling and …
4.1 Spatial routing by memory

We use the standard formulation for a Mixture-of-Experts layer, introduced by Shazeer et al. (2016):
$$h_{l}=\sum_{i=1}^{E}G(h_{l-1})_{i}\,f_{i}(h_{l-1}),\qquad(1)$$
where $G:\mathbb{R}^{d'}\to[0,1]^{E}$ …
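Equation (1) with a sparse (top-$k$) gate can be sketched in a few lines. The linear gate, $\tanh$ experts, and top-$k$ renormalisation below are illustrative placeholders under assumed shapes, not the paper's RbM architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d, E, k = 16, 4, 2  # hidden width, number of experts, top-k routing (arbitrary)

# Hypothetical parameters: a linear gate and one linear layer per expert
W_gate = rng.normal(scale=0.1, size=(d, E))
W_expert = rng.normal(scale=0.1, size=(E, d, d))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_layer(h):
    """h_l = sum_i G(h)_i * f_i(h), with G sparse over the top-k experts."""
    logits = h @ W_gate
    top = np.argsort(logits)[-k:]      # indices of the k largest gate logits
    gates = np.zeros(E)
    gates[top] = softmax(logits[top])  # renormalise over the selected experts
    return sum(gates[i] * np.tanh(h @ W_expert[i]) for i in top)

h = rng.normal(size=d)
out = moe_layer(h)
print(out.shape)  # (16,)
```

Only the $k$ selected experts are evaluated, which is what makes the sparse gate cheaper than a dense mixture at inference time.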
$$h'=\exp(s)\odot\sum_{i=1}^{E}G_{RbM}(h)_{i}\,f_{i}\big(\exp(att_{i})\odot h\big),\qquad(5)$$
where $s\in\mathbb{R}$ is a learnable output scaler, $att_{i}$ …
4.2 Knowledge Distillation

We distill knowledge from the pretrained GNN using supervised learning, regularized by the KL-divergence between the class distribution $\hat{y}_{v}$ predicted by the student and the class distribution $\hat{y}'_{v}$ predicted by the teacher. The knowledge distillation …
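The objective described here, supervised cross-entropy plus a KL term against the teacher's distribution, can be sketched as follows. The weighting `lam` and the exact combination are illustrative assumptions (the paper's full objective, equation 11, adds further loss terms):

```python
import numpy as np

def kd_loss(student_logits, teacher_probs, label, lam=0.5, eps=1e-12):
    """Supervised cross-entropy plus KL(teacher || student) regularisation."""
    s = np.exp(student_logits - student_logits.max())
    s /= s.sum()  # student class distribution via softmax
    ce = -np.log(s[label] + eps)
    kl = np.sum(teacher_probs * (np.log(teacher_probs + eps) - np.log(s + eps)))
    return lam * ce + (1 - lam) * kl

# Toy example: 3 classes, student fairly aligned with the teacher
student_logits = np.array([2.0, 0.5, -1.0])
teacher_probs = np.array([0.7, 0.2, 0.1])
loss = kd_loss(student_logits, teacher_probs, label=0)
print(loss)
```

A student that contradicts both the label and the teacher incurs a much larger loss than one aligned with them.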
4.3 MoE initialization

In most implementations, the embeddings of the experts are initialised randomly (Chi et al., 2022; Li et al., 2022; Yan & Li, 2023). In our approach, we desire more informative initialisations, because the embeddings operate in the input space. We therefore apply a pretraining stage, following Zhang et al. (2021a). …
5 Experiments

We evaluate our model on nine real-world datasets. We show that our model can utilize additional parameters more efficiently than a parameter-inflated MLP, an ensemble of MLPs, or a vanilla mixture-of-experts model. We conduct an ablation study to show how the various loss terms influence accuracy.
5.1 Experimental setting

Datasets. To conduct our experiments we use nine real-world datasets: Cora (Sen et al., 2008), Citeseer (Giles et al., 1998), Pubmed (McCallum et al., 2000), Amazon-Photo, Amazon-Computers, Academic-CS, Academic-Physics (Shchur et al., 2018), OGB-ArXiv and OGB-Products (Hu et al., 2020). For the Cora, Citeseer, and Pubmed …
5.2 Performance comparison

We compare our method to the GLNN, KRD, NOSMOG and CoHOp baselines. Results are presented in Table 1 for GraphSAGE as the teacher, and in Table 2 for more advanced teacher GNNs. We use RevGNN-Wide (Li et al., 2021) and DRGAT (Zhang et al., 2023) as the advanced teachers, because they are among the best performing GNN models …
5.3 Comparing with ensemble and vanilla MoE

To additionally explore whether our approach is an efficient mechanism for exploiting additional parameters, we construct two baselines: a soft-voting ensemble of MLPs and a vanilla MoE. The soft-voting ensemble consists of several MLP students with the same structure, but different random initializations. …
5.4 Ablation study, label propagation, and number of experts

Loss terms. We now examine whether each component of equation 11 is important for achieving better performance. Our model includes three additional loss terms for each RbM layer (see equation 11): commitment loss (equation 6), self-similarity loss (equation 7), and load balance loss (equation 8). In order to conduct …
5.5 Routing spatial structure analysis

To analyse the routing spatial structure qualitatively, we use t-SNE (Van der Maaten & Hinton, 2008), with perplexity 30 and PCA initialization, to produce 2-d visualizations of the router embedding space for RbM and a vanilla MoE in Figure 4. These correspond to the hidden representation $h$ for RbM …
6 Conclusion and Future work

In this paper we focused on the task of distillation from a graph neural network and introduced RbM, a Mixture of Experts model that encourages strong expert specialization at the routing level. We established how parameter inflation can positively affect the performance and showed practical application of MoE in the k…
Appendix A Datasets description

In Table 7 we provide the key statistics of the datasets we used to evaluate our models: Cora (Sen et al., 2008), Citeseer (Giles et al., 1998), Pubmed (McCallum et al., 2000), Amazon-Photo, Amazon-Computers, Academic-CS, Academic-Physics (Shchur et al., 2018), OGB-ArXiv and OGB-Products (Hu et al., 2020).
Appendix B Hardware specification

Our experiments were conducted using an NVIDIA Tesla V100 GPU with 32GB of memory. The machine has an Intel Xeon Gold 6140 CPU with a clock frequency of 2.30GHz and a total thread count of 36. All computations, with the exception of the clustering, were executed on the GPU. For Cora, Citeseer, PubMed, Amazon-Comp, Amazon-Photo …
Appendix C Hyperparameters tuning protocol

We use Ray Tune (Liaw et al., 2018) to tune model hyperparameters. We tuned the following model structure hyperparameters: (i) the dropout rate was selected from $[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]$ and applied to all dropout layers in the model; (ii) the total number of experts …
Appendix D Selecting the number of experts

As discussed in the main text, the performance of RbM does vary as the total number of experts is changed. During our experiments we use the same number of experts/clusters for all RbM layers in order to reduce the number of hyperparameters. We found that there is an optimal number of experts for RbM that can be identified …
Appendix E GCN Teacher Model

We investigate whether our model is compatible with an alternative teacher GNN and demonstrates the same advantages over the baselines. The main paper provides results for GraphSAGE as the teacher, together with some results for more advanced GNN teachers. Table 8 provides additional results for experiments in a transductive …
Appendix F Complexity analysis

In this section we characterize the complexity of the models. All the layers in each model are identically structured, and the number of layers is the same as the teacher's number of layers. Thus, we now characterize the parameter count and computational complexity of a single layer for MLP, MoE and RbM. We also contrast …
Bayes without Underfitting: Fully Correlated Deep Learning Posteriors via Alternating Projections

Marco Miani† (Technical University of Denmark, mmia@dtu.dk)
Hrittik Roy† (Technical University of Denmark, hroy@dtu.dk)
Søren Hauberg (Technical University of Denmark) …
Abstract

Bayesian deep learning all too often underfits so that the Bayesian prediction is less accurate than a simple point estimate. Uncertainty quantification then comes at the cost of accuracy. For linearized models, the null space of the generalized Gauss-Newton matrix corresponds to parameters that preserve the training predictions …
1 Underfitting in Bayesian deep learning

Bayesian deep learning tends to underfit. Numerous studies demonstrate that marginalizing approximate weight posteriors yields less accurate predictions than applying a maximum a posteriori (map) point estimate (Wenzel et al., 2020; Daxberger et al., 2021a; Zhang et al., 2024; Kristiadi et al., 2022). This significantly …
What is underfitting? Deep learning performs very well when the training data is subject to limited observation noise. In contrast, Bayesian deep learning often underfits in the sense that the Bayesian prediction deviates significantly from a point estimate prediction on the training data $\mathcal{D}$, i.e.
$$\mathbb{E}_{\boldsymbol{\theta}\sim q}\left[f(\boldsymbol{\theta},\mathbf{x})\right]\neq f(\boldsymbol{\theta}_{\mathrm{map}},\mathbf{x})$$
…
Why is this beneficial? Our proposed posterior reflects the degrees of freedom in the model that fundamentally cannot be determined by even noise-free data. Predicting according to this distribution gives reliable out-of-distribution detection and general uncertainty quantification without underfitting. Our approach captures correlations between …
Why is this difficult? We will soon see that the relevant subspace on which to project is given by the kernel (i.e. null space) of a matrix that is quadratic in the number of model parameters. Even for models of modest size, this matrix is too large to be stored in memory, and direct projection methods cannot be applied. We propose a linear-time …
Paper outline. Sec. 2 gives the background to derive our approach. A wider discussion of related work is postponed to Sec. 5. We develop our approach in two steps. First, Sec. 3 describes our proposed posterior approximation, while Sec. 4 derives an efficient sampling algorithm. Empirical investigations are conducted in Sec. 6.
2 Background and notation

Notation. Let $f:\mathbb{R}^{P}\times\mathbb{R}^{I}\rightarrow\mathbb{R}^{O}$ denote a neural network with parameters $\boldsymbol{\theta}\in\mathbb{R}^{P}$ that maps inputs $\mathbf{x}\in\mathbb{R}^{I}$ to outputs $\mathbf{y}\in\mathbb{R}^{O}$ …
Here $\mathrm{ggn}_{\boldsymbol{\theta}_{\mathrm{map}}}$ is the so-called generalized Gauss-Newton matrix, $\alpha$ is the prior precision, and $\mathbf{H}_{\boldsymbol{\theta}}(\mathbf{x})=-\partial^{2}_{f(\boldsymbol{\theta},\mathbf{x})}\log p\left(\mathbf{y}\,\middle|\,f(\boldsymbol{\theta},\mathbf{x})\right)\in\mathbb{R}^{O\times O}$ …
3 The proposed approximate posterior

We next propose a fully correlated Gaussian posterior that is guaranteed to not underfit. Unless otherwise stated, the presented results are novel contributions and proofs of all theorems can be found in the appendix. As alluded to, we propose restricting the posterior covariance to a particular subspace of the parameter …
The projected posterior never underfits. When using the linearized neural network, $f_{\mathrm{lin}}^{\boldsymbol{\theta}_{\mathrm{map}}}(\boldsymbol{\theta},\mathbf{x})=f(\boldsymbol{\theta}_{\mathrm{map}},\mathbf{x})+\mathbf{J}_{\boldsymbol{\theta}_{\mathrm{map}}}(\mathbf{x})(\boldsymbol{\theta}-\boldsymbol{\theta}_{\mathrm{map}})$ …
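The linearized network is a first-order Taylor expansion in the parameters. A toy example with a numerical Jacobian shows how closely it tracks $f$ near $\boldsymbol{\theta}_{\mathrm{map}}$; the three-parameter "network" below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny toy "network" f(theta, x) with 3 parameters and scalar output
def f(theta, x):
    return np.tanh(theta[0] * x + theta[1]) * theta[2]

theta_map = np.array([0.8, -0.1, 1.3])
x = 0.5

# Central-difference Jacobian of f with respect to theta at theta_map
def jacobian(theta, x, h=1e-6):
    return np.array([
        (f(theta + h * e, x) - f(theta - h * e, x)) / (2 * h)
        for e in np.eye(len(theta))
    ])

J = jacobian(theta_map, x)

def f_lin(theta, x):
    # First-order Taylor expansion of f around theta_map
    return f(theta_map, x) + J @ (theta - theta_map)

# Near theta_map the linearization closely tracks the network
theta = theta_map + 0.01 * rng.standard_normal(3)
print(abs(f(theta, x) - f_lin(theta, x)))
```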
Lemma 3.1. The projected posterior (4) is supported on equal functions on the training data, i.e. $\forall\mathbf{x}\in\mathcal{D}$,
$$f_{\mathrm{lin}}^{\boldsymbol{\theta}_{\mathrm{map}}}(\boldsymbol{\theta},\mathbf{x})=f(\boldsymbol{\theta}_{\mathrm{map}},\mathbf{x})$$
…
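Lemma 3.1 can be illustrated in the simplest possible setting, a linear model, where $f_{\mathrm{lin}}$ is exact and the per-datum Jacobian is just the input row. Sampling $\boldsymbol{\theta}_{\mathrm{map}}+\mathbf{U}\mathbf{U}^{\top}\boldsymbol{\epsilon}$ with $\mathbf{U}$ an orthonormal basis of the Jacobian's null space leaves every training prediction unchanged (all sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 3, 6  # fewer data points than parameters, so the Jacobian has a null space

X = rng.standard_normal((N, P))   # stacked per-datum Jacobians of a linear model
theta_map = rng.standard_normal(P)

# Orthonormal basis U of ker(X) via SVD; U U^T projects onto the null space
_, _, Vt = np.linalg.svd(X)
U = Vt[N:].T

# Sample from the "projected posterior": theta_map + U U^T eps
eps = rng.standard_normal(P)
theta = theta_map + U @ (U.T @ eps)

# Training predictions are exactly preserved
print(np.abs(X @ theta - X @ theta_map).max())
```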
Lemma 3.2. Let $\mathbf{J}_{\boldsymbol{\theta}}=[\mathbf{J}_{\boldsymbol{\theta}}(\mathbf{x}_{1})^{\top}\ \ldots\ \mathbf{J}_{\boldsymbol{\theta}}(\mathbf{x}_{N})^{\top}]^{\top}$ and $\mathbf{x}_{\mathrm{test}}\in\mathbb{R}^{I}$ …
Theorem 3.3. For $\alpha>0$, the predictive variance of lla on any training datapoint is positive and bounded:
$$\frac{O\gamma^{2}}{\gamma^{2}+\alpha}\;\le\;\mathrm{Var}_{\boldsymbol{\theta}\sim q_{\mathrm{lla}}}\,f_{\mathrm{lin}}^{\boldsymbol{\theta}_{\mathrm{map}}}(\boldsymbol{\theta},\mathbf{x})\;\le\;\frac{O\lambda^{2}}{\lambda^{2}+\alpha}\qquad\text{for }\mathbf{x}\in\mathcal{D}.$$
Underfitting in existing models. The above analysis justifies the projected posterior and also sheds light on why current Bayesian approximations often underfit. For efficiency, mean field approximations of the posterior covariance are quite common, e.g. diagonal or Kronecker factored covariances (Ritter et al., 2018; Martens and Grosse, 2015). These …
Computational benefits. Beyond the above theoretical motivations, our projected covariance also brings computational benefits. Since $\mathbf{U}$ is an orthonormal basis, the covariance $\mathbf{U}\mathbf{U}^{\top}$ is a projection matrix, implying that its eigenvalues are all 0 or 1. This, in turn, implies that $\mathbf{U}\mathbf{U}^{\top}=\cdots$ …
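The two properties used here, eigenvalues in $\{0,1\}$ and idempotency of $\mathbf{U}\mathbf{U}^{\top}$, are easy to confirm numerically for a random orthonormal $\mathbf{U}$ (sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
P, r = 8, 3  # ambient dimension and subspace rank (arbitrary example)

# Random orthonormal basis U (P x r) via QR decomposition
U, _ = np.linalg.qr(rng.standard_normal((P, r)))
Pi = U @ U.T  # projection matrix onto span(U)

# Eigenvalues are 1 (with multiplicity r) and 0 (with multiplicity P - r),
# so Pi is idempotent: applying it twice is the same as applying it once
eig = np.sort(np.linalg.eigvalsh(Pi))
print(eig)

eps = rng.standard_normal(P)
sample = Pi @ eps
assert np.allclose(Pi @ sample, sample)
```

Because the projection equals any power of itself, sampling a zero-mean Gaussian with covariance $\mathbf{U}\mathbf{U}^{\top}$ reduces to $\mathbf{U}\mathbf{U}^{\top}\boldsymbol{\epsilon}$ with standard-normal $\boldsymbol{\epsilon}$, with no matrix decomposition of the covariance required.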
Tractable model selection. A benefit of Bayesian neural networks is that we can choose $\alpha$ by maximizing the marginal likelihood, $p(\mathcal{D}|\alpha)$, on the training data, i.e. without relying on validation data (MacKay, 1995). However, since the marginal likelihood of the true posterior is intractable, it is common …
Lemma 3.4. The marginal likelihood for the projected posterior (4) has a globally optimal $\alpha$ given by
$$\alpha^{*}=\frac{\lVert\boldsymbol{\theta}_{\mathrm{map}}\rVert^{2}}{P-\mathrm{Tr}\left(\mathbb{I}_{P}-\mathcal{P}(\mathrm{ggn}_{\boldsymbol{\theta}_{\mathrm{map}}})\right)}.$$
Links to LLA. The projected posterior can be viewed as a fully correlated tractable approximation to the lla. The following statement shows that the difference between the two approximate posteriors is bounded.