the following conjecture.

Conjecture 12. Given the definitions above, we have
$$\limsup_{n\to\infty}\ \sup_{A\in\{\pm 1\}^{n\times n}} \frac{1}{\sqrt{n}}\,\mathrm{disc}(A) > 1.$$

Hadamard matrices. A natural place to search for a family proving a lower bound is to use the Sylvester construction of Hadamard matrices. A Hadamard matrix is a $\pm 1$ matrix whose columns are all orthogonal (in other words, after scaling by $\frac{1}{\sqrt{n}}$ it is an orthogonal matrix). In 1867 James Joseph Sylvester proposed the following construction for sizes $n$ that are powers of 2: $H_0 = 1$, and $H_k$ is the $2^k \times 2^k$ matrix given by $H_k = H_1 \otimes H_{k-1}$. In other words,
$$H_k = \begin{pmatrix} H_{k-1} & H_{k-1} \\ H_{k-1} & -H_{k-1} \end{pmatrix}.$$
Recall that $n = 2^k$. Since $\frac{1}{\sqrt{n}} H_k$ is an orthogonal matrix, it is easy to see that $\|H_k x\|_2 = n$ for all $x \in \{\pm 1\}^n$, and thus $\mathrm{disc}(H_k) \ge \sqrt{n}$. This is not tight for $H_1$ (as its discrepancy is 2); however, it is tight for $H_2$: e.g. taking the eigenvector $y = (1,1,1,-1)$. To give an upper bound on $\mathrm{disc}(H_k)$ we proceed with an explicit construction of a $\pm 1$ vector, using the (optimal) eigenvector $y$ for $H_2$. Let $x^\natural_{(0)} = 1$, $x^\natural_{(1)} = (1,1)$, and for $k \ge 2$, $x^\natural_{(k)} \in \{\pm 1\}^{2^k}$ is given by
$$x^\natural_{(k)} = y \otimes x^\natural_{(k-2)} = \begin{pmatrix} x^\natural_{(k-2)} \\ x^\natural_{(k-2)} \\ x^\natural_{(k-2)} \\ -x^\natural_{(k-2)} \end{pmatrix}.$$
Since $H_k = H_1 \otimes (H_1 \otimes H_{k-2}) = H_2 \otimes H_{k-2}$, the mixed-product property gives
$$H_k x^\natural_{(k)} = (H_2 \otimes H_{k-2})(y \otimes x^\natural_{(k-2)}) = (H_2 y) \otimes (H_{k-2} x^\natural_{(k-2)}) = (2y) \otimes (H_{k-2} x^\natural_{(k-2)}).$$
Inducting this argument shows that
$$\big\| H_k x^\natural_{(k)} \big\|_\infty = \begin{cases} \sqrt{2}\sqrt{2^k} & \text{if } k \text{ is odd}, \\ \sqrt{2^k} & \text{if } k \text{ is even}. \end{cases}$$
While this settles the discrepancy of $H_k$ for $k$ even, the odd case is, to the best of my knowledge, open.

Conjecture 13. Let $H_k$ be the $2^k \times 2^k$ Hadamard matrix obtained by the Sylvester construction (see above). For odd $k$, we have $\mathrm{disc}(H_k) = \sqrt{2}\sqrt{2^k}$.

Note that this would settle Conjecture 12.

Remark 6.1 (Boolean Fourier Analysis). I am not going to discuss this at length here, but since $H_k$ essentially corresponds to the Boolean Fourier transform for Boolean functions on $k$ variables, the discrepancy of $H_k$ has a natural interpretation in Boolean analysis. We point the reader to [O'D14] for more on this subject.
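The Sylvester recursion and the norm computation above are easy to check numerically for small $k$; the following is a sanity check we add (function names are ours, not from the text):

```python
import numpy as np

def sylvester(k):
    """Sylvester's construction: H_0 = (1), H_k = H_1 ⊗ H_{k-1}."""
    H, H1 = np.array([[1]]), np.array([[1, 1], [1, -1]])
    for _ in range(k):
        H = np.kron(H1, H)
    return H

def x_natural(k):
    """x♮_(0) = 1, x♮_(1) = (1,1), and x♮_(k) = y ⊗ x♮_(k-2) with y = (1,1,1,-1)."""
    y = np.array([1, 1, 1, -1])
    x = np.array([1]) if k % 2 == 0 else np.array([1, 1])
    for _ in range(k // 2):
        x = np.kron(y, x)
    return x

for k in range(8):
    val = np.abs(sylvester(k) @ x_natural(k)).max()
    # equals √(2^k) for even k and √2·√(2^k) for odd k, i.e. 2^⌈k/2⌉
    assert val == 2 ** ((k + 1) // 2)
```

The loop confirms the two cases of the displayed formula up to $k = 7$; by the Kronecker mixed-product identity this is exactly the induction described in the text.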
On a final note, I could not mention Hadamard matrices without stating a famous conjecture regarding their existence.

Conjecture 14 (Hadamard Conjecture). For any positive $n$ a multiple of 4, there exists an $n \times n$ Hadamard matrix.

Entry 7: Sampling from the Sherrington-Kirkpatrick Model (AR)

In recent years, significant progress has been made in understanding the mixing behavior of Glauber dynamics for Ising models, particularly in the Sherrington-Kirkpatrick (SK) model. Despite the classical Dobrushin-uniqueness condition being too restrictive for this model, recent breakthroughs employing techniques like stochastic localization and spectral analysis have improved the understanding of the fast-mixing regime. In this entry, we will discuss these improved mixing bounds for Glauber dynamics in the SK model and the open questions that remain.

Let us start with some preliminaries. Consider the following measure on the hypercube $\{\pm 1\}^n$:
$$\mu_{J,h}(x) \propto \exp\Big( \tfrac{1}{2} x^T J x + h^T x \Big),$$
where $\propto$ denotes equality up to a normalizing constant. Here, the interaction matrix $J \in \mathbb{R}^{n\times n}$ encodes pairwise interactions between spins, while $h$ represents the influence of an external field. We will particularly focus on $J = \beta W$ being a random matrix with $W \sim \mathrm{GOE}(n)$ and inverse temperature $\beta > 0$. This model was introduced by Sherrington and Kirkpatrick [SK75] and we refer to it as the SK-model. The
https://arxiv.org/abs/2504.20539v1
reader can focus on the case $h = 0$. All statements below hold with high probability over the disorder $W$.

Glauber dynamics is an MCMC algorithm which chooses a site $i \in [n]$ uniformly at random and updates the spin according to $\mu$ conditioned on the remaining spins. The dynamics can be represented by the following transition kernel:
$$P(x, y) = \begin{cases} \frac{1}{n}\,\mu(X_i = y_i \mid X_{-i} = x_{-i}) & \text{if } d_H(x, y) = 1, \\[2pt] 1 - \frac{1}{n} \sum_{j=1}^n \mu(X_j \ne x_j \mid X_{-j} = x_{-j}) & \text{if } d_H(x, y) = 0, \\[2pt] 0 & \text{otherwise}, \end{cases}$$
representing the probability of transitioning from $x$ to $y$. Here, $d_H$ denotes the Hamming distance on the hypercube. Mixing in time $T_\epsilon$ means that, regardless of the starting point, the measure of the Markov chain at time $T_\epsilon$ is $\epsilon$-close to its stationary measure $\mu$ with respect to the total variation distance (although other notions of distance can also be used). The reader should think of $\epsilon$ as a small constant.

One celebrated and generally sufficient condition for Glauber dynamics mixing in $O(n \log n)$ time is the Dobrushin-uniqueness condition [Dob68], which requires $\max_i \sum_j |J_{ij}| < 1$. While this condition is tight for some models (e.g. the Curie-Weiss model), it only proves fast mixing for very high temperatures ($\beta = O(\frac{1}{\sqrt{n}})$) in the SK-model. In fact, a long-standing open problem asked to extend the results above to some $\beta = \Omega(1)$. Indeed, in recent years, new proof techniques based on stochastic localization schemes have emerged and, when applied to the SK-model, significantly improved the bounds on $\beta$ for guaranteeing fast mixing. Eldan, Koehler and Zeitouni [EKZ22] were able to prove the following general (deterministic) spectral condition: as long as
$$\lambda_{\max}(J) - \lambda_{\min}(J) < 1 - \delta$$
for some $\delta > 0$, Glauber dynamics mixes in $O(n^2)$ time with respect to total variation distance (big-$O$ notation may hide constants depending on $\epsilon$). Applied to the SK-model this guarantees fast mixing for inverse temperature $\beta < \frac{1}{4}$. The bound on the width of the spectrum of the interaction matrix proves to be tight again in the Curie-Weiss model.
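A minimal sketch of one such single-site update, for the measure $\mu_{J,h}$ defined above, is given below. This is our own illustrative code, not from any of the cited works: the disorder is a symmetric Gaussian matrix used as a stand-in for $\beta \cdot \mathrm{GOE}(n)$, and names like `glauber_step` are hypothetical.

```python
import numpy as np

def glauber_step(x, J, h, rng):
    """One Glauber update: pick a uniform site i and resample x_i from the
    conditional law µ(X_i = · | X_{-i} = x_{-i}), which for symmetric J is
    ∝ exp(x_i · (Σ_{j≠i} J_ij x_j + h_i))."""
    n = len(x)
    i = rng.integers(n)
    field = J[i] @ x - J[i, i] * x[i] + h[i]      # local field, diagonal removed
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))   # µ(X_i = +1 | rest)
    x[i] = 1 if rng.random() < p_plus else -1
    return x

rng = np.random.default_rng(0)
n, beta = 50, 0.2
G = rng.normal(size=(n, n))
W = (G + G.T) / np.sqrt(2 * n)       # symmetric Gaussian stand-in for GOE(n)
J, h = beta * W, np.zeros(n)
x = rng.choice(np.array([-1, 1]), size=n)
for _ in range(10 * n):              # a few sweeps; real mixing needs many more
    x = glauber_step(x, J, h, rng)
```

The sigmoid in `p_plus` comes from the conditional log-odds $2(\sum_{j\ne i} J_{ij} x_j + h_i)$, which follows directly from the form of $\mu_{J,h}$.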
Moreover, based on the low-degree conjecture, Kunisky [Kun24] gave evidence that no polynomial-time sampling algorithm exists that can succeed in general for all $J$ with $\lambda_{\max}(J) - \lambda_{\min}(J) \le 1 + \delta$. Hence, to improve the bounds in the SK-model using spectral analysis, it does not suffice to measure the distance between the largest and the smallest eigenvalue of the interaction matrix; additional properties of the interaction matrix in the SK-model have to be used. One nice example is the symmetry of the spectrum: in contrast to the all-ones matrix which defines the Curie-Weiss interaction matrix, the spectrum of $W$ is almost symmetric around 0. Anari, Koehler and Vuong [AKV24] used exactly this property to improve the bound on $\beta$: when $\frac{|\lambda_{\max}(J)|}{|\lambda_{\min}(J)|} = 1 + o(1)$, Glauber dynamics mixes with respect to Kullback-Leibler divergence in $O(n \log n)$ time, provided $\lambda_{\max}(J) - \lambda_{\min}(J) < 1.18$ and $\lambda_{\min}(J) < 0$. The proof is based on a trickle-down equation in a stochastic localization scheme, and the authors introduce novel techniques to bound the operator norm of covariance matrices, depending both on the ratio of $\lambda_{\max}(J)$ and $\lambda_{\min}(J)$ and on $\lambda_{\max}(J) - \lambda_{\min}(J)$. This leads to the currently best-known upper bound $\beta < 0.295$, and it remains an open question whether this bound can be improved up to 1:

Open Problem 15. Consider the SK-model with an arbitrary external
field. Does Glauber dynamics mix fast (meaning in polynomial time) up to the replica symmetry breaking threshold $\beta^* = 1$?

El Alaoui, Montanari and Sellke [AMS22] proposed an algorithm for sampling from the SK-model without an external field that applies a stochastic localization scheme and makes use of the Approximate Message Passing framework. They proved fast mixing in Wasserstein distance (which is a weaker metric than total variation distance) up to $\beta < \frac{1}{2}$. Celentano [Cel24] extended the argument to all $\beta$ up to the threshold $\beta^* = 1$. Moreover, El Alaoui, Montanari and Sellke proved in the same paper [AMS22] that it is impossible for stable algorithms to sample from the SK-model without external field for $\beta > 1$, which also rules out fast mixing for their algorithm.

Updates

While we generally do not expect to continuously update with progress on the questions discussed here, we try to include such updates if they happen (or we learn of them) before the posting of this manuscript. In this context, we are happy to report on two lines of progress: Conjecture 4 was answered by McRae in [McR25], following up on work of Rakoto Endor and Waldspurger [EW24]. Furthermore, Rares-Darius Buhai [Buh25] showed that Conjecture 13 is false by identifying a connection between the discrepancy of $H_k$ and a theory of non-linearity in Boolean functions dating back, at least, to the work of Paterson and Wiedemann [PW83]; unfortunately, to the best of our knowledge, this does not rule out that the $H_k$ family of matrices can be used to establish Conjecture 12.

References

[ABKS23] P. Abdalla, A. S. Bandeira, M. Kassabov, and V. Souza. Expander graphs are globally synchronizing. Preprint, available on arXiv, 2023.

[AKV24] Nima Anari, Frederic Koehler, and Thuy-Duong Vuong. Trickle-down in localization schemes and applications. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, pages 1094-1105, 2024.

[AMS22] Ahmed El Alaoui, Andrea Montanari, and Mark Sellke.
Sampling from the Sherrington-Kirkpatrick Gibbs measure via algorithmic stochastic localization. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 323-334. IEEE, 2022.

[Ban16] Afonso S. Bandeira. Ten lectures and forty-two open problems in the mathematics of data science. Available at https://people.math.ethz.ch/~abandeira/TenLecturesFortyTwoProblems.pdf, 2016.

[Ban24] Afonso S. Bandeira. Ten open problems involving matrices, randomness, graphs, and more. In Oberwolfach Report for Workshop 2417 (21.04.2024-26.04.2024). Available at https://publications.mfo.de/handle/mfo/4149, 2024.

[BAP05] Jinho Baik, Gérard Ben Arous, and Sandrine Péché. Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. The Annals of Probability, 33(5):1643-1697, 2005.

[BBvH23] A. S. Bandeira, M. Boedihardjo, and R. van Handel. Matrix concentration inequalities and free probability. Inventiones Mathematicae, (234):419-487, 2023.

[BCLS20] Afonso S Bandeira, Yutong Chen, Roy R Lederman, and Amit Singer. Non-unique
games over compact groups and orientation estimation in cryo-EM. Inverse Problems, 36(6):064002, April 2020.

[BCSvH24] A. S. Bandeira, G. Cipolloni, D. Schröder, and R. van Handel. Matrix concentration inequalities and free probability II. Two-sided bounds and applications. Preprint, available on arXiv, 2024.

[BJM23] N. Bansal, H. Jiang, and R. Meka. Resolving matrix Spencer conjecture up to poly-logarithmic rank. STOC, 2023.

[BKMZ22] A. S. Bandeira, D. Kunisky, D. Mixon, and X. Zeng. On the concentration of Gaussian Cayley matrices. Preprint, available on arXiv, 2022.

[BMMP24] Afonso S Bandeira, Antoine Maillard, Shahar Mendelson, and Elliot Paquette. Fitting an ellipsoid to a quadratic number of random points. To appear in Latin American Journal of Probability and Mathematical Statistics, 2024.

[Buh25] Rares-Darius Buhai. Private communication, 2025.

[Cel24] Michael Celentano. Sudakov-Fernique post-AMP, and a new proof of the local convexity of the TAP free energy. The Annals of Probability, 52(3):923-954, 2024.

[DJR22] Daniel Dadush, Haotian Jiang, and Victor Reis. A new framework for matrix discrepancy: Partial coloring bounds via mirror descent. STOC, 2022.

[DKWB23] Yunzi Ding, Dmitriy Kunisky, Alexander S Wein, and Afonso S Bandeira. Subexponential-time algorithms for sparse PCA. Foundations of Computational Mathematics, pages 1-50, 2023.

[Dob68] P. L. Dobruschin. The description of a random field by means of conditional probabilities and conditions of its regularity. Theory of Probability & Its Applications, 13(2):197-224, 1968.

[EKZ22] Ronen Eldan, Frederic Koehler, and Ofer Zeitouni. A spectral condition for spectral gap: fast mixing in high-temperature Ising models. Probability Theory and Related Fields, 182(3):1035-1051, 2022.

[ETB+24] Vittorio Erba, Emanuele Troiani, Luca Biggio, Antoine Maillard, and Lenka Zdeborová. Bilinear sequence regression: A model for learning from long sequences of high-dimensional tokens.
arXiv preprint arXiv:2410.18858, 2024.

[EW24] Faniriana Rakoto Endor and Irène Waldspurger. Benign landscape for Burer-Monteiro factorizations of MaxCut-type semidefinite programs. arXiv preprint arXiv:2411.03103, 2024.

[Gor88] Yehoram Gordon. On Milman's inequality and random subspaces which escape through a mesh in $\mathbb{R}^n$. In Geometric Aspects of Functional Analysis: Israel Seminar (GAFA) 1986-87, pages 84-106. Springer, 1988.

[GZ19] Tingran Gao and Zhizhen Zhao. Multi-frequency phase synchronization. In International Conference on Machine Learning, 2019.

[HKPX23] Jun-Ting Hsieh, Pravesh Kothari, Aaron Potechin, and Jeff Xu. Ellipsoid fitting up to a constant. In International Colloquium on Automata, Languages and Programming (ICALP), 2023.

[HRS22] Samuel B Hopkins, Prasad Raghavendra, and Abhishek Shetty. Matrix discrepancy from quantum communication. STOC, 2022.

[KBK24] Anastasia Kireeva, Afonso S Bandeira, and Dmitriy Kunisky. Computational lower bounds for multi-frequency group synchronization. arXiv preprint arXiv:2406.03424, 2024.

[KD23] Daniel Kane and Ilias Diakonikolas. A nearly tight bound for fitting an ellipsoid to Gaussian random points. In The Thirty Sixth Annual Conference on Learning Theory, pages 3014-3028. PMLR, 2023.

[KST21] M. Kassabov, S. H. Strogatz, and A. Townsend. Sufficiently dense Kuramoto networks are globally synchronizing. Chaos, 31, 2021.

[Kun23] Dmitriy Kunisky. The discrepancy of unsatisfiable matrices and a lower bound for the Komlós conjecture constant. SIAM J. Discrete Mathematics (SIDMA), 37(2), 2023.

[Kun24] Dmitriy Kunisky. Optimality of Glauber dynamics for general-purpose Ising model
sampling and free energy approximation. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 5013-5028. SIAM, 2024.

[Kur75] Yoshiki Kuramoto. Self-entrainment of a population of coupled non-linear oscillators. International Symposium on Mathematical Problems in Theoretical Physics, pages 420-422, 1975.

[Kur16] Yoshiki Kuramoto. On the low-rank approach for semidefinite programs arising in synchronization and community detection. COLT, 2016.

[MABB24] A. McRae, P. Abdalla, A. S. Bandeira, and N. Boumal. Nonconvex landscapes for $\mathbb{Z}_2$ synchronization and graph clustering are benign near exact recovery thresholds. Preprint, available on arXiv, 2024.

[MB23] Antoine Maillard and Afonso S Bandeira. Exact threshold for approximate ellipsoid fitting of random points. arXiv preprint arXiv:2310.05787, 2023.

[McR25] Andrew D. McRae. Benign landscapes for synchronization on spheres via normalized Laplacian matrices. arXiv preprint arXiv:2503.18801, 2025.

[Mek14] Raghu Meka. Discrepancy and beating the union bound. Windows On Theory blog post. Available at https://windowsontheory.org/2014/02/07/discrepancy-and-beating-the-union-bound, 2014.

[MK24] Antoine Maillard and Dmitriy Kunisky. Fitting an ellipsoid to random points: predictions using the replica method. IEEE Transactions on Information Theory, 2024.

[MR14] Andrea Montanari and Emile Richard. A statistical model for tensor PCA. Neural Information Processing Systems (NIPS 2014), 2014.

[MTM+24] Antoine Maillard, Emanuele Troiani, Simon Martin, Florent Krzakala, and Lenka Zdeborová. Bayes-optimal learning of an extensive-width neural network from quadratically many samples. arXiv preprint arXiv:2408.03733, 2024.

[O'D14] Ryan O'Donnell. Analysis of Boolean Functions. Cambridge Press, 2014.

[PTVW23] Aaron Potechin, Paxton M Turner, Prayaag Venkat, and Alexander S Wein. Near-optimal fitting of ellipsoids to random points.
In The Thirty Sixth Annual Conference on Learning Theory, pages 4235-4295. PMLR, 2023.

[PW83] K. G. Paterson and D. H. Wiedemann. The maximum number of linearly independent vectors in a Bent function. Proceedings of the National Academy of Sciences of the United States of America, 80(22):6278-6279, 1983.

[PWBM16] Amelia Perry, Alexander S Wein, Afonso S Bandeira, and Ankur Moitra. Optimality and sub-optimality of PCA for spiked random matrices and synchronization. arXiv preprint arXiv:1609.05573, 2016.

[Sau11] James Francis Saunderson. Subspace identification via convex optimization. PhD thesis, Massachusetts Institute of Technology, 2011.

[SCPW12] James Saunderson, Venkat Chandrasekaran, Pablo A Parrilo, and Alan S Willsky. Diagonal and low-rank matrix decompositions, correlation matrices, and ellipsoid fitting. SIAM Journal on Matrix Analysis and Applications, 33(4):1395-1416, 2012.

[SK75] David Sherrington and Scott Kirkpatrick. Solvable model of a spin-glass. Physical Review Letters, 35(26):1792, 1975.

[Spe85a] Joel Spencer. Six standard deviations suffice. Transactions of the American Mathematical Society, 289(2):679-706, 1985.

[Spe85b] Joel H. Spencer. Six standard deviations suffice. Transactions of the American Mathematical Society, 289:679-706, 1985.

[SPW13] James Saunderson, Pablo A Parrilo, and Alan S Willsky. Diagonal and low-rank decompositions and fitting ellipsoids to random points. In 52nd IEEE Conference on Decision and Control, pages 6031-6036. IEEE, 2013.

[TW23] Madhur Tulsiani and June Wu.
arXiv:2504.20691v1 [math.NT] 29 Apr 2025

THE RELATIVE ENTROPY OF PRIMES IN ARITHMETIC PROGRESSIONS IS REALLY SMALL

ALEX COWAN

Abstract. Fix a modulus $q$. One would expect the number of primes in each invertible residue class mod $q$ to be multinomially distributed, i.e. for each $p \bmod q$ to behave like an independent random variable uniform on $(\mathbb{Z}/q\mathbb{Z})^\times$. Using techniques from data science, we discover overwhelming evidence to the contrary: primes are much more uniformly distributed than iid uniform random variables. This phenomenon was previously unknown, and there is no clear theoretical explanation for it. To demonstrate that our test statistic of choice, the KL divergence, is indeed extreme, we prove new bounds for the left tail of the relative entropy of the uniform multinomial using the method of types.

Contents
1. Models, statistics, and percentiles
2. Example: Chebyshev's bias
3. Relative entropy
4. The statistical aspect
5. The arithmetic aspect
6. The theoretical state of affairs
Appendix A. Proof of theorem 4.2
Acknowledgements
References

1. Models, statistics, and percentiles

In a nebulous sense, it's a good rule of thumb that when prime numbers are ordered by their absolute values, their residues modulo some fixed $q$ behave like independent uniform random variables on $(\mathbb{Z}/q\mathbb{Z})^\times$. Let
$$\xi_{p,q} \sim \mathrm{Unif}\big( (\mathbb{Z}/q\mathbb{Z})^\times \big) \tag{1}$$
be iid random variables indexed by primes $p$. The sequence
$$\xi_{2,q},\ \xi_{3,q},\ \xi_{5,q},\ \ldots \tag{2}$$
is a model for
$$2 \bmod q,\ 3 \bmod q,\ 5 \bmod q,\ \ldots \tag{3}$$
How might we assess it? The aforementioned rule of thumb can be recast as saying that the sequences (3) and (2) produce similar statistics¹.
We make sense of arithmetic statistics such as
$$\begin{aligned} \#\{p < 10^8 : p \equiv 1 \bmod 8\} &= 1{,}439{,}970 \\ \#\{p < 10^8 : p \equiv 3 \bmod 8\} &= 1{,}440{,}544 \\ \#\{p < 10^8 : p \equiv 5 \bmod 8\} &= 1{,}440{,}534 \\ \#\{p < 10^8 : p \equiv 7 \bmod 8\} &= 1{,}440{,}406 \end{aligned} \tag{4}$$
by determining to what degree they resemble the statistics typical realizations of the random model (2) would produce, discovering for instance that the counts of (4) are more uniform than about 98.5% of random samples.

Date: April 30, 2025.
¹The word "statistics" has two meanings: a discipline or body of methodologies, and the plural of "statistic": a quantity derived from data. Here our meaning is the latter.

To measure the extent to which arithmetic data behaves randomly we introduce the following framework. Let $f$ be a function on sequences of the form (3) and (2), with codomain $\mathbb{R}$ for simplicity. When $f$ is evaluated on the arithmetic data (3), the result is a real number $\tau$, an empirical statistic. When $f$ is evaluated on (2), the result is instead a random variable $T$, a random statistic. The extent to which arithmetic data such as (3) behaves randomly can be assessed by estimating the percentile
$$P(T < \tau) = P\big( f(\xi_{2,q}, \xi_{3,q}, \xi_{5,q}, \ldots) < f(2 \bmod q, 3 \bmod q, 5 \bmod q, \ldots) \big), \tag{5}$$
where $P$ is short for "the probability of". For the arithmetic data (3) to masquerade as random, the percentile (5) shouldn't be too close to 0% nor 100%. We now have a tool for discovering non-random behaviour.

2. Example: Chebyshev's bias

A well-known instance in which primes mod $q$ do not behave randomly is "Chebyshev's bias" [RS94].
https://arxiv.org/abs/2504.20691v1
Let's look at what is, on the arithmetic side, "prime number races", and on the statistical side, "stopping times of random walks". As usual, let
$$\pi(x) := \#\{p \leqslant x\}, \qquad \pi(x; a \bmod q) := \#\{p \leqslant x : p \equiv a \bmod q\}.$$

Statistic 2.1. On the arithmetic side, define
$$\tau_2 := \min\{x : \pi(x; 1 \bmod 4) > \pi(x; 3 \bmod 4)\}, \qquad \tau_3 := \min\{x : \pi(x; 1 \bmod 3) > \pi(x; 2 \bmod 3)\},$$
to be compared on the statistical side with
$$T_2 := \min\Big\{ x : \sum_{\substack{p \leqslant x \\ p \ne 2}} \begin{cases} 1 & \text{if } \xi_{p,4} = 1 \\ -1 & \text{if } \xi_{p,4} = 3 \end{cases} \; > 0 \Big\}, \qquad T_3 := \min\Big\{ x : \sum_{\substack{p \leqslant x \\ p \ne 3}} \begin{cases} 1 & \text{if } \xi_{p,3} = 1 \\ -1 & \text{if } \xi_{p,3} = 2 \end{cases} \; > 0 \Big\},$$
respectively.

Our task is to estimate $P(T_2 < \tau_2)$ and $P(T_3 < \tau_3)$. Let $X_k \sim 2\,\mathrm{Ber}(\frac{1}{2}) - 1$, $k = 1, 2, \ldots$, be iid random variables that are $\pm 1$ each with probability $1/2$, and let $S_t$ denote the simple random walk $X_1 + \cdots + X_t$. The percentile $P(T_2 < \tau_2)$ can be written
$$P(T_2 < \tau_2) = 1 - \sum_{n = \pi(\tau_2) - 1}^{\infty} P\big( \min\{t : S_t = z\} = n \big)\Big|_{z=1}, \tag{6}$$
and similarly for $P(T_3 < \tau_3)$. For any $z \in \mathbb{Z}_{>0}$, the distribution of the smallest value of $t$ for which $S_t = z$ is given by
$$P\big( \min\{t : S_t = z\} = n \big) = \frac{z}{n\,2^n} \binom{n}{\frac{z+n}{2}}$$
if $z + n$ is even, and $0$ if $z + n$ is odd. Using the effective form of Stirling's formula
$$\sqrt{2\pi n}\, \frac{n^n}{e^n}\, e^{\frac{1}{12n} - \frac{1}{360 n^3}} < n! < \sqrt{2\pi n}\, \frac{n^n}{e^n}\, e^{\frac{1}{12n}},$$
valid for all $n \geqslant 2$, one finds that, for $N \geqslant 2$,
$$P\big( \min\{t : S_t = 1\} \geqslant 2N + 1 \big) = (1 + r) \sum_{n = N}^{\infty} \frac{1}{\sqrt{2\pi n (n+1)(2n+1)}} \tag{7}$$
with
$$e^{-\frac{1}{360(2N+1)^3} - \frac{5}{24} \frac{2N+1}{N(N+1)}} < 1 + r < e^{\frac{1}{360}\left( \frac{1}{N^3} + \frac{1}{(N+1)^3} \right) + \frac{1}{24}\left( \frac{1}{N^2} - \frac{1}{(N+1)^2} - \frac{2N+1}{N(N+1)} \right)}. \tag{8}$$
The series in (7) can be bounded as
$$\frac{1}{\sqrt{16\pi (N+1)}} < \sum_{n = N}^{\infty} \frac{1}{\sqrt{2\pi n (n+1)(2n+1)}} < \frac{1}{\sqrt{16\pi (N-1)}}. \tag{9}$$
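The first-passage distribution used in (6) can be checked directly against brute-force enumeration of all sign sequences; this is a sanity check we add (function names ours):

```python
from itertools import accumulate, product
from math import comb

def first_passage_pmf(n, z=1):
    """P(min{t : S_t = z} = n) = (z / (n·2^n)) · C(n, (z+n)/2), and 0 if z+n is odd."""
    if (z + n) % 2 == 1:
        return 0.0
    return (z / n) * comb(n, (z + n) // 2) / 2**n

# enumerate all 2^n sign sequences; count those whose walk first hits 1 at time n
for n in range(1, 12):
    count = 0
    for steps in product((1, -1), repeat=n):
        partial = list(accumulate(steps))
        if partial[-1] == 1 and all(s < 1 for s in partial[:-1]):
            count += 1
    assert abs(count / 2**n - first_passage_pmf(n)) < 1e-12
```

For instance, the walk can first reach 1 at time 3 only via the path $(-1, 0, 1)$, matching $\frac{1}{3 \cdot 2^3}\binom{3}{2} = \frac{1}{8}$.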
As Rubinstein and Sarnak [RS94] report,
$$\tau_2 = \min\{x : \pi(x; 1 \bmod 4) > \pi(x; 3 \bmod 4)\} = 26{,}861, \qquad \tau_3 = \min\{x : \pi(x; 1 \bmod 3) > \pi(x; 2 \bmod 3)\} = 608{,}981{,}813{,}029,$$
corresponding to $N = 1{,}472$ and $N = 11{,}669{,}295{,}395$ respectively. Combining (6) with the bounds (8) and (9),

Fact 2.2.
$$(100 - 0.3674)\% > P(T_2 < \tau_2) > (100 - 0.3678)\%,$$
$$(100 - 0.00013056980498)\% > P(T_3 < \tau_3) > (100 - 0.00013056980500)\%.$$

Fact 2.2 quantifies the qualitative observation that the first time at which primes $3 \bmod 4$ outnumber primes $1 \bmod 4$ occurs later than would be expected of unbiased random walks (it occurs later than about 99.6% of random walks), and much more so for primes $2 \bmod 3$ outnumbering primes $1 \bmod 3$ (99.9999%). By quantifying the extent to which the values of statistic 2.1 are aberrant, we obtain a concrete assessment of the extent to which primes mod $q$ behave similarly or dissimilarly to the random model (2).

3. Relative entropy

Our statistic of choice for discovering this paper's main result is the "relative entropy", also known as the Kullback-Leibler divergence or KL divergence, which we now define and motivate. Let $X_\alpha$ and $X_\beta$ be discrete random variables with probability mass functions (PMFs) $p_\alpha$ and $p_\beta$ respectively. Define the random variable
$$\log P(\hat{X}_\alpha \mid \beta) := \log P(X_\beta = X_\alpha) = \log p_\beta(X_\alpha),$$
considered as a function of the random variable $X_\alpha$. If one imagines that $\hat{X}_\alpha$ is a realization of the random variable $X_\alpha$, then $\log P(\hat{X}_\alpha \mid \beta)$ is the log of the probability that $\hat{X}_\alpha$ would have been generated by $X_\beta$. The expectation of $\log P(\hat{X}_\alpha \mid \beta)$ is
$$\mathbb{E}\big[ \log P(\hat{X}_\alpha \mid \beta) \big] = \sum_a p_\alpha(a) \log p_\beta(a) =: -H(p_\alpha, p_\beta). \tag{10}$$
The cross-entropy $H(p, q)$ is related to the relative entropy $D_{\mathrm{KL}}(p \,\|\, q)$ via [CT06, §2.3]
$$D_{\mathrm{KL}}(p \,\|\, q) := H(p, q) - H(p) = \sum_a p_\alpha(a) \log \frac{p_\alpha(a)}{p_\beta(a)}, \tag{11}$$
where $H(p) := H(p, p)$ denotes entropy. It follows from e.g. Jensen's inequality that $D_{\mathrm{KL}}(p \,\|\, q) \geqslant 0$ with equality iff $p = q$.
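Definitions (10) and (11) are a few lines of code; the following small illustration is ours (hypothetical function names):

```python
import numpy as np

def cross_entropy(p, q):
    """H(p, q) = -Σ_a p(a) log q(a); terms with p(a) = 0 contribute nothing."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return -float(np.sum(p[mask] * np.log(q[mask])))

def kl(p, q):
    """D_KL(p ‖ q) = H(p, q) - H(p) = Σ_a p(a) log(p(a)/q(a))."""
    return cross_entropy(p, q) - cross_entropy(p, p)

p = np.array([0.5, 0.25, 0.25])
u = np.array([1/3, 1/3, 1/3])
assert kl(p, u) >= 0          # Jensen's inequality: D_KL ≥ 0
assert abs(kl(p, p)) < 1e-12  # with equality iff p = q
```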
Hence, the normalization (11) is the affine transformation of the expected log-likelihood (10) for which the transformed quantity is non-negative, and actually 0 sometimes. The relevant
case for us will be the one in which $X_\beta = \mathrm{Unif}\big( (\mathbb{Z}/q\mathbb{Z})^\times \big)$ and $X_\alpha = M_x$, the empirical distribution of the tuple
$$\left( \frac{\#\{p \leqslant x : \xi_{p,q} = a_1\}}{\pi(x)},\ \frac{\#\{p \leqslant x : \xi_{p,q} = a_2\}}{\pi(x)},\ \ldots,\ \frac{\#\{p \leqslant x : \xi_{p,q} = a_{\phi(q)}\}}{\pi(x)} \right),$$
with $a_1, a_2, \ldots, a_{\phi(q)}$ ranging over $a \in (\mathbb{Z}/q\mathbb{Z})^\times$. As the $\xi_{p,q} \sim \mathrm{Unif}\big( (\mathbb{Z}/q\mathbb{Z})^\times \big)$ are independent, $M_x$ is multinomially distributed with $\pi(x)$ trials and all probabilities equal to $1/\phi(q)$, where $\phi(q) = \#(\mathbb{Z}/q\mathbb{Z})^\times$ is Euler's totient function. For $a \in (\mathbb{Z}/q\mathbb{Z})^\times$, let $M_x(a)$ denote the $a$th component of this multinomial, i.e.
$$M_x(a) = \frac{1}{\pi(x)} \sum_{p \leqslant x} \begin{cases} 1 & \text{if } \xi_{p,q} = a, \\ 0 & \text{otherwise.} \end{cases}$$

Statistic 3.1. On the arithmetic side, define
$$\tau_q := D_{\mathrm{KL}}\big( \pi(x; \cdot \bmod q) \,\big\|\, \mathrm{Unif}(\mathbb{Z}/q\mathbb{Z})^\times \big) = \sum_{a \in (\mathbb{Z}/q\mathbb{Z})^\times} \frac{\pi(x; a \bmod q)}{\pi(x)} \log\left( \frac{\pi(x; a \bmod q)}{\pi(x)/\phi(q)} \right),$$
to be compared on the statistical side with
$$T_q := D_{\mathrm{KL}}\big( M_x \,\big\|\, \mathrm{Unif}(\mathbb{Z}/q\mathbb{Z})^\times \big) = \sum_{a \in (\mathbb{Z}/q\mathbb{Z})^\times} M_x(a) \log\big( \phi(q)\, M_x(a) \big).$$
Our task is now to estimate the percentile $P(T_q < \tau_q)$. We need information about (i) the distributions of the random statistics $T_q$, and (ii) the values of the empirical statistics $\tau_q$.

4. The statistical aspect

To set our expectations, the following remark classifies all possible limiting behaviours of relative entropies of multinomials.

Remark 4.1. Let $p$ and $q$ be PMFs on some common finite set of size $k$, and let $M_n$ be multinomially distributed with $n$ trials and probabilities $p$. The strong law of large numbers implies that, as $n \to \infty$,
$$D_{\mathrm{KL}}(M_n \,\|\, q) \xrightarrow{\ \text{a.s.}\ } D_{\mathrm{KL}}(p \,\|\, q). \tag{12}$$
When $p = q$, Wilks' theorem [Wil38] states that, as $n \to \infty$,
$$2n\, D_{\mathrm{KL}}(M_n \,\|\, q) \longrightarrow \chi^2_{k-1}, \tag{13}$$
a $\chi^2$-distributed random variable with $k - 1$ degrees of freedom. Above, $\longrightarrow$ denotes convergence in distribution.
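Statistic 3.1 and the $\chi^2$ comparison of (13) can be sketched numerically. The code below is our own illustration, using $x = 10^6$ rather than the paper's $10^8$ to keep the sieve cheap; the function names and cutoff are assumptions, not from the paper's code.

```python
import numpy as np
from scipy.stats import chi2

def primes_up_to(x):
    """Simple sieve of Eratosthenes; returns all primes below x."""
    sieve = np.ones(x, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = False
    return np.flatnonzero(sieve)

def tau_q(q, x):
    """τ_q = Σ_a (π(x;a)/π(x)) · log(φ(q) · π(x;a)/π(x)) over a ∈ (Z/qZ)^×."""
    ps = primes_up_to(x)
    ps = ps[np.gcd(ps, q) == 1]                 # keep invertible residues only
    residues = [a for a in range(q) if np.gcd(a, q) == 1]
    freqs = np.array([np.mean(ps % q == a) for a in residues])
    return float(np.sum(freqs * np.log(len(residues) * freqs))), len(ps)

tau, n_primes = tau_q(8, 10**6)
# G-test style estimate via the Wilks limit (13): 2n·T_q ≈ χ² with φ(q)-1 dof
percentile = chi2.cdf(2 * n_primes * tau, df=3)
```

As the paper notes, this G-test style comparison is only an approximation; the rigorous left-tail bound requires theorem 4.2.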
Summarizing, the sum of relative entropies of $n$ iid random variables with PMF $p$, taken against $q$, is $\gg n$ with probability 1 if $p \ne q$, and otherwise, when rescaled by a factor of $2n$, converges in distribution to a $\chi^2$ random variable.

To test our hypothesis that the arithmetic data (3) bears statistical semblance to the random model (2), we evaluate for many moduli $q$ the percentiles $P(T_q < \tau_q)$, where $T_q$ and $\tau_q$ are the relative entropies given in statistic 3.1. Remark 4.1 tells us that $2\pi(x) \cdot T_q$ converges in distribution to a $\chi^2$ random variable with $\phi(q) - 1$ degrees of freedom, and that no other iid choice of the random variables (1) will do better. The forthcoming table 5.1 uncovers behaviour which qualitatively appears outlandish under our hypothesis.

Comparing to the limiting distribution (13) is known as a G-test, and is similar to Pearson's well-known $\chi^2$ goodness-of-fit test. But in order to establish rigorous bounds on the percentiles $P(T_q < \tau_q)$ an effective form of Wilks' theorem (13) is needed, specifically for the left tail. This is quite an unusual situation to find oneself in in light of remark 4.1: iid random variables can under no circumstances produce relative entropies much smaller than expected. Existing effective bounds in the literature, e.g. [Agr20, Agr22, MJT+19, JVHW17], focus on the right tail, and aren't helpful here. To rectify the situation, we prove the following technical theorem, which is practical for computation and strong enough to draw conclusions.

Theorem 4.2. Let
• $[y]$ denote the nearest integer to $y$ (with ties broken arbitrarily);
• $S$ be a finite set of size $k \geqslant 2$;
• $M_n$ be a multinomially distributed random variable with $n \geqslant 2$ trials and all probabilities equal to $1/k$;
• $\mu := n/k$;
• $\theta \in \mathbb{R}_{>0}$;
• $B := \lfloor n\sqrt{2\theta} + k\,|\mu - [\mu]| \rfloor$;
• for any $\Delta \in \mathbb{Z}$, $0 < \Delta < [\mu]$,
$$c_\Delta := \frac{1}{2} + \frac{[\mu]}{\Delta} + \frac{[\mu]^2}{\Delta^2} \log\left( 1 - \frac{\Delta}{[\mu]} \right).$$
For all $\theta$ such that $0 < B < [\mu]$ and $2 c_B > -\frac{2[\mu] - 1}{2[\mu] + 1}$,
$$P\Big( D_{\mathrm{KL}}\big( M_n \,\|\, \mathrm{Unif}(S) \big) \leqslant \theta \Big) < \frac{\sqrt{2\pi n}\, \mu^n}{(2\pi [\mu])^{\frac{k}{2}}\, [\mu]^n}\, e^{\frac{1}{12n} + k([\mu] - \mu)\left( 1 + \frac{1}{2[\mu]} \right)} \sum_{\Delta = 1}^{B} \exp\left[ \frac{\frac{1}{2} - c_\Delta}{[\mu]^2}\, \Delta^3 - \left( \frac{1}{2} + \frac{c_\Delta}{[\mu]} - \frac{\frac{1}{2} - c_\Delta}{2[\mu]^2} \right) \Delta \right] \sum_{r=1}^{k} \binom{k}{r} \binom{\Delta - 1}{r - 1} 2^r \; + \; \frac{1}{k^n} \begin{cases} \frac{n!}{(\mu!)^k} & \text{if } \mu \in \mathbb{Z}, \\ 0 & \text{otherwise.} \end{cases}$$
The proof of theorem 4.2 is relegated to appendix A.

5. The arithmetic aspect

We're now ready to assess the random model (2) by measuring the relative entropy as given in statistic 3.1. The empirical statistics $\tau_q$ are straightforward to compute when $x = 10^8$, which is easily large enough for theorem 4.2's bound to reveal that the percentiles $P(T_q < \tau_q)$ are aberrant. Table 5.1, which also includes scipy's estimate of $P(T_q < \tau_q)$, summarizes the situation.

                     P(T_q < τ_q)               |                      P(T_q < τ_q)
 q   τ_q         Theorem 4.2     scipy          |  q   τ_q         Theorem 4.2     scipy
 3   2.66·10^-9  ≤ 13.99%        13.89%         | 22   7.08·10^-8  ≤ 2.76·10^-1%   2.42·10^-2%
 4   3.00·10^-9  ≤ 14.86%        14.74%         | 23   1.12·10^-7  ≤ 2.06·10^-5%   4.54·10^-8%
 5   3.95·10^-9  ≤ 4.18·10^-1%   2.55·10^-1%    | 24   2.39·10^-8  ≤ 3.90·10^-2%   7.45·10^-3%
 6   2.64·10^-9  ≤ 13.96%        13.86%         | 25   1.00·10^-7  ≤ 6.54·10^-5%   2.78·10^-7%
 7   2.41·10^-8  ≤ 5.87·10^-1%   1.96·10^-1%    | 26   5.17·10^-8  ≤ 6.20·10^-3%   3.47·10^-4%
 8   1.32·10^-8  ≤ 2.52%         1.50%          | 27   9.58·10^-8  ≤ 4.18·10^-4%   3.28·10^-6%
 9   1.02·10^-8  ≤ 6.87·10^-2%   2.41·10^-2%    | 28   7.28·10^-8  ≤ 4.10·10^-2%   2.05·10^-3%
10   3.93·10^-9  ≤ 4.11·10^-1%   2.53·10^-1%    | 29   1.90·10^-7  ≤ 2.79·10^-5%   5.48·10^-9%
11   7.09·10^-8  ≤ 2.77·10^-1%   2.43·10^-2%    | 30   1.53·10^-8  ≤ 8.36·10^-3%   1.64·10^-3%
12   9.47·10^-9  ≤ 1.53%         9.28·10^-1%    | 31   1.90·10^-7  ≤ 3.96·10^-6%   3.86·10^-10%
13   5.18·10^-8  ≤ 6.23·10^-3%   3.48·10^-4%    | 32   8.54·10^-8  ≤ 1.50·10^-3%   2.26·10^-5%
14   2.42·10^-8  ≤ 5.90·10^-1%   1.97·10^-1%    | 33   9.37·10^-8  ≤ 3.49·10^-5%   1.55·10^-7%
15   1.53·10^-8  ≤ 8.31·10^-3%   1.65·10^-3%    | 34   7.71·10^-8  ≤ 7.02·10^-4%   1.10·10^-5%
16   3.79·10^-8  ≤ 1.94·10^-1%   3.52·10^-2%    | 35   1.03·10^-7  ≤ 8.05·10^-7%   1.01·10^-9%
17   7.72·10^-8  ≤ 6.98·10^-4%   1.10·10^-5%    | 36   3.57·10^-8  ≤ 8.01·10^-4%   4.84·10^-5%
18   1.02·10^-8  ≤ 6.83·10^-2%   2.40·10^-2%    | 37   2.60·10^-7  ≤ 3.27·10^-6%   1.92·10^-11%
19   1.01·10^-7  ≤ 6.52·10^-4%   4.97·10^-6%    | 38   1.01·10^-7  ≤ 6.48·10^-4%   4.96·10^-6%
20   1.13·10^-8  ≤ 2.84·10^-3%   5.68·10^-4%    | 39   1.16·10^-7  ≤ 3.52·10^-6%   4.01·10^-9%
21   4.25·10^-8  ≤ 2.09·10^-3%   1.23·10^-4%    | 40   5.01·10^-8  ≤ 2.71·10^-5%   4.94·10^-7%

Table 5.1. For $x = 10^8$ and $\tau_q$, $T_q$ as in statistic 3.1. Code is available at [Cow25].

It's clear from table 5.1 that the sequence (3) of primes mod $q$ does not behave like a typical realization of the random model (2). Specifically, the relative entropies are much smaller than they would be for iid uniform random variables.

This phenomenon is distinct from Chebyshev's bias discussed in section 2. In light of remark 4.1, biases away from uniform in the distribution of primes mod $q$ lead to larger relative entropies. The fact that, despite Chebyshev's bias, the relative entropies are still too small can be understood as saying that whatever is causing this behaviour is an effect which is more impactful than Chebyshev's bias, at least in the range of $x, q$ tabulated here. Remark 4.1 also explains that atypically small relative entropies are impossible for iid random variables, i.e. any model of the primes which elucidates the results of table 5.1 must capture that their residues mod $q$ are not independent of one another.

6. The theoretical state of affairs

Theoretical results can lend credence to the notion that (3) and (2) ought to be statistically similar. For instance, the strong law of large numbers [Dur19, Thm.
2.4.1]
\[
\frac{\#\{p\leqslant x : \xi_{p,q}=a\}}{\pi(x)} \xrightarrow{\ \mathrm{a.s.}\ } \frac{1}{\varphi(q)} \tag{14}
\]
is mirrored in the arithmetic by Dirichlet's theorem for primes in arithmetic progressions [MV07, Cor. 11.19]
\[
\frac{\pi(x;a\bmod q)}{\pi(x)} \sim \frac{1}{\varphi(q)}. \tag{15}
\]
Furthering the analogy, the exponents in the rates of convergence in (14) and (15) match: the consequence of the Berry–Esseen effective central limit theorem [Dur19, Thms. 3.4.1 and 3.4.17]
\[
\left|\frac{\#\{p\leqslant x : \xi_{p,q}=a\}}{\pi(x)}-\frac{1}{\varphi(q)}\right| \ll \pi(x)^{-\frac12+\varepsilon},
\qquad
\left|\frac{\#\{p\leqslant x : \xi_{p,q}=a\}}{\pi(x)}-\frac{1}{\varphi(q)}\right| \gg \pi(x)^{-\frac12-\varepsilon} \text{ infinitely often}
\]
is, on the arithmetic side, the consequence of the Riemann Hypothesis for Dirichlet L-functions [MV07, Cor. 13.8, §15]
\[
\left|\frac{\pi(x;a\bmod q)}{\pi(x)}-\frac{1}{\varphi(q)}\right| \ll \pi(x)^{-\frac12+\varepsilon},
\qquad
\left|\frac{\pi(x;a\bmod q)}{\pi(x)}-\frac{1}{\varphi(q)}\right| \gg \pi(x)^{-\frac12-\varepsilon} \text{ infinitely often}.
\]
Models like (2) naturally lead one to conjectures: absent evidence to the contrary, one may guess that statements about the random model true 100% of the time should be
https://arxiv.org/abs/2504.20691v1
true for primes mod q as well. For example, Hooley [Hoo76] conjectures that as x ⩾ q → ∞ jointly in any manner,
\[
\operatorname{Var}\bigl[\pi(x;\,\cdot\bmod q)\bigr] \sim x\log q. \tag{16}
\]
Various results support this conjecture in the regime q ≫ (log log x)^{1+ε} [Fio15], and recently the conjecture was shown to be false in the complementary regime q ≪ log log x [FM23]. It's likely no coincidence that this precisely mirrors the behaviour of the random model (2) [Dur19, §8.5], making this outcome predictable.

The results of table 5.1 are not similarly predictable: Hooley's conjecture (16) predicts variance among the values of π(x; a mod q) that is too small for the iid random variables (2), and was proven to be false by establishing a lower bound on said variance. In contrast, the statistics tabulated above, rooted in empirical fact, show variance among those same values π(x; a mod q) implausibly small under the random model (2)! It appears that, while (16) is false, the general notion is correct: primes mod q vary less than iid random variables would. Though much work has been done on lower bounds (see [dlBF23] for a survey), upper bounds elucidating table 5.1's non-random behaviour are lacking. As noted, these would necessarily demonstrate interdependence between primes mod q.

Appendix A. Proof of theorem 4.2

Our proof of theorem 4.2 proceeds via the "method of types" [CT06, §11]. Given a finite set S = {a_1, ..., a_k} of cardinality #S = k and a finite sequence x = (x_1, x_2, ..., x_n) ∈ S^n of length n whose entries are elements of S, let p_x denote the empirical distribution on S yielded by x, i.e.
\[
p_x := \left(\frac{\#\{\ell : x_\ell = a_1\}}{n}, \ldots, \frac{\#\{\ell : x_\ell = a_k\}}{n}\right).
\]
In the opposite direction, given a probability distribution p on S whose entries are rational numbers with denominator n, the set T = T_p of all sequences x of length n for which p_x = p is called the type class of p. Write T(a) := np(a) for a ∈ S.
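The objects just defined are easy to experiment with. The following Python sketch (illustrative only, not the code of [Cow25]; the toy alphabet and sequences are our own) computes the empirical distribution p_x and the relative entropy D_KL(p_x ‖ Unif(S)), and checks that permuting a sequence leaves its type class, and hence these statistics, unchanged:

```python
from collections import Counter
from math import log

def empirical_distribution(x, S):
    """Empirical distribution p_x of the sequence x over the alphabet S."""
    counts = Counter(x)
    n = len(x)
    return {a: counts.get(a, 0) / n for a in S}

def kl_to_uniform(p, k):
    """D_KL(p || Unif(S)) for a distribution p on an alphabet of size k."""
    return sum(pa * log(pa * k) for pa in p.values() if pa > 0)

# Two sequences in the same type class have identical empirical statistics:
S = [0, 1, 2]
x1 = [0, 0, 1, 2, 1, 0]
x2 = [1, 0, 0, 0, 2, 1]   # a permutation of x1, hence the same type class
p1 = empirical_distribution(x1, S)
p2 = empirical_distribution(x2, S)
assert p1 == p2
```

A uniform distribution has relative entropy exactly 0, and any deviation from uniform makes `kl_to_uniform` strictly positive, which is the quantity the statistics T_q above are built from.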
Certain statistics of sequences x ∈ S^n, including the relative entropy D_KL(x ‖ q) for fixed q, depend only on the type class of p_x. Ergo the study of the distribution of these statistics for x ∼ Unif(S^n) can be reduced to combinatorics involving type classes.

THE RELATIVE ENTROPY OF PRIMES IN ARITHMETIC PROGRESSIONS IS REALLY SMALL

The first step in proving theorem 4.2 invokes Pinsker's inequality
\[
\frac12\left(\sum_{a\in S}|p(a)-q(a)|\right)^{2} \leqslant D_{\mathrm{KL}}(p\,\|\,q). \tag{17}
\]
With (17), the question is transformed into one about L^1 norms of type classes, which is more amenable to combinatorial analysis. The necessary bound on cardinalities of type classes is given in lemma A.1, while the weights which appear, multinomial coefficients, are estimated using an effective form of Stirling's formula and a polynomial bound on log(1+x) coming from a series expansion.

Proof of theorem 4.2. Pinsker's inequality (17) implies that, for x ∼ Unif(S^n),
\[
\mathbb{P}\bigl(D_{\mathrm{KL}}(p_x\,\|\,\mathrm{Unif}(S))\leqslant\theta\bigr) \leqslant \mathbb{P}\left(\sum_{a\in S}\Bigl|p_x(a)-\frac1k\Bigr|\leqslant\sqrt{2\theta}\right). \tag{18}
\]
Moving to type classes, let
\[
\mathcal{T} := \{\text{type classes } T : T = T_{p_x} \text{ for some } x\in S^n\}
= \Bigl\{T=(T(a_1),\ldots,T(a_k)) : T(a)\in\mathbb{Z}_{\geqslant0},\ \sum_{a\in S}T(a)=n\Bigr\}.
\]
The probability distribution on \mathcal{T} induced by x ∼ Unif(S^n) is the uniform multinomial
\[
\mathbb{P}\bigl(T_{p_x}=T \mid x\sim\mathrm{Unif}(S^n)\bigr) = \frac{1}{k^{n}}\binom{n}{T(a_1),\ldots,T(a_k)} = \frac{1}{k^{n}}\,\frac{n!}{T(a_1)!\cdots T(a_k)!}. \tag{19}
\]
Let P_\mathcal{T} denote probabilities under the distribution (19), and set µ := n/k. Combining (19) with (18),
\[
\mathbb{P}\bigl(D_{\mathrm{KL}}(p_x\,\|\,\mathrm{Unif}(S))\leqslant\theta\bigr) \leqslant P_{\mathcal{T}}\Bigl(\sum_{a\in S}|T(a)-\mu|\leqslant n\sqrt{2\theta}\Bigr).
\]
(20)
The bound
\[
\sum_{a\in S}|T(a)-\mu|\leqslant n\sqrt{2\theta} \implies \sum_{a\in S}|T(a)-[\mu]|\leqslant n\sqrt{2\theta}+k|\mu-[\mu]|
\]
applied to the right hand side of (20) gives
\[
P_{\mathcal{T}}\Bigl(\sum_{a\in S}|T(a)-\mu|\leqslant n\sqrt{2\theta}\Bigr)
\]
\[
\leqslant P_{\mathcal{T}}\Bigl(\sum_{a\in S}|T(a)-[\mu]|\leqslant n\sqrt{2\theta}+k|\mu-[\mu]|\Bigr). \tag{21}
\]
Set B = ⌊n√(2θ) + k|µ − [µ]|⌋. Via (21), our objective is reduced to enumerating type classes with L^1 norm at most B, weighted by the right hand side of (19). Define
\[
d_j := T(a_j)-[\mu], \qquad
\mathcal{T}_\Delta := \Bigl\{T : \sum_{j=1}^{k}|d_j| = \Delta\Bigr\}, \qquad
\mathcal{T}_{\leqslant B} := \bigcup_{\Delta=0}^{B}\mathcal{T}_\Delta = \Bigl\{T : \sum_{a\in S}|T(a)-[\mu]|\leqslant B\Bigr\}. \tag{22}
\]
Combining (21), (20), and (19), the probability on the left hand side of (20), which we're ultimately looking to bound, is in turn bounded via
\[
k^{n}\,\mathbb{P}\bigl(D_{\mathrm{KL}}(p_x\,\|\,\mathrm{Unif}(S))\leqslant\theta\bigr) \leqslant \sum_{T\in\mathcal{T}_{\leqslant B}}\frac{n!}{T(a_1)!\cdots T(a_k)!}. \tag{23}
\]
The bound we obtain for (23)'s right hand side, divided by k^n, will be theorem 4.2's right hand side.

Let's separate the set of type classes \mathcal{T}_{\leqslant B} into the union (22), each piece to be bounded separately.
\[
\sum_{T\in\mathcal{T}_{\leqslant B}}\frac{n!}{T(a_1)!\cdots T(a_k)!}
= \sum_{T\in\mathcal{T}_{\leqslant B}}\frac{n!}{\prod_{j=1}^{k}(d_j+[\mu])!}
= \sum_{\Delta=0}^{B}\sum_{T\in\mathcal{T}_\Delta}\frac{n!}{\prod_{j=1}^{k}(d_j+[\mu])!}
= \sum_{\Delta=1}^{B}\sum_{T\in\mathcal{T}_\Delta}\frac{n!}{\prod_{j=1}^{k}(d_j+[\mu])!}
+ \begin{cases}\dfrac{n!}{(\mu!)^{k}} & \text{if }\mu\in\mathbb{Z},\\ 0 & \text{otherwise.}\end{cases} \tag{24}
\]
Applying the effective version of Stirling's formula
\[
\sqrt{2\pi n}\,\frac{n^{n}}{e^{n}} < n! < \sqrt{2\pi n}\,\frac{n^{n}}{e^{n}}\,e^{\frac{1}{12n}},
\]
valid for all n ⩾ 1, to each summand of (24) yields
\[
\sum_{\Delta=1}^{B}\sum_{T\in\mathcal{T}_\Delta}\frac{n!}{\prod_{j=1}^{k}(d_j+[\mu])!}
< \sum_{\Delta=1}^{B}\sum_{T\in\mathcal{T}_\Delta}\exp\!\Bigl[n(\log n-1)+\log\sqrt{2\pi}+\tfrac{1}{12n}+\tfrac12\log n
- \sum_{j=1}^{k}\Bigl((d_j+[\mu])(\log(d_j+[\mu])-1)+\log\sqrt{2\pi}+\tfrac12\log(d_j+[\mu])\Bigr)\Bigr]. \tag{25}
\]
Rewriting the relation \sum_{a\in S}T(a)=n as
\[
\exp\Bigl(-n+\sum_{j=1}^{k}(d_j+[\mu])\Bigr) = 1,
\]
the upper bound (25) becomes
\[
\sum_{\Delta=1}^{B}\sum_{T\in\mathcal{T}_\Delta}\frac{n!}
{\prod_{j=1}^{k}(d_j+[\mu])!}
< \sum_{\Delta=1}^{B}\sum_{T\in\mathcal{T}_\Delta}(2\pi)^{-\frac{k-1}{2}}e^{\frac{1}{12n}}\exp\!\Bigl[\bigl(n+\tfrac12\bigr)\log n-\sum_{j=1}^{k}\bigl(d_j+[\mu]+\tfrac12\bigr)\log(d_j+[\mu])\Bigr]
< \frac{\sqrt{2\pi n}\,n^{n}e^{\frac{1}{12n}}}{(2\pi)^{k/2}}\sum_{\Delta=1}^{B}\sum_{T\in\mathcal{T}_\Delta}\exp\!\Bigl[-\sum_{j=1}^{k}\bigl(d_j+[\mu]+\tfrac12\bigr)\log(d_j+[\mu])\Bigr]. \tag{26}
\]
Let's focus on the term -\sum_{j=1}^{k}([\mu]+\tfrac12)\log(d_j+[\mu]) in the exponent of (26). For any 0 < R < 1, define
\[
c := \bigl(R+\tfrac12R^{2}+\log(1-R)\bigr)R^{-2}.
\]
For all |x| < R,
\[
\log(1+x) \geqslant x-\bigl(\tfrac12-c\bigr)x^{2}.
\]
In the special case R = ∆/[µ], which is less than 1 by assumption, the value of c is
\[
c_\Delta := \tfrac12+\frac{[\mu]}{\Delta}+\frac{[\mu]^{2}}{\Delta^{2}}\log\Bigl(1-\frac{\Delta}{[\mu]}\Bigr).
\]
As |d_j| ⩽ ∆ by definition of \mathcal{T}_\Delta,
\[
\sum_{j=1}^{k}\log(d_j+[\mu]) = k\log[\mu]+\sum_{j=1}^{k}\log\Bigl(1+\frac{d_j}{[\mu]}\Bigr)
\geqslant k\log[\mu]+\sum_{j=1}^{k}\Bigl(\frac{d_j}{[\mu]}-\frac{\tfrac12-c_\Delta}{[\mu]^{2}}d_j^{2}\Bigr)
\geqslant k\log[\mu]+\frac{n-k[\mu]}{[\mu]}-\frac{\tfrac12-c_\Delta}{[\mu]^{2}}\sum_{j=1}^{k}d_j^{2}.
\]
Substituting into (26),
\[
\sum_{\Delta=1}^{B}\sum_{T\in\mathcal{T}_\Delta}\frac{n!}
{\prod_{j=1}^{k}(d_j+[\mu])!}
< \frac{\sqrt{2\pi n}\,n^{n}e^{\frac{1}{12n}}}{(2\pi)^{k/2}}\cdot\exp\Bigl(-k\bigl([\mu]+\tfrac12\bigr)\Bigl(\log[\mu]+\frac{\mu-[\mu]}{[\mu]}\Bigr)\Bigr)
\cdot\sum_{\Delta=1}^{B}\sum_{T\in\mathcal{T}_\Delta}\exp\!\Bigl[-\sum_{j=1}^{k}d_j\log(d_j+[\mu])+\bigl(\tfrac12-c_\Delta\bigr)\Bigl(\frac{1}{[\mu]}+\frac{1}{2[\mu]^{2}}\Bigr)\sum_{j=1}^{k}d_j^{2}\Bigr]
\]
\[
< \frac{\sqrt{2\pi n}\,n^{n}e^{\frac{1}{12n}}}{(2\pi[\mu])^{k/2}[\mu]^{k[\mu]}}\cdot\exp\Bigl(k\Bigl(1+\frac{1}{2[\mu]}\Bigr)([\mu]-\mu)\Bigr)
\cdot\sum_{\Delta=1}^{B}\sum_{T\in\mathcal{T}_\Delta}\exp\!\Bigl[-\sum_{j=1}^{k}d_j\log(d_j+[\mu])+\bigl(\tfrac12-c_\Delta\bigr)\Bigl(\frac{1}{[\mu]}+\frac{1}{2[\mu]^{2}}\Bigr)\sum_{j=1}^{k}d_j^{2}\Bigr]. \tag{27}
\]
Looking at the exponent of (27),
\[
-\sum_{j=1}^{k}d_j\log(d_j+[\mu])+\bigl(\tfrac12-c_\Delta\bigr)\Bigl(\frac{1}{[\mu]}+\frac{1}{2[\mu]^{2}}\Bigr)\sum_{j=1}^{k}d_j^{2}
= -\sum_{j=1}^{k}d_j\Bigl(\log[\mu]+\log\Bigl(1+\frac{d_j}{[\mu]}\Bigr)\Bigr)+\bigl(\tfrac12-c_\Delta\bigr)\Bigl(\frac{1}{[\mu]}+\frac{1}{2[\mu]^{2}}\Bigr)\sum_{j=1}^{k}d_j^{2}
\]
\[
\leqslant -(n-k[\mu])\log[\mu]-\sum_{j=1}^{k}d_j\Bigl(\frac{d_j}{[\mu]}-\frac{\tfrac12-c_\Delta}{[\mu]^{2}}d_j^{2}\Bigr)+\bigl(\tfrac12-c_\Delta\bigr)\Bigl(\frac{1}{[\mu]}+\frac{1}{2[\mu]^{2}}\Bigr)\sum_{j=1}^{k}d_j^{2}
\]
\[
\leqslant -k(\mu-[\mu])\log[\mu]+\frac{\tfrac12-c_\Delta}{[\mu]^{2}}\sum_{j=1}^{k}d_j^{3}-\Bigl(\frac{\tfrac12+c_\Delta}{[\mu]}-\frac{\tfrac12-c_\Delta}{2[\mu]^{2}}\Bigr)\sum_{j=1}^{k}d_j^{2}. \tag{28}
\]
The assumption
\[
c_B > -\frac12\,\frac{1-\frac{1}{2[\mu]}}{1+\frac{1}{2[\mu]}}
\]
implies
\[
\frac{\tfrac12-c_\Delta}{2[\mu]^{2}}-\frac{\tfrac12+c_\Delta}{[\mu]} < 0.
\]
In conjunction with the facts
\[
\sum_{j=1}^{k}d_j^{3} \leqslant \Delta^{3} \quad\text{and}\quad \sum_{j=1}^{k}d_j^{2} \geqslant \Delta,
\]
the bound (28) can be loosened to
\[
-\sum_{j=1}^{k}d_j\log(d_j+[\mu])+\bigl(\tfrac12-c_\Delta\bigr)\Bigl(\frac{1}{[\mu]}+\frac{1}{2[\mu]^{2}}\Bigr)\sum_{j=1}^{k}d_j^{2}
\leqslant -k(\mu-[\mu])\log[\mu]+\frac{\tfrac12-c_\Delta}{[\mu]^{2}}\Delta^{3}-\Bigl(\frac{\tfrac12+c_\Delta}{[\mu]}-\frac{\tfrac12-c_\Delta}{2[\mu]^{2}}\Bigr)\Delta. \tag{29}
\]
The following combinatorial lemma lets us handle the sum over T ∈ \mathcal{T}_\Delta featuring in (27).

Lemma A.1.
For ∆ ⩾ 1,
\[
\#\mathcal{T}_\Delta \leqslant \sum_{r=1}^{k}\binom{k}{r}\binom{\Delta-1}{r-1}2^{r}.
\]
Proof. Suppose there are exactly r values of j for which d_j ≠ 0. As ∆ ⩾ 1, we have 1 ⩽ r ⩽ k. Each case is mutually exclusive, so we count them separately and sum. The number of ways to distribute m balls among ℓ bins is \binom{m+\ell-1}{\ell-1}, from which it can be deduced that the number of ways to write ∆ as a sum of r strictly positive integers is \binom{\Delta-1}{r-1}. Multiplying by \binom{k}{r} to reflect the possible choices of precisely which r values of j have d_j ≠ 0, multiplying by 2^r to reflect the possible sign choices, and summing over 1 ⩽ r ⩽ k yields lemma A.1. □

Applying (29) and lemma A.1 to (27), and then substituting into (24) gives
\[
\sum_{T\in\mathcal{T}_{\leqslant B}}\frac{n!}{T(a_1)!\cdots T(a_k)!}
< \frac{\sqrt{2\pi n}\,n^{n}e^{\frac{1}{12n}}}{(2\pi[\mu])^{k/2}[\mu]^{n}}\cdot\exp\Bigl(k\Bigl(1+\frac{1}{2[\mu]}\Bigr)([\mu]-\mu)\Bigr)
\cdot\sum_{\Delta=1}^{B}\exp\!\Bigl[\frac{\tfrac12-c_\Delta}{[\mu]^{2}}\Delta^{3}-\Bigl(\frac{\tfrac12+c_\Delta}{[\mu]}-\frac{\tfrac12-c_\Delta}{2[\mu]^{2}}\Bigr)\Delta\Bigr]\sum_{r=1}^{k}\binom{k}{r}\binom{\Delta-1}{r-1}2^{r}
+ \begin{cases}\dfrac{n!}{(\mu!)^{k}} & \text{if }\mu\in\mathbb{Z},\\ 0 & \text{otherwise.}\end{cases}
\]
Dividing by the total number of sequences k^n to match (23) yields theorem 4.2. □

Acknowledgements

We thank Noam Elkies, Kimball Martin, Michael Rubinstein, Kannan Soundararajan, Drew Sutherland, and Jerry Wang for helpful discussions.

References

[Agr20] Rohit Agrawal, Finite-sample concentration of the multinomial in relative entropy, IEEE Trans. Inform. Theory 66 (2020), no. 10, 6297–6302.
[Agr22] Rohit Agrawal, Finite-sample concentration of the empirical relative entropy around its mean, arXiv 2203.00800, 2022.
[Cow25] Alex Cowan, reallysmall, https://github.com/thealexcowan/reallysmall, April 2025.
[CT06] Thomas M. Cover and Joy A. Thomas, Elements of information theory, second ed., Wiley-Interscience [John Wiley & Sons], Hoboken, NJ, 2006.
[dlBF23] R. de la Bretèche and D. Fiorilli, Moments of moments of primes in arithmetic progressions, Proc. Lond. Math. Soc. (3) 127 (2023), no. 1, 165–220.
[Dur19] Rick Durrett, Probability—theory and examples, fifth ed., Cambridge Series in Statistical and Probabilistic Mathematics, vol. 49, Cambridge University Press, Cambridge, 2019.
[Fio15] Daniel Fiorilli, The distribution of the variance of primes in arithmetic progressions, Int. Math. Res. Not. IMRN (2015), no. 12, 4421–4448.
[FM23] Daniel Fiorilli and Greg Martin, Disproving Hooley's conjecture, J. Eur. Math. Soc. (JEMS) 25 (2023), no. 12, 4791–4812.
[Hoo76] C. Hooley, On the Barban-Davenport-Halberstam theorem. V, Proc. London Math. Soc. (3) 33 (1976), no. 3, 535–548.
[JVHW17] Jiantao Jiao, Kartik Venkat, Yanjun Han, and Tsachy Weissman, Maximum likelihood estimation of functionals of discrete distributions, IEEE Transactions on Information Theory 63 (2017), no. 10, 6774–6798.
[MJT+19] Jay Mardia, Jiantao Jiao, Ervin Tánczos, Robert D. Nowak, and Tsachy Weissman, Concentration inequalities for the empirical distribution of discrete distributions: beyond the method of types, Information and Inference: A Journal of the IMA 9 (2019), no. 4, 813–850.
[MV07] Hugh L. Montgomery and Robert C. Vaughan, Multiplicative number theory. I. Classical theory, Cambridge Studies in Advanced Mathematics, vol. 97, Cambridge University Press, Cambridge, 2007.
[RS94] Michael Rubinstein and Peter Sarnak, Chebyshev's bias, Experiment. Math. 3 (1994), no. 3, 173–197.
[Wil38] S. S. Wilks, The large-sample distribution of the likelihood ratio for testing composite hypotheses, The Annals of Mathematical Statistics 9 (1938), no. 1, 60–62.

Department of Mathematics, University of Waterloo, Waterloo, ON, Canada
Email address: alex.cowan@uwaterloo.ca
arXiv:2504.21046v1 [stat.ME] 28 Apr 2025

Statistical Comparison of Hidden Markov Models via Fragment Analysis

Carlos M. Hernandez-Suarez* and Osval A. Montesinos-López†

May 1, 2025

Abstract

Standard practice in Hidden Markov Model (HMM) selection favors the candidate with the highest full-sequence likelihood, although this is equivalent to making a decision based on a single realization. We introduce a fragment-based framework that redefines model selection as a formal statistical comparison. For an unknown true model HMM_0 and a candidate HMM_j, let µ_j(r) denote the probability that HMM_j and HMM_0 generate the same sequence of length r. We show that if HMM_i is closer to HMM_0 than HMM_j, there exists a threshold r*, often small, such that µ_i(r) > µ_j(r) for all r ≥ r*. Sampling k independent fragments yields unbiased estimators µ̂_j(r) whose differences are asymptotically normal, enabling a straightforward Z-test for the hypothesis H_0: µ_i(r) = µ_j(r). By evaluating only short subsequences, the procedure circumvents full-sequence likelihood computation and provides valid p-values for model comparison.

Keywords: Hidden Markov models; fragment sampling; large-scale model selection; efficient algorithms; Markov processes.

1 Introduction

Hidden Markov Models (HMMs) are a cornerstone of time-series modeling, underpinning applications from speech recognition (Rabiner, 1989) to bioinformatics and finance. However, selecting an appropriate model among several competing HMMs becomes difficult when the observed sequence is long. Standard criteria such as the Akaike Information Criterion (AIC) (Akaike, 1974) or the Bayesian Information Criterion (BIC) (Schwarz, 1978) rely on exact likelihood calculations of large data sequences, which are both numerically unstable and computationally expensive.

Several proposals seek to mitigate these numerical issues, for instance by normalizing probability vectors or employing logarithmic transformations within the forward-backward algorithm (Rabiner, 1989).
Yet, full-sequence likelihoods can become prohibitively expensive for tens of thousands (or more) observations.

*Coordinación General de Investigación Científica, Universidad de Colima, Colima, México. cmh1@cornell.edu
†Facultad de Telemática, Universidad de Colima, Colima, México. oamontes1@ucol.mx

Approximate Bayesian Computation (Beaumont et al., 2002; Marin et al., 2012) and spectral approaches (Hsu et al., 2012) circumvent explicit likelihood computation by leveraging summary statistics or matrix factorizations. Stochastic Variational Inference (Hoffman et al., 2013) further shows promise for parameter estimation on massive sequences, but direct comparison across candidate HMMs remains problematic.

To tackle these challenges, Hernandez-Suarez and Montesinos-López (2024) introduce a fragment-based or 'likelihood-free' framework to compare HMMs without computing their full-sequence likelihoods. The primary idea is: (i) randomly sample short subsequences (fragments) of length r from the data; (ii) evaluate their likelihood under each candidate HMM; and (iii) use standard statistical tools to compare these likelihoods. This approach drastically reduces the computational overhead of typical forward-backward methods over very large sequences. Moreover, it naturally accommodates different numbers of hidden states, which can invalidate standard likelihood ratio tests (Giudici et al., 2000) that require nested models.

Here, we formally present the mathematical foundation of the fragment-based metric described in (Hernandez-Suarez and Montesinos-López, 2024). We define µ_j(r) as the expected likelihood under model j for a randomly selected fragment of length r generated by the unknown true model, HMM_0. Under suitable conditions, we show that µ_j(r) can serve as a reliable surrogate for the full-sequence likelihood
in large-scale HMM comparisons. Additionally, we describe a straightforward procedure for constructing candidate HMMs that replicate the observed-state transition frequency matrix derived from data, thereby ensuring consistency in the short-fragment regime.

2 Notation and Preliminaries

We consider a family of Hidden Markov Models (HMMs), each with N hidden states and K observable states (or symbols). For a generic model HMM_j, let
• P_j ∈ R^{N×N} denote its hidden-state transition matrix (row-stochastic),
• S_j ∈ R^{N×K} denote its emission matrix (rows summing to 1),
• π_j ∈ R^N denote the stationary distribution of P_j, satisfying π_j^⊤ P_j = π_j^⊤.
Let X_n = (x_1, ..., x_n) denote the hidden-state chain and Y_n = (y_1, ..., y_n) the observed sequence, each y_t ∈ {1, ..., K}. We assume the data Y_n arise from an unknown true model HMM_0 with associated matrices P_0, S_0, and π_0.

We define a fragment of length r as any contiguous subsequence (y_ℓ, ..., y_{ℓ+r−1}) of Y_n. Since K^r is the total number of distinct fragments of length r, we may index the possible fragments by s_i, i = 1, ..., K^r, and denote by l_j(s_i) the likelihood of s_i under HMM_j.

3 A New Fragment-Based Metric

3.1 Definition of µ_j(r)

We are interested in comparing models by their expected likelihood to generate a random fragment of length r. When Y_n is truly generated by HMM_0, each fragment s_i of length r has probability l_0(s_i). Thus the expected fragment likelihood under HMM_j is
µ_j(r) = Σ_{i=1}^{K^r} l_0(s_i) l_j(s_i),   (1)
which is also the probability that two length-r sequences, each independently generated by HMM_0 and HMM_j, respectively, coincide exactly. Because l_0(s_i) is the pmf for the s_i-indexed random fragment, we have µ_j(r) = E[L_j(r)], where L_j(r) is the likelihood of a randomly drawn fragment of size r.

An equivalent matrix form uses the so-called forward factorization:
l_j(s_i) = π_j^⊤ M_j(y_1) M_j(y_2) ··· M_j(y_r) 1,   (2)
with M_j(m) = P_j Diag(S_j(·, m)). Summing over all K^r fragments, the Kronecker product gives a compact form for µ_j(r).
Specifically,
µ_j(r) = (π_0 ⊗ π_j)^⊤ ( Σ_{i=1}^{K} M_0(i) ⊗ M_j(i) )^r 1 = (π_0 ⊗ π_j)^⊤ W_j^r 1,   (3)
where
W_j = (P_0 ⊗ P_j) · Diag(vec(S_j S_0^⊤)).   (4)

4 Testing Based on Sampled Fragments

Given two candidates HMM_1 and HMM_2, we aim to compare µ_1(r) vs. µ_2(r). Generally, we do not know HMM_0, so we draw k fragments {s_1, ..., s_k} of length r from the observed sequence Y_n. For each s_i, compute l_1(s_i) and l_2(s_i), then set d_i = l_1(s_i) − l_2(s_i). By the Central Limit Theorem (CLT), under mild conditions,
d̄ = (1/k) Σ_{i=1}^{k} d_i  →_d  N( µ_1(r) − µ_2(r), (1/k) σ_{12}^2(r) ),   (5)
where
σ_{12}^2(r) = µ²_{12}(r) − (µ_{12}(r))^2,  with µ_{12}(r) = µ_1(r) − µ_2(r).   (6)
An explicit form for µ²_{12}(r) is given by:
µ²_{12}(r) = (π_0 ⊗ π_1 ⊗ π_1)^⊤ ( Σ_{i=1}^{K} M_0(i) ⊗ M_1(i) ⊗ M_1(i) )^r 1
 − 2 (π_0 ⊗ π_1 ⊗ π_2)^⊤ ( Σ_{i=1}^{K} M_0(i) ⊗ M_1(i) ⊗ M_2(i) )^r 1
 + (π_0 ⊗ π_2 ⊗ π_2)^⊤ ( Σ_{i=1}^{K} M_0(i) ⊗ M_2(i) ⊗ M_2(i) )^r 1.
Hence, we can test
H_0: µ_1(r) = µ_2(r)  vs.  H_1: µ_1(r) > µ_2(r)   (7)
by a standard Z-statistic:
Z(k) = d̄ / (Ŝ_d / √k),   (8)
where Ŝ_d^2 is the sample variance of the {d_i}.

4.1 On the Optimal Fragment Size r

In the long run, as r → ∞, each µ_j(r) grows or decays according to the spectral radius of its associated matrix W_j in (4). Thus, while r can be increased until dominance becomes evident, a practical strategy is to use moderate r (e.g., 3 ≤ r ≤ 6) in tandem with a sampling-based Z-test. Because sampling and fragment likelihood calculations for small r are efficient, one can attempt multiple r values until the comparison stabilizes.

5 Example

The dataset,
compiled by the Central Pollution Control Board and publicly available through Kaggle (Jha, 2023), contains 4,560 consecutive daily observations of atmospheric pollutant concentrations and meteorological variables. Only ozone (O3) concentrations were analyzed to construct and compare candidate HMMs for the underlying air quality dynamics. Observations were discretized into three categories (low, medium, and high) based on empirical terciles of the ozone distribution. Two candidate Hidden Markov Models (HMMs) were fitted to the categorized daily ozone data using the hmmlearn Python package. The Expectation-Maximization (EM) algorithm was employed to maximize the likelihood, allowing up to 1000 iterations and fixing the random seed to ensure reproducibility. HMM_1 assumed three hidden states and HMM_2 assumed four hidden states, both with three observable categories. The resulting transition and emission probability matrices were extracted for further analysis and are presented in the Appendix.

Table 1 shows the statistical comparison of both models for fragment sizes r from 3 to 20. The sample size in all cases was n = 1,000. The log-likelihood of the entire observed sequence under each model was: HMM_1: −3581.6988, and HMM_2: −3009.0871.

Table 1: Comparison of HMM_1 and HMM_2 based on fragment sampling (n = 1,000 fragments per r).

 r   µ̂_2(r) − µ̂_1(r)   Std Difference   Z-statistic   p-value
 3   0.03429            0.04805          22.567        < 10^-7
 4   0.03125            0.04215          23.444        < 10^-7
 5   0.02705            0.03621          23.619        < 10^-7
 6   0.02434            0.03225          23.868        < 10^-7
 7   0.01975            0.02864          21.809        < 10^-7

Table 2: Estimates of the dominant eigenvalues, expressed as the ratio µ̂_i(r+1)/µ̂_i(r), and the number of possible sequences of length r. For both models, the number of observable states is K = 3.
Sequence size is N = 4,560.

 r   µ̂_1(r+1)/µ̂_1(r)   µ̂_2(r+1)/µ̂_2(r)   K^r    K^r/N
 3                                           27     0.0059
 4   0.5372              0.6450              81     0.0178
 5   0.5330              0.6684              243    0.0533
 6   0.5810              0.7492              729    0.1599
 7   0.5868              0.7292              2187   0.4796

Table 2 shows the estimates of the ratios µ̂_i(r+1)/µ̂_i(r), i = 1, 2, in an effort to illustrate the role of the dominant eigenvalue in governing the asymptotic growth of fragment probabilities as the fragment length r increases. These data are plotted in Figure 1.

6 Discussion

Table 1 shows that at low r-values, the null hypothesis H_0: µ_1(r) = µ_2(r) is already rejected, favoring the conclusion that µ_2(r) > µ_1(r) for r ≥ 3. While it is theoretically possible for the inequality to reverse at larger r-values, depending on the behavior of the dominant eigenvalues of W_j, Table 2 and Figure 1 indicate that no overlap occurs between the estimated rates µ̂_1(r+1)/µ̂_1(r) and µ̂_2(r+1)/µ̂_2(r). This suggests that a switch in dominance is unlikely. Moreover, increasing r beyond a certain point is not advisable, as the sequence length must be much larger than the number of possible fragments of size r (that is, n ≫ K^r) to ensure a representative sampling of the fragment distribution. Table 2 indicates that this condition is no longer satisfied for r ≥ 6.

Traditional HMM model selection methods typically rely on the log-likelihood of the entire sequence, which can be numerically unstable or even infeasible for large n. Although partial normalization steps in the forward-backward algorithm mitigate underflow to some extent, they do not guarantee stable floating-point operations at extreme sequence lengths. In contrast, the fragment-based approach evaluates shorter subsequences, keeping computations localized, stable, and computationally efficient. Moreover, classical likelihood
ratio tests (Giudici et al., 2000) require nested models and become invalid when candidate HMMs differ in their number of hidden states. The fragment-based method overcomes this limitation by allowing direct statistical comparison between arbitrary models, thereby offering greater flexibility for large-scale or exploratory analyses.

Figure 1: Plot of the data in Table 2. No overlap observed in the studied interval.

A key practical question concerns the choice of fragment size r. While small r values avoid data sparsity across the K^r possible fragment types, larger r may be necessary to detect more subtle differences between models. A straightforward strategy is to run the sampling-based test sequentially at r = 3, 4, 5, ... until the resulting Z-statistics converge to a stable pattern. More sophisticated approaches could adaptively tune r based on preliminary significance tests, or incorporate blockwise sampling to account for correlations between neighboring fragments.

Although it is possible for an HMM with mismatched emission probabilities to achieve a high µ_j(r) for some small r, the method's reliance on multiple fragment sizes and on the properties of the true data-generating process HMM_0 typically penalizes such mismatches as r increases. Future work could investigate the rate at which µ_j(r) distinguishes the true model as r grows, or explore alternative short-fragment metrics, such as approximate Kullback–Leibler divergences, beyond raw likelihood comparisons.

7 Conclusion

We have presented a systematic, fragment-based approach for comparing Hidden Markov Models without requiring full-sequence likelihood computations. When the emission matrix is unbounded, for instance when S_j is a Poisson distribution with parameter λ_j, all the results still apply because (5) still holds.
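The sampling-based test described in Section 4 is compact enough to sketch in a few lines of Python. This is an illustrative reimplementation under our own naming, not the authors' code: fragment likelihoods are computed with the forward factorization (2), l_j(s) = π_j^⊤ M_j(y_1)···M_j(y_r)1, and the Z-statistic (8) is formed from k sampled fragments.

```python
import numpy as np

def fragment_likelihood(frag, P, S, pi):
    """l_j(s) = pi^T M(y_1) ... M(y_r) 1, with M(m) = P @ diag(S[:, m]) (eq. (2))."""
    v = pi.copy()
    for y in frag:
        v = v @ (P * S[:, y])   # P * S[:, y] broadcasts to P @ diag(S[:, y])
    return v.sum()

def z_test(seq, r, k, P1, S1, pi1, P2, S2, pi2, rng):
    """Sample k random fragments of length r from seq and form the Z-statistic (8)."""
    starts = rng.integers(0, len(seq) - r + 1, size=k)
    d = np.array([fragment_likelihood(seq[s:s + r], P1, S1, pi1)
                  - fragment_likelihood(seq[s:s + r], P2, S2, pi2)
                  for s in starts])
    return d.mean() / (d.std(ddof=1) / np.sqrt(k))
```

A quick sanity check on the implementation: since Σ_m M(m) = P, the fragment likelihoods over all K^r possible fragments sum to π^⊤ P^r 1 = 1 when π is the stationary distribution of a row-stochastic P.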
By focusing on short subsequences, the framework is robust to common numerical instabilities in long products of probabilities, and it accommodates models with distinct hidden-state dimensions. Importantly, the method enables formal statistical comparisons between models by providing p-values that quantify the strength of evidence favoring one model over another. This adds a rigorous measure of certainty to model selection, addressing a major limitation of traditional approaches that relied solely on likelihood values without an assessment of statistical significance. The method relies on standard sampling and a straightforward test statistic, making it easy to implement on large data sets. Future work includes data-driven selection of the fragment length, more refined asymptotic analyses, and further applications in domains where the HMM dimension is large or the observational alphabet is high-dimensional.

A Fitted HMM Transition and Emission Matrices

The estimated transition and emission probability matrices for the two candidate HMMs are presented below.

A.1 Model 1: 3 Hidden States, 3 Observable Categories

Transition Matrix (P_1):
P_1 = [ 0.001532  0.998364  0.000104
        0.915998  0.014236  0.069766
        0.027333  0.014714  0.957953 ]

Emission Matrix (S_1):
S_1 = [ 0.611939  0.383932  0.004129
        0.597684  0.390171  0.012145
        0.009617  0.269177  0.721207 ]

A.2 Model 2: 4 Hidden States, 3 Observable Categories

Transition Matrix (P_2):
P_2 = [ 0.927025  0.005267  0.067344  0.000363
        0.005510  0.921437  0.068471  0.004582
        0.066956  0.069051  0.863918  0.000075
        0.145968  0.120110  0.731595  0.002327 ]

Emission Matrix (S_2):
S_2 = [ 0.004554  0.091130  0.904317
        0.914271  0.085396  0.000332
        0.102241  0.811204  0.086555
        0.353744  0.646253  0.000003 ]

References

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control 19, 716–723.
Beaumont, M. A., Zhang, W., and Balding, D. J. (2002). Approximate Bayesian computation in population genetics. Genetics 162, 2025–2035.
Giudici, P., Ryden, T., and Vandekerkhove, P. (2000). Likelihood-ratio tests for hidden Markov models. Biometrics 56, 742–747.
Hernandez-Suarez, C. M. and Montesinos-López, O. A. (2024). Optimizing hidden Markov models for large-scale data: New approaches to evaluation and estimation. TechRxiv.
Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. (2013). Stochastic variational inference. The Journal of Machine Learning Research 14, 1303–1347.
Hsu, D., Kakade, S. M., and Zhang, T. (2012). A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences 78, 1460–1480.
Jha, A. (2023). Time series air quality data of India (2010–2023). Available at: https://www.kaggle.com/datasets/abhisheksjha/time-series-air-quality-data-of-india-2010-2023 (accessed April 2025).
Marin, J.-M., Pudlo, P., Robert, C. P., and Ryder, R. J. (2012). Approximate Bayesian computational methods. Statistics and Computing 22, 1167–1180.
Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77, 257–286.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 461–464.
Polyhedral aspects of maxoids

Tobias Boege¹, Kamillo Ferry², Benjamin Hollering³, and Francesco Nowell²
¹UiT The Arctic University of Norway, Tromsø, Norway. post@taboege.de
²Technical University of Berlin, Germany. {ferry,nowell}@math.tu-berlin.de
³Technical University of Munich, Germany. benjamin.hollering@tum.de

Abstract

The conditional independence (CI) relation of a distribution in a max-linear Bayesian network depends on its weight matrix through the C*-separation criterion. These CI models, which we call maxoids, are compositional graphoids which are in general not representable by Gaussian or discrete random variables. We prove that every maxoid can be obtained from a transitively closed weighted DAG and show that the stratification of generic weight matrices by their maxoids yields a polyhedral fan.

1 Introduction

Linear structural equation models, sometimes called Bayesian networks, are of critical importance in modern data science and statistics through their applications to causality (Pearl, 2009) and probabilistic inference (Koller and Friedman, 2009). These statistical models use directed acyclic graphs (DAGs) to represent causal relationships and conditional independencies between random variables. Recently, there has been a focus on developing graphical models which are able to capture causal relations between extreme events. The two main approaches employ Hüsler–Reiss distributions (Engelke et al., 2024, 2025) and max-linear Bayesian networks, the latter of which are the main subject of this paper.

Max-linear Bayesian networks (MLBNs) were introduced in Gissibl and Klüppelberg (2018) to model cascading failures. They are used in areas where these failures lead to catastrophic events, such as financial risk and water contamination (Leigh et al., 2019; Rochet and Tirole, 1996). A random vector X = (X_1, ..., X_n) is distributed according to the max-linear model on a DAG G if it satisfies the system of recursive structural equations
X_i = ⋁_{j ∈ pa(i)} c_{ij} X_j ∨ Z_i,  c_{ij}, Z_i ≥ 0,   (1.1)
where ∨ = max, the c_{ij} are edge weights, pa(i) is the set of parents of i in G, and the Z_i are independent, atom-free, continuous random variables.

arXiv:2504.21068v1 [math.CO] 29 Apr 2025

The structural equations mimic Bayesian networks in the extreme value setting. Despite this similarity, the conditional independence (CI) theory of MLBNs turns out to be more subtle in certain aspects than that of classical Bayesian networks, which are governed by the well-known d-separation criterion. In addition to the d-separations of the DAG, a max-linear model may satisfy other CI statements which depend on the weight matrix C appearing in (1.1). Améndola et al. (2022) observed that multiple distinct CI structures can arise for the same DAG, each for a set of C-matrices with positive Lebesgue measure. They introduced the graphical ∗-separation criterion, which is complete but not strongly complete for CI implication in MLBNs, and the C*-separation criterion, which takes C into account and completely characterizes the CI structure of an MLBN. Moreover, the following chain of implications from d- over ∗- to C*-separation is valid for all MLBNs:
[i ⊥_d j | L] ⟹ [i ⊥_∗ j | L] ⟹ [i ⊥_{C∗} j | L].
In this paper, we focus on the CI structures which arise from C*-separation, since they are the most refined according to these implications and MLBNs are generically faithful to them. We call M*(G,C) := {[I ⊥⊥ J | L] : [I ⊥_{C∗} J | L] in (G,C)} the maxoid associated to the DAG G with coefficient matrix C, and note that this is essentially the
global Markov property of G with respect to C∗-separation with given coefficient matrix C. We show that M∗(G,C) is a compositional graphoid and that the set of distinct maxoids associated to a fixed DAG G is in correspondence with the cones of a complete fan for which we provide an explicit representation of the inequalities. The following is our main result.

Theorem. For any DAG G there is a hyperplane arrangement H_G ⊆ R^E such that for every C ∈ R^E \ H_G the set

    cone_G(C) := { C′ ∈ R^E \ H_G : M∗(G,C) = M∗(G,C′) }

is a full-dimensional open polyhedral cone. The collection of all closures of such cones for a fixed G forms a complete polyhedral fan F_G in R^E. Moreover, the map which sends a cone of F_G to its maxoid is an inclusion-reversing surjection.

One immediate consequence of the above theorem together with the results of Améndola et al. (2022) is that the maximal cones of F_G correspond to the distinct CI structures which can arise from a max-linear Bayesian network with positive Lebesgue measure for the choice of C. In this sense, the maximal cones correspond to the generic CI structures of an MLBN supported on G. Similarly, we call a weight matrix C generic if it does not lie on H_G. As we will show in Section 2, if there exist two nodes i, j ∈ V(G) such that there are at least two distinct paths between i and j, then F_G has at least two distinct full-dimensional cones. This provides a strong contrast to classical linear structural equation models, which are generically faithful to d-separation, and thus almost every distribution in the model exhibits the same CI structure; cf. Lauritzen (1996). Our results also elucidate which CI structures may arise from a given graph and when two graphs exhibit the same generic CI structure. This is critical for determining whether the graph structure may be recovered using only conditional independencies, as is typically done in constraint-based causal discovery algorithms, e.g., Spirtes et al. (2000). The remainder of this paper is organized as follows.
In Section 2 we recall the details of the ∗- and C∗-separation criteria and use this to provide an explicit description of the linear inequalities which define cone_G(C) via the critical paths of G. We then relate the set of maxoids arising from a DAG G to the cones of the associated polyhedral fan. In Section 3 we show that every maxoid is a compositional graphoid but provide counterexamples which demonstrate that maxoids need not be representable by either regular Gaussian or discrete distributions. This again provides a contrast to Bayesian networks, for which Gaussian and discrete distributions are the primary parametric families which are studied.

2 The Polyhedral Geometry of C∗-separation

Let G = (V,E) be a DAG on |V| = n vertices and denote the set of coefficient matrices supported on G by R^E_{>0}, i.e., all n×n matrices C with c_{ij} = 0 if i→j ∉ E and c_{ij} > 0 otherwise. We recall that a random vector X is distributed according to the max-linear model on G if it satisfies eq. (1.1). This system of equations has solution X = C∗Z, where the matrix-vector product is taken in max-times arithmetic. C∗ is the Kleene star matrix of C whose entries are given by

    (C∗)_{ij} = max_{π ∈ P(i,j)} ∏_{e ∈ π} c_e,

where P(i,j) denotes the set of
all directed paths from i to j in G and ∏_{e ∈ π} c_e is the weight of the path π. The conditional independence structure of max-linear models depends on inequalities between the weights of paths, which in the above form would not be polyhedral. To solve this, we note that the coordinate-wise logarithm is an isomorphism which maps R^E_{>0} → R^E. This transformation takes us from max-times to max-plus arithmetic, and in this new coordinate system the Kleene star is given by

    (C∗)_{ij} = max_{π ∈ P(i,j)} ∑_{e ∈ π} c_e = max_{π ∈ P(i,j)} ω_C(π).

It is natural to extend the logarithm to send 0 to −∞, and thus when embedding C ∈ R^E into R^{n×n} we use the convention that c_{ij} = −∞ if i→j ∉ E(G). Those familiar with tropical geometry will notice that C is actually a matrix over the tropical semiring with max-plus arithmetic; however, this will not be relevant for the results which we present here. For the remainder of this paper, we will exclusively use the max-plus convention.

A path π′ is critical if ω_C(π′) = max_{π ∈ P(i,j)} ω_C(π). If there is a unique critical path between every pair of nodes i, j, then we say that C is generic and denote this unique path by π^{ij}_{crit}(G,C), omitting the pair (G,C) when it is clear from context. Note that if C is not generic, then its entries satisfy some non-trivial linear equation of the form ω_C(π) = ω_C(π′) for π, π′ ∈ P(i,j). Hence, the set of generic matrices is the complement of a hyperplane arrangement H_G whose defining equations depend on G. We are now ready to introduce C∗-separation.

Definition 2.1. Let (G,C) be a weighted DAG with vertex set V and L ⊆ V. The critical DAG G∗_{C,L} is the DAG on V such that i→j ∈ E(G∗_{C,L}) whenever i and j are connected via a directed path, and no critical path from i to j in G intersects L.

Figure 1: The types of ∗-connecting paths between i and j given L in a critical DAG G∗_{C,L}, with intermediate nodes as follows: (a) no intermediate node; (b) a non-collider p; (c) a collider ℓ; (d) p and ℓ; (e) p, ℓ, and q. The colored colliders ℓ must belong to L; the non-colliders p, q must not belong to L.
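The critical DAG of Definition 2.1 is easy to prototype by brute force. The sketch below is our own illustration, not code from the paper: it enumerates all directed paths (exponential in general, but adequate for the small examples in this paper) and keeps an edge i→j exactly when no maximum-weight path from i to j meets L in an internal node (endpoints excluded, consistent with the examples that follow).

```python
from itertools import product

def all_paths(edges, i, j):
    """All directed i-j paths in the DAG, each as a tuple of edges."""
    if i == j:
        return [()]
    return [((a, b),) + rest
            for (a, b) in edges if a == i
            for rest in all_paths(edges, b, j)]

def critical_dag(nodes, edges, L):
    """Critical DAG G*_{C,L}: i -> j is an edge iff some directed i-j path
    exists and no maximum-weight (critical) i-j path meets the conditioning
    set L in an internal node."""
    E_star = set()
    for i, j in product(nodes, repeat=2):
        if i == j:
            continue
        paths = all_paths(edges, i, j)
        if not paths:
            continue
        weight = lambda p: sum(edges[e] for e in p)
        wmax = max(weight(p) for p in paths)
        critical = [p for p in paths if weight(p) == wmax]
        internal = lambda p: {v for e in p for v in e} - {i, j}
        if all(not (internal(p) & set(L)) for p in critical):
            E_star.add((i, j))
    return E_star

# A diamond DAG 1->2->4, 1->3->4 where the path through 2 has larger weight:
# conditioning on {2} blocks the unique critical 1-4 path, so no edge 1 -> 4.
diamond = {(1, 2): 2.0, (1, 3): 1.0, (2, 4): 2.0, (3, 4): 1.0}
print(critical_dag([1, 2, 3, 4], diamond, {2}))  # only the four diamond edges
```

Conditioning on {3} instead leaves the critical path 1→2→4 intact, so the edge 1→4 reappears in the critical DAG.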
Two nodes i and j are C∗-connected given L if there exists a path from i to j in G∗_{C,L} of the form pictured in Figure 1. If no such path exists, then i and j are C∗-separated given L, which is denoted [i ⊥_{C∗} j | L].

Theorem 2.2 ((Améndola et al., 2022, Theorem 6.18)). Let (G,C) be a weighted DAG and X be a random vector distributed according to the max-linear model on (G,C). Then

    [i ⊥_{C∗} j | L] =⇒ [i ⊥⊥ j | L].

Moreover, the converse holds for all but a Lebesgue null set of weight matrices C.

C∗-separation generally entails more CI statements than d-separation. In particular, note that a ∗-connecting path can have at most one collider in its conditioning set, whereas d-separation allows any number of colliders in a connecting path. Moreover, it suffices to block only a single critical path from i to j in order to separate them.

Example 2.3. Consider the diamond graph G (with edges 1→2, 1→3, 2→4, 3→4) with weight matrix C:

    C = ( −∞   c12   c13   −∞
          −∞   −∞    c24   −∞
          −∞   −∞    −∞    c34
          −∞   −∞    −∞    −∞ )

Observe that P(1,4) = {π2, π3}, where πi = 1 → i → 4. If C satisfies ω_C(π2) > ω_C(π3), then G∗_{C,{2}} is exactly the diamond above because π2 is a critical path from 1 to 4 which intersects the conditioning set {2}. Since neither π2 nor π3 are of the forms displayed in
Figure 1, this MLBN satisfies [1 ⊥_{C∗} 4 | 2]. On the other hand, if ω_C(π3) > ω_C(π2), then a similar argument yields that [1 ⊥_{C∗} 4 | 3]. Thus we get two distinct maxoids which correspond to whether π2 or π3 is the critical path. Moreover, the maxoid M∗(G,C) is completely determined by which side of the hyperplane

    c12 + c24 = ω_C(π2) = ω_C(π3) = c13 + c34

the matrix C lies on.

Our goal in the remainder of this section is to develop the observations from the previous example into a general result which connects weighted DAGs and their maxoids using polyhedral geometry. We begin with a sequence of lemmas which further elucidate the connection between the critical paths in (G,C) and the CI structure M∗(G,C).

Lemma 2.4. Let (G,C) and (G′,C′) be two weighted DAGs on the same node set with generic weights C and C′. Then

    M∗(G,C) = M∗(G′,C′)  ⇐⇒  π^{ij}_{crit}(G,C) = π^{ij}_{crit}(G′,C′) for all i ≠ j,

i.e., two weighted DAGs have the same critical paths if and only if their maxoids coincide.

Proof. "⇐=": If (G,C) and (G′,C′) have the same critical paths, then they give rise to the same critical DAG for any L, implying equal maxoids. "=⇒" by contraposition: Suppose that π = π^{ij}_{crit}(G,C) ≠ π^{ij}_{crit}(G′,C′) = π′ and denote the nodes on π′ as follows:

    π′: i = ℓ′_0 → ℓ′_1 → ··· → ℓ′_{m−1} → ℓ′_m = j.    (2.1)

We may assume that π does not contain any of the nodes ℓ′_1, . . . , ℓ′_{m−1} (if this is not the case, then we may replace j with an internal node common to both π and π′ and have shorter but still differing critical paths). Then clearly [i ⊥_{C∗} j | ℓ′_{m−1}] holds in (G′,C′) but not in (G,C), implying inequality of the respective maxoids.

Lemma 2.5. Let (G,C) be a weighted DAG and Ḡ the transitive closure of G. There exists a matrix C̄ supported on Ḡ such that M∗(Ḡ,C̄) = M∗(G,C).

Proof. For a fixed (G,C) with edge set E, let Ē be the edge set of its transitive closure Ḡ. For any two path-connected nodes i, j ∈ V, let ε_{ij} be the weight of the (not necessarily unique) critical i−j path in (G,C), and fix a δ with −∞ < δ < min_{i,j} ε_{ij}.
One possible choice of C̄ is given by

    C̄ = (c̄_{ij})_{i,j∈V},  where  c̄_{ij} = c_{ij} if (i,j) ∈ E,  c̄_{ij} = δ if (i,j) ∈ Ē \ E,  and  c̄_{ij} = −∞ otherwise.

By construction, no edge in Ē \ E is contained in any critical path of (Ḡ,C̄). Thus, the statement follows from Lemma 2.4.

A related notion is the weighted transitive reduction.

Definition 2.6. The weighted transitive reduction G^{tr}_C of a weighted DAG (G,C) is the subgraph of G with edges determined as follows:

    i→j ∈ E(G^{tr}_C)  ⇐⇒  i→j is the unique critical i−j path in (G,C).

Figure 2: A weighted DAG (G,C) on nodes 1, 2, 3, 4, its weighted transitive reduction (G^{tr}_C, C^{tr}), and its transitive closure (Ḡ,C̄). For appropriate C^{tr} and C̄, M∗(G,C) = M∗(G^{tr}_C, C^{tr}) = M∗(Ḡ,C̄) holds.

Remark 2.7. Another consequence of Lemma 2.4 is that for any weighted DAG (G,C) with generic C we have

    M∗(G,C) = M∗(G^{tr}_C, C^{tr}),    (2.2)

where C^{tr} is any matrix supported on G^{tr}_C which gives rise to the same critical paths as (G,C). Combined with Lemma 2.5, this means that the maxoid of any weighted DAG arises as a maxoid of its transitive closure for an appropriately chosen weight matrix.

Example 2.8. The maxoid corresponding to the weighted DAG
on the left in Figure 2 is

    M∗(G,C) = {[1 ⊥⊥ 3 | 2], [1 ⊥⊥ 3 | 2,4], [1 ⊥⊥ 4 | 2], [1 ⊥⊥ 4 | 2,3]}.

This maxoid is also realized by the weighted transitive reduction G^{tr}_C and transitive closure Ḡ when C^{tr} and C̄ are chosen according to Lemma 2.5 and Remark 2.7. In this example, c^{tr}_{24} > c^{tr}_{23} + c^{tr}_{34} and c̄_{14} < min{c̄_{12} + c̄_{24}, c̄_{13} + c̄_{34}} must hold.

Theorem 2.9. Let (G,C) be a weighted DAG with generic C ∈ R^E \ H_G. The set

    cone_G(C) := { C′ ∈ R^E \ H_G : M∗(G,C) = M∗(G,C′) }    (2.3)

is a full-dimensional open polyhedral cone defined by linear inequalities of the form

    ω_{C′}(π^{ij}_{crit}(G,C)) > ω_{C′}(π)  for each π ∈ P(i,j) \ {π^{ij}_{crit}(G,C)},    (2.4)

for all distinct i, j ∈ V.

Proof. By Lemma 2.4, the set cone_G(C) consists of all generic weight matrices C′ supported on G and giving rise to the same critical paths as C. This is precisely what is encoded in the inequalities (2.4) for all i, j ∈ V. These strict linear inequalities in the entries of C′ define an open polyhedral cone in R^E disjoint from H_G. The cone is non-empty since C ∈ cone_G(C), and full-dimensional because ε-perturbations of C in the direction of any c_{ij} preserve its critical paths.

Remark 2.10. A minimal description of the cone defined in (2.4) can be obtained by considering only pairs i, j which are connected by multiple disjoint paths, in the sense that any two of them form a simple cycle in the skeleton of G. Indeed, if two i−j paths π1 and π2 contain a common intermediate node k, then the linear inequality corresponding to the comparison of ω_C(π1) and ω_C(π2) is already implied by the linear inequalities which arise from comparing their respective i−k and k−j portions.

We now study the case where the weight matrix lies on the boundary of a cone. For generic C, let C̃ be a matrix lying on a facet of the euclidean closure of cone_G(C). This means that for some pair i, j ∈ V, equality holds in (2.4) and thus there are two critical i−j paths in (G,C̃): one is the unique critical i−j path π^{ij}_{crit}(G,C), and the other we denote by π′.
We assume that the paths are disjoint in the sense of Remark 2.10 and that all matrices on the facet of cone_G(C) on which C̃ lies give rise to the same critical paths as C outside of those which factor through the directed i−j portion of the DAG.

Theorem 2.11. In the setting described above, the following holds:

    M∗(G,C̃) = M∗(G,C) ∪ M∗(G,C′),    (2.5)

where C′ is a matrix supported on G giving rise to the same critical paths as C except for in the directed i−j portion, where the unique critical path is π′.

Proof. We first consider the simplified case where G consists solely of the two directed i−j paths. For readability, we set π := π^{ij}_{crit}(G,C) and refer to the intermediate nodes of π and π′ using the notation in (2.1). In this setting, i and j are the only two nodes which are connected by more than one path. Because of this, it suffices to prove both inclusions in (2.5) only for separation statements of the form [i ⊥_{C∗} j | L].

"⊆": Let L ⊆ V \ ij. Note that if [i ⊥⊥ j | L] ∈ M∗(G,C̃) holds, then L intersects π ∪ π′ non-trivially. Indeed, if L ∩ (π ∪ π′) = ∅, then the critical DAG G∗_{C̃,L} contains the edge i→j, implying ∗-connectedness. Thus, this choice of L also
separates i and j in (G,C) or (G,C′), implying the first inclusion.

"⊇": In (G,C), any L ⊆ V \ ij which intersects π non-trivially gives rise to the statement [i ⊥_{C∗} j | L]. This choice of L also separates i and j in (G,C̃). (Recall that the condition for the edge i→j to be present in G∗_{C̃,L} is that no critical i−j path in (G,C̃) factors through L.) Analogously, any statement of the form [i ⊥_{C∗} j | L] in M∗(G,C′) also holds in (G,C̃).

In the more general setting where G does not consist solely of directed i−j paths, additional ∗-connecting i−j paths may exist. Thus, additional nodes which are not contained in π and π′ may be needed to separate i and j. However, these nodes will be required to separate i and j in all three weighted DAGs since, by our starting assumption, these three matrices give rise to the same critical paths outside of the directed i−j portion of G. Furthermore, if i′ and j′ are nodes such that a path between them factors through π (and thus also π′), then a similar argument immediately shows that any L which separates them in (G,C̃) must also separate them in either (G,C) or (G,C′). Lastly, any separation which does not involve π and π′ will be present in all three maxoids by assumption, and thus the remaining CI statements will be the same as well.

Remark 2.12. In the setting of Theorem 2.11, given C̃ one can obtain a matrix with the properties of C′ by replacing c̃_{ℓ,ℓ′} with c̃_{ℓ,ℓ′} + ε, where (ℓ,ℓ′) ∈ π′ and ε > 0 fulfills

    ε < min_{i′,j′ ∈ V} min_{π1,π2 ∈ P(i′,j′)} |ω_{C̃}(π1) − ω_{C̃}(π2)|.    (2.6)

This makes π′ the unique critical i−j path while preserving all other critical paths.

Theorem 2.11 implies that facets (and, by extension, lower-dimensional faces) of the euclidean closure of cone_G(C) correspond to non-generic maxoids which arise as unions of generic maxoids.

Corollary 2.13. The euclidean closures of the open cones corresponding to the generic maxoids of G form a complete polyhedral fan F_G in R^E. The maximal cones of F_G are in bijection with the generic maxoids of G.
Moreover, the function Φ which sends a cone of F_G to its maxoid is an inclusion-reversing surjection:

    F1 is a face of F2  =⇒  Φ(F1) ⊇ Φ(F2)  for all F1, F2 ∈ F_G.

Remark 2.14. It is not hard to show that F_G is the Gröbner fan of the ideal

    I_G = ⟨ ∑_{π ∈ P(i,j)} ∏_{e ∈ π} x_e : i, j ∈ V, |P(i,j)| > 1 ⟩;

see Sturmfels (1996). Indeed, any weight matrix C ∈ R^E defines a term order which picks out the critical i−j paths of (G,C) as the initial term of the generator f_{ij} = ∑_{π ∈ P(i,j)} ∏_{e ∈ π} x_e in I_G.

Example 2.15. The fan associated to the diamond graph from Example 2.3 consists of two maximal cones in R^4 separated by the hyperplane c12 + c24 = c13 + c34. The corresponding maxoids are

    M1 = {[2 ⊥⊥ 3 | 1], [1 ⊥⊥ 4 | 2,3], [1 ⊥⊥ 4 | 2]}  for c12 + c24 > c13 + c34,
    M2 = {[2 ⊥⊥ 3 | 1], [1 ⊥⊥ 4 | 2,3], [1 ⊥⊥ 4 | 3]}  for c12 + c24 < c13 + c34,
    M3 = {[2 ⊥⊥ 3 | 1], [1 ⊥⊥ 4 | 2,3], [1 ⊥⊥ 4 | 2], [1 ⊥⊥ 4 | 3]}  for c12 + c24 = c13 + c34.

Figure 3: A projection of the fan F_G of the diamond, which is the Gröbner fan of the ideal I_G = ⟨x12x24 + x13x34⟩; the maximal cones carry M1 and M2, and the separating hyperplane carries M3.

3 Representability of maxoids

The polyhedral fan F_G provides the maxoids associated to a given DAG with a geometric structure which is both interesting and practically useful: it gives an efficient algorithm for solving the CI implication problem for maxoids on
a given DAG. A similar connection has been previously exploited in the framework of structural imsets by Bouckaert et al. (2010). However, the polyhedral fan in our case is specific to the graph, and the map from its cones to maxoids does not in general induce a Galois connection. As a result, the extraction of conditional independence features from the polyhedral geometry is not straightforward. In this section we focus on logical properties of all maxoids, independent of the underlying DAG, in the context of conditional independence implication.

Like many other types of graphical models (cf. Lauritzen and Sadeghi (2018)), C∗-separation satisfies the compositional graphoid properties, i.e., every maxoid is closed under the following equivalence and implications for all disjoint I, J, K, L ⊆ N:

    Semigraphoid: [I ⊥⊥ J | L] ∧ [I ⊥⊥ K | JL] ⇐⇒ [I ⊥⊥ JK | L],
    Intersection: [I ⊥⊥ J | KL] ∧ [I ⊥⊥ K | JL] =⇒ [I ⊥⊥ JK | L], and
    Composition: [I ⊥⊥ J | L] ∧ [I ⊥⊥ K | L] =⇒ [I ⊥⊥ JK | L].

Whereas the Semigraphoid property holds for the CI statements satisfied by any random vector, Intersection and Composition provide non-trivial additional structure. Améndola et al. (2022) mention without proof that C∗-separation satisfies the compositional graphoid properties. We supply the routine proof below and then delve into the question of what distinguishes maxoids from other types of compositional graphoids.

Proposition 3.1. Maxoids are compositional graphoids.

Proof. Consider any maxoid M = M∗(G,C) for a given DAG G and weight matrix C supported on G. All separation statements below are with respect to (G,C).

By the definition of C∗-separation, the assumption [I ⊥̸_{C∗} J | L] implies the existence of a ∗-connecting path π between I and J in the critical DAG G∗_{C,L}. A fortiori, π also connects I and JK in G∗_{C,L}, hence [I ⊥̸_{C∗} JK | L]. Now consider a ∗-connecting path π between I and K in G∗_{C,JL}. If it contains a collider j ∈ J, then the portion of π from I to j is a ∗-connecting path between I and J in G∗_{C,L}.
Otherwise the collider (if any) is in L, and π yields a ∗-connecting path between I and K in G∗_{C,L}. In both cases, we obtain a ∗-connecting path between I and JK in G∗_{C,L}. By contraposition, these two arguments prove the "only if" part of the Semigraphoid property.

The "if" direction is proved by contraposition as well. Assume that [I ⊥̸_{C∗} JK | L] and [I ⊥_{C∗} J | L], i.e., there exists a ∗-connecting path π from I to JK but not one from I to J in G∗_{C,L}. Hence, π must connect I and K and cannot contain any node from J. But then π also ∗-connects I and K in G∗_{C,JL}, thus [I ⊥̸_{C∗} K | JL] holds.

For Intersection, use again contraposition. Assume [I ⊥̸_{C∗} JK | L] and [I ⊥_{C∗} J | KL]. By the Semigraphoid property and the symmetry with respect to exchanging J and K, we can split [I ⊥̸_{C∗} JK | L] into two cases: [I ⊥̸_{C∗} K | L] or [I ⊥̸_{C∗} J | KL]. The second case contradicts our other assumption. In the former case, let π denote a ∗-connecting path between I and K in G∗_{C,L}. We may assume that this path is as short as possible, i.e., does not contain any other node of K. If it contains a node j ∈ J, then the portion from I to j ∗-connects I and J in G∗_{C,KL}, which is impossible. Hence π is free of nodes from J and thus ∗-connects I and K also in G∗_{C,JL}, which is the required conclusion of Intersection.

The Composition property holds almost by definition. Any ∗-connecting path from I to JK in G∗_{C,L} connects either I to J or I to K, which is the contrapositive of the assertion of Composition.

Algebraic statistics today, by and large, deals with CI models on discrete and regular Gaussian random variables. Note
that the parametrization of MLBNs in (1.1) does not produce jointly Gaussian distributions, as the maximum of Gaussians does not follow a Gaussian distribution. On the other hand, discrete distributions are not atom-free and are thus incompatible with this parametrization. Nevertheless, it is reasonable to ask whether maxoids, as abstract conditional independence models, can be represented using one of these two distribution classes. We answer this question negatively by highlighting features of maxoids which serve as obstructions to Gaussian and discrete representability. This means that maxoids are a new and rather exotic class of compositional graphoids.

In Drton and Xiao (2010) the term semigaussoid is used to refer to compositional graphoids. What is missing from a semigaussoid to a gaussoid is closedness under the following implication:

    Weak Transitivity: [i ⊥⊥ j | L] ∧ [i ⊥⊥ j | kL] =⇒ [i ⊥⊥ k | L] ∨ [j ⊥⊥ k | L],

for all distinct i, j, k and L ⊆ N \ ijk. The following example shows that maxoids need not satisfy Weak Transitivity. By results of Lněnička and Matúš (2007), this provides examples of maxoids which — in contrast to classical d-separation graphoids — cannot be faithfully represented by a regular Gaussian random vector.

Example 3.2. Consider the diamond graph G as described in Example 2.3 with weight matrix C satisfying ω_C(π3) > ω_C(π2). The maxoid of (G,C) consists precisely of the d-separations in G plus [1 ⊥⊥ 4 | 3]. As [1 ⊥⊥ 4 | 3] and [1 ⊥⊥ 4 | 2,3] hold without [1 ⊥⊥ 2 | 3] or [2 ⊥⊥ 4 | 3], this CI structure violates Weak Transitivity and cannot be faithfully represented by a regular Gaussian distribution.

This violation of Weak Transitivity has the following geometric consequence. The space of regular Gaussian distributions which are Markov to this CI structure is the union of two standard Bayesian networks on subgraphs of the diamond G: one has the edge 1→2 removed (so that it satisfies [1 ⊥⊥ 2 | 3]) and the other has the edge 2→4 removed (and hence satisfies [2 ⊥⊥ 4 | 3]).
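The failure of Weak Transitivity in Example 3.2 can be checked mechanically. A minimal sketch (the set encoding and function names are ours, not from the paper):

```python
# A CI statement [i _||_ j | L] encoded as an (unordered pair, conditioning set).
def ci(i, j, *L):
    return (frozenset({i, j}), frozenset(L))

# Maxoid of the diamond for w(pi3) > w(pi2): the d-separations of G
# plus the extra statement [1 _||_ 4 | 3], as in Example 3.2.
M = {ci(2, 3, 1), ci(1, 4, 2, 3), ci(1, 4, 3)}

def weak_transitivity(M, i, j, k, L):
    """Check [i_||_j|L] and [i_||_j|kL] imply [i_||_k|L] or [j_||_k|L]."""
    premise = ci(i, j, *L) in M and ci(i, j, k, *L) in M
    conclusion = ci(i, k, *L) in M or ci(j, k, *L) in M
    return (not premise) or conclusion

print(weak_transitivity(M, 1, 4, 2, [3]))  # False: the axiom fails here
```

Both premises [1 ⊥⊥ 4 | 3] and [1 ⊥⊥ 4 | 2,3] lie in M, but neither conclusion does, exactly as argued in the example.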
For an algebraic explanation of this phenomenon we refer to Drton et al. (2024).

Example 3.3. The Cassiopeia graph G (see also Améndola et al. (2022)) arises from Figure 1e by reversing the direction of all arrows. Its associated fan F_G has only one maximal cone, and thus all generic C give rise to the same CI structure, which has a peculiarity: since the path from i to j has two colliders, it is not ∗-connecting given p and q. We have

    M∗(G,C) = M_d(G) ∪ {[i ⊥⊥ j | p,q]}.

One easily computes that the CI structure of the conditional distribution given q equals

    { [i ⊥⊥ ℓ], [i ⊥⊥ ℓ | j], [i ⊥⊥ ℓ | j,p], [i ⊥⊥ j], [i ⊥⊥ j | ℓ], [i ⊥⊥ j | p], [ℓ ⊥⊥ p | j], [ℓ ⊥⊥ p | i,j] }.

It follows from implication (I:13) of Studený (2021) (with X = i, Y = j, Z = ℓ, U = p) that every discrete distribution satisfying these CI statements must also satisfy [i ⊥⊥ j | ℓ,p]. This shows that the Cassiopeia maxoid cannot be represented by discrete random variables.

Acknowledgements. T.B. was funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101110545. K.F. was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy — The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689). B.H. was supported by the Alexander von Humboldt Foundation. F.N.
was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under the Priority Programme "Combinatorial Synergies" (SPP 2458, project ID: 539875257).

References

C. Améndola, C. Klüppelberg, S. Lauritzen, and N. M. Tran. Conditional independence in max-linear Bayesian networks. Ann. Appl. Probab., 32(1):1–45, 2022. doi: 10.1214/21-AAP1670.

R. Bouckaert, R. Hemmecke, S. Lindner, and M. Studený. Efficient algorithms for conditional independence inference. J. Mach. Learn. Res., 11:3453–3479, 2010. URL www.jmlr.org/papers/v11/bouckaert10b.html.

M. Drton and H. Xiao. Smoothness of Gaussian conditional independence models. In Algebraic methods in statistics and probability II, volume 516 of Contemporary Mathematics, pages 155–177. American Mathematical Society (AMS), 2010. doi: 10.1090/conm/516/10173.

M. Drton, L. Henckel, B. Hollering, and P. Misra. Faithlessness in Gaussian graphical models, 2024. URL https://arxiv.org/abs/2404.05306.

S. Engelke, M. Hentschel, M. Lalancette, and F. Röttger. Graphical models for multivariate extremes, 2024. URL https://arxiv.org/abs/2402.02187.

S. Engelke, N. Gnecco, and F. Röttger. Extremes of structural causal models, 2025. URL https://arxiv.org/abs/2503.06536.

N. Gissibl and C. Klüppelberg. Max-linear models on directed acyclic graphs. Bernoulli, 24(4A):2693–2720, 2018. doi: 10.3150/17-BEJ941.

D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.

S. Lauritzen and K. Sadeghi. Unifying Markov properties for graphical models. Ann. Statist., 46(5):2251–2278, 2018. doi: 10.1214/17-AOS1618.

S. L. Lauritzen. Graphical models, volume 17 of Oxford Statistical Science Series. The Clarendon Press, Oxford University Press, New York, 1996. Oxford Science Publications.

C. Leigh, O. Alsibai, R. J. Hyndman, S. Kandanaarachchi, O. C. King, J. M. McGree, C. Neelamraju, J. Strauss, P. D. Talagala, R. D. Turner, et al. A framework for automated anomaly detection in high frequency water-quality data from in situ sensors. Science of the Total Environment, 664:885–898, 2019.

R. Lněnička and F. Matúš. On Gaussian conditional independence structures. Kybernetika, 43(3):327–342, 2007. URL https://www.kybernetika.cz/content/2007/3/327.

J. Pearl. Causality. Cambridge University Press, 2009.

J.-C. Rochet and J. Tirole. Interbank lending and systemic risk. Journal of Money, Credit and Banking, 28(4):733–762, 1996.

P. Spirtes, C. Glymour, and R. Scheines. Causation, prediction, and search. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, second edition, 2000. With additional material by David Heckerman, Christopher Meek, Gregory F. Cooper and Thomas Richardson. A Bradford Book.

M. Studený. Conditional independence structures over four discrete random variables revisited: Conditional Ingleton inequalities. IEEE Trans. Inf. Theory, 67(11):7030–7049, 2021. doi: 10.1109/TIT.2021.3104250.

B. Sturmfels. Gröbner bases and convex polytopes, volume 8 of Univ. Lect. Ser. American Mathematical Society (AMS), 1996.
Publication Design with Incentives in Mind∗

Ravi Jagadeesan†  Davide Viviano‡

May 1, 2025

arXiv:2504.21156v1 [econ.EM] 29 Apr 2025

Abstract

The publication process both determines which research receives the most attention and influences the supply of research through its impact on the researcher's private incentives. We introduce a framework to study optimal publication decisions when researchers can choose (i) whether or how to conduct a study and (ii) whether or how to manipulate the research findings (e.g., via selective reporting or data manipulation). When manipulation is not possible, but research entails substantial private costs for the researchers, it may be optimal to incentivize cheaper research designs even if they are less accurate. When manipulation is possible, it is optimal to publish some manipulated results, as well as results that would not have received attention in the absence of manipulability. Even if it is possible to deter manipulation, such as by requiring pre-registered experiments instead of (potentially manipulable) observational studies, it is suboptimal to do so when experiments entail high research costs.

∗The first version of this paper was circulated in October 2024. We thank Isaiah Andrews, Arun Chandrasekhar, Kevin Chen, Matt Gentzkow, Max Kasy, Toru Kitagawa, Paul Niehaus, Marco Ottaviani, Jesse Shapiro, Jann Spiess, Elie Tamer, Alex Tetenov, and Kaspar Wüthrich for helpful comments. We thank Karthik Seetharaman for exceptional research assistance.
†Department of Economics, Stanford University. Email address: ravi.jagadeesan@gmail.com.
‡Department of Economics, Harvard University. Email address: dviviano@fas.harvard.edu.

1 Introduction

Publication decisions shape the process of scientific communication. By selecting what to publish, journals affect which findings receive the most attention and can inform the public about the state of the world.
The design of publication rules has therefore motivated recent debates on how statistical significance should affect publication when the goal is to direct attention to the most informative results (e.g., Abadie, 2020; Frankel and Kasy, 2022).

However, the publication process also affects the supply of research by influencing researchers' incentives about how to conduct research. Researchers have many degrees of freedom about how to conduct their research, such as how and where to run an experiment (e.g., Allcott, 2015; Gechter and Meager, 2022); the size, cost, and effort associated with the study (e.g., Thompson and Barber, 2000; Grabowski et al., 2002); and which findings to report from a given study (Brodeur et al., 2020; Elliott et al., 2022). We refer broadly to their choices about each of these aspects of a study as a research design. Researchers' private incentives may influence how they choose their design. Yet, "while economists assiduously apply incentive theory to the outside world, we use research methods that rely on the assumption that social scientists are saintly automatons" (Glaeser, 2006). This raises the questions of when and how researchers' incentives should impact the design of publication processes and, more broadly, the optimal allocation of attention to research.

This paper studies optimal publication decisions when researchers choose the research design based on private costs and benefits. We frame this question as a mechanism design problem: a social planner (principal) optimizes a
publication rule, taking into account the incentives of the researcher (agent). The social planner aims to use the publication process to allocate the attention of the audience to the most informative research findings. More concretely, as in Frankel and Kasy (2022) (building in turn on Wald (1950)), we suppose that research results impact the decisions of an audience who has limits on how much attention they can devote to research. The social planner seeks to publish results that are most important for the audience, net of the cost of (or taking into account constraints on) publishing or attention. Due to attention constraints, not all results will be published, which leaves the planner with a non-trivial trade-off about which results and designs to publish. We then introduce a model in which publication decisions affect the supply of research in the first place (Section 2): given the publication rule, the researcher chooses the design that maximizes her value for publication (or other attention) net of research costs.

As a concrete example, consider a medical journal seeking to decide whether to publish results from a clinical study. The journal wants to convey accurate information about the drug's efficacy in the study (e.g., DeMaria, 2022; Ana, 2004) and direct the audience's attention to the most effective drugs. This objective is motivated by scientific practice.
For example, the stated mission of the New England Journal of Medicine is "to publish the best research and information at the intersection of biomedical science and clinical practice and to present this information in understandable, clinically useful formats that inform health care practice and improve patient outcomes." However, researchers may respond in how they conduct their studies through the size, length, and cost of the studies, the composition of the control groups (Thorlund et al., 2020), and, in some cases, which specific findings to report (e.g., Riveros et al., 2013; Shinohara et al., 2015). We draw a dichotomy between researchers' incentives about (i) whether or how to conduct a study and (ii) which findings to report (e.g., via data manipulation or selective reporting). We first focus on (i) and abstract from data manipulation and selective reporting; we therefore suppose that the research design is observable and verifiable (Section 3). For example, a researcher could choose between experiments with different mean-squared errors and costs and, by pre-committing to an analysis plan, truthfully report an unbiased estimate of a treatment effect. The planner can direct the attention of the audience to any executed study by publishing it; publication can depend on a study's design and results. Returning to our example, after providing a new drug to a treatment group, scientists can evaluate its efficacy by comparing the treatment group to either an experimental placebo group or a pre-specified synthetic control group obtained from historical medical records (Popat et al., 2022; Yin et al., 2022). Using a synthetic control group can have a large impact on the supply of medical research and new drugs by decreasing research costs (Jahanshahi et al., 2021; Food and Drug Administration, 2023; Wong et al., 2014). However,
using a synthetic control group may increase the estimate's mean-squared error due to lack of randomization (see Raices Cruz et al., 2022; Rhys Bernard et al., 2024), raising the question of whether a social planner should allow or incentivize their use. (Another relevant application is choosing between two experiments with different numbers of participants.) We thus suppose that the publication process affects the supply of research through an individual rationality constraint: for a researcher to be willing to conduct a study, they must be compensated with a large enough ex ante publication probability. Without these individual rationality constraints, the first-best publication decision rule would ignore researchers' costs and publish only studies conducted with the lowest possible mean-squared error; due to attention constraints or publication costs, not all such studies would be published. When the feasible designs are inexpensive for the researcher to conduct, the planner can feasibly implement the first-best policy; the individual rationality constraint does not bind. Thus, small research costs do not affect the planning problem. However, when the research cost of accurate designs is above a tipping point, the first-best publication rule does not compensate the researcher enough to incentivize them to choose such a design. As a result, the planner faces a trade-off between providing attention to results that are not valuable enough to deserve attention and rewarding the researcher enough to make them willing to use a costly design in the first place. This trade-off has implications for which designs the planner should publish. For example, suppose that clinical studies can be conducted based on either quasi-experiments based on synthetic controls, which are less expensive but less informative, or full experiments, which are costly but more informative.
Incentivizing researchers to conduct the costly experiment would require publishing results from such a study, even when doing so misdirects attention toward treatments with negligible effects on patients. In this case, the planner may prefer publishing studies with potentially larger mean-squared error to avoid having to publish too many results from the experiment; researchers will then choose not to use costly experiments in equilibrium. This analysis suggests that, due to the interaction between attention costs and supply effects, publishing medical studies with synthetic control groups can be desirable when it is sufficiently costly for researchers to use experimental control groups. The results illustrate a key insight: as the attention cost increases, the social planner's preference shifts toward less costly (and less accurate) experiments because of their effect on the supply of research. Motivated by this, we provide a simple formula for the planner's optimal choice of sample size, which balances the gains from precision against attention costs. We next turn in Section 4 to scenarios where the researcher, after observing the study's results, can report a biased statistic from the study. The bias is chosen by the researcher and unobserved by the planner, but incurs a cost to the researcher that is increasing in the bias (e.g., capturing reputational costs). The audience is unaware of possible manipulations of published findings, taking results
at face value. We think of this model as a stylized description of settings with potential data manipulation or selective reporting. For instance, in the absence of a precise pre-specification, we may be concerned that researchers select a synthetic control group from medical records based on its observed outcomes. Each publication rule generates a different degree of manipulation. For example, suppose that the planner used the publication rule that would be optimal without manipulation, i.e., publishing if the reported statistic is above a cutoff. Then, researchers whose results would be close to the cutoff would manipulate their data or analyses to reach the cutoff.[1] Publishing substantially manipulated results would incur a substantial loss for the audience. One approach that has been proposed is to completely deter manipulation. An extreme form of this approach would be to make publication dependent only on the design, and not on results.[2] However, this approach generally incurs substantial costs by directing the audience's attention to results that would not affect their decisions, and is not optimal. By contrast, we show that the optimal publication rule under manipulability has three key features. First, it increases the cutoff above which findings always get published compared to settings without data manipulation. Second, just below this cutoff, it randomizes publication chances, making the researcher indifferent about whether to manipulate their results. As a result, with nonzero probability, the planner publishes findings that are not truly valuable enough to deserve attention and would not be published without the possibility of manipulation. Third, some manipulation does occur in equilibrium.

[1] This manipulation would lead to bunching at the cutoff, which is consistent with bunching that has been documented in settings with p-hacking (e.g., Elliott et al., 2022).
To gain some intuition for these features of the optimum, consider first the optimal publication rule without manipulation. The planner can eliminate the researcher's incentive to manipulate by publishing results below the cutoff with positive probability. However, some results that do not affect the audience's decision then get published, which is undesirable in the presence of attention costs or constraints. To counteract this effect, it is then optimal for the planner to increase the cutoff that guarantees publication. With a more stringent cutoff, some results that would be published without manipulation are no longer published with probability one. The planner then encourages researchers with results in this range to engage in a limited degree of manipulation to increase their publication chances. The loss from publishing some slightly manipulated studies is second-order relative to the (first-order) gain from publishing results that do affect the audience's decisions. To prove that the optimal publication rule takes the described form, we formulate the social planner's problem as a mechanism design problem with "false moral hazard" due to the researcher choosing a manipulation after learning the true results. The absence of direct transfers and the inability to reward the researcher with a publication probability above one make the mechanism design problem effectively one with limited transfers. As a result, standard methods as in Mirrlees (1971) and
Myerson (1981) do not apply. We solve the mechanism design problem by identifying the precise pattern of binding incentive constraints. As a final exercise in Section 5, we combine these two models to ask whether the planner should mandate researchers to send a costly signal that deters them from manipulating data. For example, the planner could mandate that researchers run costly experiments that adhere to pre-analysis plans, rather than also allowing for equivalently informative (but cheaper) observational studies. Without accounting for the effects of making research more costly on the supply of research, the planner would always require such signals to ensure that results are unbiased (Spiess, 2024; Kasy and Spiess, 2023). For example, the planner would not publish observational studies if equally informative experiments are feasible, no matter their costs. However, to ensure that the researcher is actually willing to conduct the experiment, the planner needs to reward the researcher with enough attention through a high enough publication probability. With large attention costs (or binding publication constraints), this makes incentivizing experiments so costly for the planner that they prefer to publish observational studies, even though manipulation may occur. Therefore, taking into account the supply effects of the publication process, the planner may prefer to drive attention towards studies whose results may be manipulable, and hence possibly biased.

[2] In practice, this approach can be implemented by committing to publication based on pre-analysis plans, as the Journal of Development Economics does.

Related literature. This paper connects to a growing literature that studies economic models to analyze statistical protocols.
In the context of scientific communication, Andrews and Kasy (2019), Abadie (2020), Andrews and Shapiro (2021), Kitagawa and Vu (2023), and (most closely related to our paper) Frankel and Kasy (2022) have analyzed how research findings are or should be reported to inform the public. Our analysis builds on this literature by introducing a model that incorporates researchers' incentives and studying how these incentives shape the design of the optimal publication process. None of these references account for the researcher's private incentives in the design of the study. We connect to a broad literature on statistical decision theory (e.g., Wald, 1950; Savage, 1951; Manski, 2004; Hirano and Porter, 2009; Tetenov, 2012; Kitagawa and Tetenov, 2018), focusing in particular on settings with private researcher incentives. Other work in this line includes Chassang et al. (2012), Tetenov (2016), Manski and Tetenov (2016), Banerjee et al. (2017), Henry and Ottaviani (2019), Banerjee et al. (2020), Di Tillio et al. (2021), Williams (2021), Bates et al. (2022, 2023), Libgober (2022), Yoder (2022), Kasy and Spiess (2023), McCloskey and Michaillat (2024), Spiess (2024), and Viviano et al. (2024). Different from these papers, we analyze settings where the researcher may choose the research design absent private information, and settings in which the researcher can choose the design and manipulate reported findings with private information. This allows us to formally study ideas such as when and whether unsurprising results should be published, and whether manipulation should occur in equilibrium (Glaeser, 2006). In particular, an important distinction from some of these models
studying approval decisions, such as Tetenov (2016), Bates et al. (2022, 2023), and Viviano et al. (2024), is that these papers assume that researchers truthfully report the statistics sampled from their study and abstract from questions about data manipulation or optimal design of the experiment studied in this paper. Spiess (2024) and Kasy and Spiess (2023) study models of scientific communication without, however, focusing on the optimal publication rules studied here. Different from these references, here we incorporate the researcher's costs of the design (and of misreporting), which, we show, leads to qualitatively different solutions in the amount of misreporting. Di Tillio et al. (2021) study the different question of selective sampling, and Henry and Ottaviani (2019) study the question of decisions with continuous and sequential access to the data, different from the question of selective design choice studied here. Finally, a large empirical and econometric literature has documented several aspects of the research process, including selective reporting, data manipulation, specification search, as well as site selection bias and observational studies' bias (e.g., Allcott, 2015; Gechter and Meager, 2022; Rosenzweig and Udry, 2020; Elliott et al., 2022; Brodeur et al., 2020; Miguel, 2021; Olken, 2015; Banerjee et al., 2020; Rhys Bernard et al., 2024). Our contribution here is to provide a formal theoretical model that studies how incentives interact with some of these choices, shedding light on qualitative aspects of optimal publication rules.

2 Setup

Consider three agents: a researcher, a (representative) audience, and a social planner. The audience and social planner are interested in learning a parameter $\theta_0 \in \mathbb{R}$, such as the average treatment effect of a given intervention. All agents share a common prior $\theta_0 \sim \mathcal{N}(0, \eta_0^2)$. For expositional convenience only, the prior mean equals zero. A researcher conducts a study to inform the audience about $\theta_0$.
A study is summarized by $(X, \Delta)$, where $\Delta$ denotes the design and

$$X(\Delta) \mid \theta_0 \sim \mathcal{N}\left(\theta_0 + \beta_\Delta,\, S_\Delta^2\right) \qquad (1)$$

the results. Here, $\beta_\Delta$ and $S_\Delta^2$ are the average bias and the variance of design $\Delta$, respectively. If a study is conducted, it will be evaluated according to a publication rule $p(X, \Delta)$ with values in $[0,1]$. Here, $p(X, \Delta)$ represents the probability of publishing the study, which is assumed to be a Borel measurable function of $(X, \Delta)$. Conditional on publication, the audience forms posterior beliefs about $\theta_0$ using Bayes' rule assuming $\beta_\Delta = 0$. Conditional on non-publication, the audience's posterior mean equals its prior mean (zero), as would arise from Bayes' rule with $p(\cdot)$ symmetric in $X$ and $\beta_\Delta$ symmetric around 0. Based on these beliefs, the audience takes an action to minimize mean-squared distance from $\theta_0$; thus, the audience's action $a^\star_p(X, \Delta)$ is the posterior mean

$$a^\star_p(X, \Delta) = \begin{cases} X\,\dfrac{\eta_0^2}{S_\Delta^2 + \eta_0^2} & \text{if the study is published,} \\ 0 & \text{otherwise.} \end{cases}$$

Figure 1: Illustration of the variables in the model under observable and verifiable designs. First, researchers pre-specify the population of interest. Second, they run an experiment and draw a statistic $X$. Third, a social planner evaluates the experiment based on a decision rule $p(X, \Delta) \in [0,1]$. Finally, the audience forms
a posterior about the estimand of interest. Given results $X = X(\Delta)$ for a design $\Delta$, and a parameter $\theta_0$, the planner incurs a loss

$$L_p(X, \Delta, \theta_0) = \mathbb{E}_p\!\left[\left(\theta_0 - a^\star_p(X, \Delta)\right)^2\right] + c_p\, p(X, \Delta), \qquad (2)$$

where $\mathbb{E}_p$ denotes expectation with respect to any stochasticity in the publication decision rule. Thus, the planner minimizes the expected loss of the audience, net of a cost of publication or attention. The quantity $c_p$ captures the publication or attention costs or constraints. Given a design $\Delta$, a publication rule $p$, and results $X$, the researcher's expected payoff is

$$v_p(X, \Delta) = p(X, \Delta) - C_\Delta, \qquad (3)$$

where $C_\Delta \le 1$ is the researcher's cost of executing design $\Delta$.[3] As is standard practice, whenever the researcher is indifferent between two designs, we implicitly assume she chooses the design that minimizes the planner's expected loss.

[3] We assume $C_\Delta \le 1$ simply to rule out trivial cases in which design $\Delta$ is never chosen by the researcher. Whenever researchers receive benefits $b_\Delta$ from publication, we can interpret $C_\Delta$ without loss as the cost divided by the benefits $b_\Delta$.

3 Publication rules under verifiable designs

This section studies optimal publication rules when the planner can observe the research design and condition publication on it. We define a design $\Delta$ as unbiased on average if $\beta_\Delta = 0$, and focus on such designs for simplicity in this section (we defer designs that are not unbiased on average to the following section). We suppose that the researcher chooses between two designs that have different mean-squared errors and research costs. Our analysis proceeds in three steps. We first characterize the optimal publication rule subject to the constraint of incentivizing the researcher to implement a particular design $\Delta$. We then characterize which designs are worth incentivizing relative to an outside option. Last, we characterize the optimal publication rule that chooses between multiple designs.
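The primitives above can be made concrete in code. The following is a minimal sketch (function names are mine, not the authors'): the audience's posterior-mean action shrinks the published statistic toward the prior mean of zero, and the researcher's payoff is publication probability net of design cost, as in (3).

```python
def audience_action(X, S2, eta0_sq, published):
    """Posterior-mean action of a Bayesian audience with prior N(0, eta0_sq),
    observing X ~ N(theta0, S2) only if the study is published.
    If unpublished, the action equals the prior mean (zero)."""
    if not published:
        return 0.0
    # Standard normal-normal shrinkage of X toward the prior mean.
    return X * eta0_sq / (S2 + eta0_sq)

def researcher_payoff(pub_prob, C_delta):
    """Researcher's expected payoff: ex ante publication probability
    minus the cost of executing the design (C_delta <= 1)."""
    return pub_prob - C_delta

# Example: with eta0_sq = S2 = 1, a published X = 2 is shrunk to 1.
```

This mirrors the shrinkage factor $\eta_0^2/(S_\Delta^2 + \eta_0^2)$ in the action formula: noisier designs (larger $S_\Delta^2$) move the audience less.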
3.1 Preliminary analysis for constrained designs

As a first step, we characterize the constrained optimal publication rule when the planner must make implementing a particular design $\Delta$ individually rational for the researcher, i.e., when the planner must guarantee that the researcher always conducts the study.

Definition 1. A constrained optimal publication rule for a design $\Delta$ is a publication rule $p^\star_\Delta$ that minimizes $\mathbb{E}[L_p(X(\Delta), \Delta, \theta_0)]$ subject to $\mathbb{E}[v_p(X(\Delta), \Delta)] \ge 0$.

Intuitively, a constrained optimal publication rule is a publication rule that always guarantees that research is conducted. Our first result shows that the constrained optimal publication rule takes a threshold form, where the threshold $t^\star_\Delta$ for publication depends on the prior, the mean-squared error and research cost of the design $\Delta$, and the publication cost.

Proposition 1 (Basic scenario: constrained optimal publication rule). If $\Delta$ is unbiased on average, then a constrained optimal publication rule for $\Delta$ is the threshold rule $p^\star_\Delta(X) = 1\{|X| \ge t^\star_\Delta\}$, where

$$t^\star_\Delta = \min\left\{ \frac{S_\Delta^2 + \eta_0^2}{\eta_0^2}\,\sqrt{c_p},\ \left|\Phi^{-1}(C_\Delta/2)\right| \sqrt{S_\Delta^2 + \eta_0^2} \right\}.$$

The proof is in Appendix A.2.1. To understand the intuition behind Proposition 1, first suppose that the research cost is $C_\Delta = 0$, so there is no individual rationality constraint for the researcher. Then, as in Frankel and Kasy (2022), as the planner's publication cost $c_p$ is nonzero, the planner will publish results that move the
audience's optimal action enough to justify the attention/publication cost $c_p$: i.e., the planner will publish results $|X| \ge \gamma^\ast_\Delta$, where[4]

$$\gamma^\ast_\Delta = \frac{S_\Delta^2 + \eta_0^2}{\eta_0^2}\,\sqrt{c_p}.$$

When this cutoff rule guarantees an ex ante publication probability of at least $C_\Delta$, the individual rationality constraint does not bind. In this case, we say the design is cheap.

Definition 2. Let $\Delta$ be a design that is unbiased on average. Design $\Delta$ is cheap if $C_\Delta \le P(|X(\Delta)| \ge \gamma^\ast_\Delta)$ and expensive otherwise.

[4] Our framework directly extends to the case in which $\theta_0 \sim \mathcal{N}(\mu, \eta_0^2)$ for a prior mean $\mu$. Here, $\mu$ is the audience's action in the absence of publication. In this case, the optimal constrained publication rule takes the form $|X - \mu| \ge t^\star_\Delta$ for the same threshold $t^\star_\Delta$ as in Proposition 1. Therefore, as in Frankel and Kasy (2022), one should interpret surprising results as ones that move the audience away from its default action.

Note that higher publication costs $c_p$ and lower prior variances $\eta_0^2$ both raise the threshold $\gamma^\ast_\Delta$ and hence make designs more likely to be expensive. Intuitively, whether a design is cheap depends on how informative it is. For expensive designs, the cutoff rule from Frankel and Kasy (2022) does not provide a large enough ex ante publication probability to entice the researcher to conduct research in the first place. Hence, the planner needs to commit to publishing more results in order to satisfy the researcher's individual rationality constraint. It is best for the planner to publish the results that move the audience's action the most, even if these results do not move the audience's action enough to justify the attention cost $c_p$. Hence, the planner sets a cutoff that ensures an ex ante publication chance of $C_\Delta$, that is, a cutoff of $\left|\Phi^{-1}(C_\Delta/2)\right| \sqrt{S_\Delta^2 + \eta_0^2}$. This second cutoff is below $\gamma^\ast_\Delta$ for (and only for) expensive designs, and is the optimal cutoff for such designs.
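The cutoff of Proposition 1 and the cheap/expensive classification of Definition 2 can be evaluated directly: for an unbiased design the marginal distribution of $X$ is $\mathcal{N}(0, S_\Delta^2 + \eta_0^2)$. A minimal Python sketch (function names are mine), using the standard library's normal distribution:

```python
import math
from statistics import NormalDist

def constrained_cutoff(S2, eta0_sq, cp, C):
    """Constrained optimal cutoff t* from Proposition 1 (sketch).
    Publish iff |X| >= t*. First term: surprise cutoff gamma*;
    second term: the cutoff guaranteeing ex ante publication probability C."""
    gamma_star = (S2 + eta0_sq) / eta0_sq * math.sqrt(cp)
    ir_cutoff = abs(NormalDist().inv_cdf(C / 2)) * math.sqrt(S2 + eta0_sq)
    return min(gamma_star, ir_cutoff)

def is_cheap(S2, eta0_sq, cp, C):
    """Definition 2: cheap iff the cutoff gamma* already gives an ex ante
    publication probability of at least C, with X ~ N(0, S2 + eta0_sq)."""
    gamma_star = (S2 + eta0_sq) / eta0_sq * math.sqrt(cp)
    sd = math.sqrt(S2 + eta0_sq)
    pub_prob = 2 * (1 - NormalDist(0, sd).cdf(gamma_star))
    return C <= pub_prob
```

For a cheap design the `min` returns `gamma_star`; for an expensive design the individual-rationality cutoff binds and is lower, as in Corollary 1.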
Thus, if research costs are large enough that the implemented design is expensive, the researcher's incentives play a central role in determining the optimal publication rule, unlike in Frankel and Kasy (2022): the publication process should become less stringent as research becomes more costly. The following corollary summarizes and formalizes the preceding discussion.

Corollary 1. (a) If $\Delta$ is a cheap design, then the cutoff $t^\star_\Delta$ for a constrained optimal publication rule is $t^\star_\Delta = \gamma^\ast_\Delta$. (b) If $\Delta$ is an expensive design, then $t^\star_\Delta = \left|\Phi^{-1}(C_\Delta/2)\right| \sqrt{S_\Delta^2 + \eta_0^2}$ is the cutoff for a constrained optimal publication rule. This cutoff is strictly decreasing in the cost $C_\Delta$.

3.2 Which designs are ever worth incentivizing

As a second step building up towards the main results of this section, we study when a design is ever worth incentivizing relative to the outside option of no study (i.e., relative to not making it individually rational for the researcher to conduct research with design $\Delta$). Let $L^\ast_\Delta = \mathbb{E}\big[L_{p^\star_\Delta}(X(\Delta), \Delta, \theta_0)\big]$ denote the optimal expected loss for the planner when implementing design $\Delta$. (Here $p^\star_\Delta$ is a constrained optimal publication rule for $\Delta$.) The expected loss if no research is published
is the prior variance $\eta_0^2$. Comparing these two quantities determines whether a design is worth incentivizing in the first place.

Definition 3. A design $\Delta$ is worthwhile if $L^\ast_\Delta \le \eta_0^2$.

We next characterize which designs are worthwhile. If a design is cheap, then the planner can selectively publish only results that move the audience's beliefs enough to justify the publication or attention cost. Thus, the ex post loss from incentivizing the design (under the constrained optimal publication rule) is always lower than $\eta_0^2$, and similarly its ex ante loss.

Proposition 2 (When are cheap designs worthwhile?). Every cheap design $\Delta$ is worthwhile.

The proof is in Appendix A.2.2. Whereas cheap designs are always worthwhile (regardless of their cost), for expensive designs the situation is more delicate. Incentivizing the researcher to implement a design requires committing to publish results that the planner would ex post prefer not to publish. When attention costs $c_p$ are large enough, the costs of publishing these marginal results outweigh the benefits of publishing surprising results. How large $c_p$ needs to be for this to occur depends on the design's cost and variance. To formalize this intuition, it will be convenient to express our results in terms of the difference between the prior and posterior variances conditional on publication of the results of a design $\Delta$, which we denote by

$$\mathrm{PostVarRed}(\Delta) := \eta_0^2 - \frac{S_\Delta^2\,\eta_0^2}{S_\Delta^2 + \eta_0^2} = \frac{\eta_0^4}{S_\Delta^2 + \eta_0^2}.$$

This quantity is a measure of the informativeness of a design: it represents how much learning the results of the design improves the expected utility of a Bayesian audience with $L^2$ loss. Note that $\mathrm{PostVarRed}(\Delta)$ is increasing in $\eta_0^2$ and decreasing in $S_\Delta^2$.

Proposition 3 (When are expensive designs worthwhile?). Let $\Delta$ be unbiased on average. (a) If $\mathrm{PostVarRed}(\Delta) \ge C_\Delta c_p + \frac{\pi}{6}\,\eta_0^2\,(1 - C_\Delta)^3$, then $\Delta$ is worthwhile. (b) If $\mathrm{PostVarRed}(\Delta) < C_\Delta c_p$, then $\Delta$ is not worthwhile.

The proof is in Appendix A.2.3.
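The posterior variance reduction and the two bounds of Proposition 3 are straightforward to evaluate numerically. A sketch (function names are mine; the bounds leave a small indeterminate gap, as the proposition's conditions are sufficient but not jointly exhaustive):

```python
import math

def post_var_red(S2, eta0_sq):
    """Posterior variance reduction of a design: eta0^4 / (S2 + eta0^2)."""
    return eta0_sq**2 / (S2 + eta0_sq)

def worthwhile_bounds(S2, eta0_sq, cp, C):
    """Check Proposition 3's sufficient conditions for an expensive design.
    Returns 'worthwhile', 'not worthwhile', or 'indeterminate'
    (when the design falls in the gap between the two bounds)."""
    pvr = post_var_red(S2, eta0_sq)
    upper = C * cp + (math.pi / 6) * eta0_sq * (1 - C) ** 3
    if pvr >= upper:
        return "worthwhile"
    if pvr < C * cp:
        return "not worthwhile"
    return "indeterminate"
```

As the remark after the proposition indicates, the gap between the two bounds is of order $(1 - C_\Delta)^3$, so for very expensive designs ($C_\Delta \approx 1$) the test is nearly sharp.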
To understand this result, in the case of large research costs, suppose that $C_\Delta \approx 1$. A design is worthwhile only if the posterior variance reduction exceeds the product of the attention and research costs (up to a small remainder).[5] An experiment instead is not worthwhile if the posterior variance reduction relative to the cost of the experiment is smaller than the publication cost. That is, as $C_\Delta$ or $c_p$ increase, the posterior variance reduction should increase proportionally to these costs for the experiment to be worthwhile. The proposition provides us with explicit recommendations on when designs are not worthwhile as a function of the design's costs. For instance, suppose that research costs scale inversely with the design's variance (and therefore linearly in the sample size). That is, $C_\Delta = \frac{c_v}{S_\Delta^2} + c_f$, where $c_v$ captures a variable cost of sample size, and $c_f$ captures a fixed cost of the design. Then, whenever the variance exceeds a threshold that depends on both the variable cost of the design and the attention cost, such designs are not worthwhile.

[5] Note that from Lemma 6 the bounds in Proposition 3 are sharp up to an error of order $O((1 - C_\Delta)^3)$, and we provide explicit expressions in the Appendix.

Example
1. Consider a design $\Delta$ that is unbiased on average, and suppose $C_\Delta = \frac{c_v}{S_\Delta^2} + c_f$. If $\eta_0^2 > c_p c_f$ and $S_\Delta^2 > \frac{c_p c_v}{\eta_0^2 - c_p c_f}$, then design $\Delta$ is not worthwhile.

3.3 Choosing which design to incentivize

We next study the optimal publication rule when there is more than one possible design. We first focus on a setting with two unbiased designs and then turn to a continuum of designs. Without loss, we suppose that both designs are worthwhile, and that the one with a lower mean-squared error has a higher research cost, so the planner faces a non-trivial problem about which design to incentivize.

Assumption 1. Researchers can choose between two designs $E, O$ that are unbiased on average and worthwhile. The designs have different mean-squared errors $S_E^2 < S_O^2$ and costs $C_E > C_O$.

We think of $\Delta = E$ as a possibly expensive experiment and $\Delta = O$ as a lower-cost experiment (or, in some cases, an observational study). To understand this case, let us return to our example of the choice of control in a clinical trial from the introduction. Here, $\Delta = E$ represents a possibly expensive (large) medical experiment with a treatment and placebo control group, and $\Delta = O$ represents a lower-cost medical study such as a clinical trial with a smaller sample size or with a pre-specified synthetic control group. Design $O$ has a smaller cost, since under that design researchers have to recruit fewer participants. For simplicity, we suppose that the bias of both studies $E$ and $O$ is unknown with mean zero; as we discuss below, we can then capture the bias as part of the mean-squared error.

Example 2 (Low-cost and costly experiment). Suppose that $E$ corresponds to an experiment with costly implementation $C_E$, whereas $O$ is an experiment with larger variance (fewer participants) but smaller cost of implementation. In this case $\mathbb{E}[X(E) \mid \theta_0] = \mathbb{E}[X(O) \mid \theta_0] = \theta_0$, while the two experiments may have different costs (and variances).
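The sufficient condition of Example 1 is easy to check numerically. A sketch under the cost specification $C_\Delta = c_v/S_\Delta^2 + c_f$ (the function name is mine):

```python
def not_worthwhile_example1(S2, eta0_sq, cp, cv, cf):
    """Example 1's sufficient condition for a design with cost
    C = cv/S2 + cf NOT to be worthwhile: the prior variance must exceed
    cp*cf, and the design's variance must exceed cp*cv / (eta0^2 - cp*cf)."""
    return eta0_sq > cp * cf and S2 > cp * cv / (eta0_sq - cp * cf)
```

Note the condition only rules designs out: when it fails, the design may or may not be worthwhile, consistent with the one-sided bounds of Proposition 3.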
Example 3 (Experiment versus pre-specified synthetic control group). Let $X(E) = \theta_0 + \varepsilon_E$ and $X(O) = \theta_0 + b_O + \varepsilon_O$, where $\varepsilon_E \sim \mathcal{N}(0, S_E^2)$ denotes the estimation noise from an experiment, $\varepsilon_O \sim \mathcal{N}(0, \sigma_O^2)$ denotes the idiosyncratic noise from an observational study, and $b_O \mid \varepsilon_O \sim \mathcal{N}(0, \sigma_B^2)$ denotes the observational study's random effect, which captures unobserved bias drawn from a fixed (Gaussian) distribution. For example, Rhys Bernard et al. (2024) empirically investigate the distribution of $b_O$ through a meta-analysis, where $\sigma_B^2$ captures the impact of the random bias on the statistic's distribution.[6] We then have that $X(O) \sim \mathcal{N}(\theta_0, S_O^2)$, where the mean-squared error $S_O^2 = \sigma_O^2 + \sigma_B^2$ includes both sampling uncertainty $\sigma_O^2$ and irreducible error $\sigma_B^2$ arising from the variance of the bias.

We next use Proposition 1 to study the optimal choice between the two designs. Because $\Delta$ is observable by the planner (and verifiable as part of the publication process), the planner can incentivize their preferred design by setting

$$p^\star(X, \Delta) = 1\{\Delta = \Delta_{\mathrm{planner}}\}\, p^\star_\Delta(X), \quad \text{where} \quad \Delta_{\mathrm{planner}} \in \arg\min_{\Delta \in \{E, O\}} L^\ast_\Delta. \qquad (4)$$

For instance, the planner may only accept experiments with a minimum level of precision. It is immediate that $p^\star(X, \Delta)$ minimizes the planner's expected loss. We therefore study the optimal
design choice by comparing the minimized loss of the social planner when implementing the experiment versus implementing the observational study. More generally, we can use similar logic to compare the effectiveness of any two designs.

Definition 4. Design $\Delta$ is planner-preferred to design $\Delta'$ if $L^\ast_\Delta < L^\ast_{\Delta'}$.

It is immediate that if a design is planner-preferred to a worthwhile design $\Delta'$, then $\Delta$ is worthwhile. In particular, Proposition 2 implies that if a design $\Delta$ is planner-preferred to a cheap, unbiased design $\Delta'$, then $\Delta$ is worthwhile. If the more precise experiment $E$ is cheap, then its higher research cost is irrelevant to the planner. Therefore, the experiment is planner-preferred to $O$.

Proposition 4. Under Assumption 1, if $E$ is cheap, then $E$ is planner-preferred to $O$.

The proof is in Appendix A.2.4. This result suggests that it suffices to compare the mean-squared errors of two cheap studies to select the planner-preferred one. When the experiment $E$ is expensive, the situation is more delicate. Implementing $E$ requires committing to publish more results, which may be costly for the planner. When the publication or attention costs $c_p$ are large enough, the costs of publishing more results outweigh the benefits of a more precise design.

Proposition 5. Under Assumption 1, with $O$ an expensive design, there exists a threshold $c^\star_p(E, O, \eta_0) > 0$ such that $E$ is planner-preferred to $O$ if and only if $c_p < c^\star_p(E, O, \eta_0)$, where

$$c^\star_p(E, O, \eta_0) = \frac{\mathrm{PostVarRed}(E) - \mathrm{PostVarRed}(O)}{C_E - C_O} - \eta_0^2\, \frac{O\!\left((1 - C_E)^3\right) - O\!\left((1 - C_O)^3\right)}{C_E - C_O}.$$

[6] Whenever $b_O$ has nonzero expectation, we can think of $X$ as recentered by its expectation, which can be learned through meta-analyses (see, e.g., Rhys Bernard et al. (2024)).

The proof is in Appendix A.2.5. Proposition 5 shows that, assuming both $O$ and $E$ are expensive studies (so that $(1 - C_O)^3 \approx 0$), it suffices to compare

$$\mathrm{PostVarRed}(E) - \mathrm{PostVarRed}(O) \ge (C_E - C_O)\, c_p$$

to choose between a more precise and more expensive experiment $E$ and a less precise study $O$.
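The approximate comparison above, for two expensive designs, reduces to checking whether the gain in posterior variance reduction covers the extra research cost scaled by the attention cost. A minimal sketch (function names are mine):

```python
def post_var_red(S2, eta0_sq):
    """Posterior variance reduction of a design: eta0^4 / (S2 + eta0^2)."""
    return eta0_sq**2 / (S2 + eta0_sq)

def prefer_precise_design(S2_E, C_E, S2_O, C_O, eta0_sq, cp):
    """Approximate rule for two expensive designs ((1 - C_O)^3 ~ 0):
    prefer the precise experiment E iff
    PostVarRed(E) - PostVarRed(O) >= (C_E - C_O) * cp."""
    gain = post_var_red(S2_E, eta0_sq) - post_var_red(S2_O, eta0_sq)
    return gain >= (C_E - C_O) * cp
```

The rule makes the comparative static explicit: holding the designs fixed, raising the attention cost `cp` eventually flips the planner's preference toward the cheaper, noisier design.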
That is, we must compare the difference in the posterior variance reduction to the difference in costs, adjusted by the attention cost $c_p$. A larger $c_p$ shifts the preference of the planner towards less costly experiments. In the following theorem we formalize these intuitions by providing exact conditions.

Theorem 1 (Comparison between two studies). Under Assumption 1, suppose that $O$ is expensive. (a) If $\mathrm{PostVarRed}(E) - \mathrm{PostVarRed}(O) \ge \left(1 - \frac{C_O}{C_E}\right) c_p$, then $E$ is planner-preferred to $O$. (b) If $\mathrm{PostVarRed}(E) - \mathrm{PostVarRed}(O) \le \left(C_E - \frac{1 + C_O}{2}\right) c_p$, then $O$ is planner-preferred to $E$. If instead $O$ is cheap, then (a) and (b) hold with $P(|X(O)| \ge \gamma^\ast_O)$ in place of $C_O$.

The proof is in Appendix A.2.6. Intuitively, the comparison between two studies must depend on the posterior variance reduction of each study (which itself depends on their "quality," captured through their mean-squared error) and the costs of each study. As the cost of attention $c_p$ increases, the planner's preference shifts from a more accurate design to a less accurate design with a smaller cost. This is in contrast with the intuition that, for higher attention cost, we should publish studies with lower mean-squared error. This is because, for costly studies, the planner must internalize not only the effect of
the mean-squared error on the audience's loss function, but also the research cost associated with the study. Whenever she can publish fewer results ($c_p$ is larger), more costly experiments impose more stringent constraints on the publication rules, making them undesirable for the planner. This suggests that medical studies with synthetic control groups may be preferred over placebo groups when the cost of the placebo group is sufficiently large. Theorem 1 provides bounds (instead of the exact expression) to enhance interpretability. However, explicit expressions can be obtained directly from our calculations in the Appendix. Using such expressions, Figure 2 reports the indifference curves between two experiments with different mean-squared errors and costs. Figure 2 shows how the planner's preference shifts towards cheaper (and noisier) designs even when the experiment has no error.

[Figure 2 (plot omitted): indifference curves in the $(C_E,\, S_O^2/\eta_0^2)$ plane, separating the region "choose cheaper design $O$ ($S_O^2 > 0$)" from "choose precise design $E$ ($S_E^2 = 0$)", for a high and a low cost of publication.]

Figure 2: The figure reports on the x-axis the cost of a non-cheap experiment with no variance ($S_E^2 = 0$), and on the y-axis the rescaled squared error $S_O^2/\eta_0^2$ of a cheap design (e.g., a placebo study or cheaper experiment), using the exact expression. The red line denotes the frontier of values for the cost of publication $c_p/\eta_0^2 = 0.5$, and the blue line for the higher cost of publication $c_p/\eta_0^2 = 1$. The region above the blue line denotes the set of values under which the experiment is preferred for a large cost of publication, and the region above the red line the corresponding set for the low cost of publication. The figure shows that a cheaper design may be preferred over a more expensive experiment, even if its mean-squared error is larger, whenever the experiment $E$ is sufficiently costly. As the cost of publication increases, the planner's preference for the cheaper design strengthens.
3.4 Optimal sample size from a continuum of designs

We conclude this discussion by characterizing the optimal sample size as a function of the cost of the experiment and the cost of attention c_p. Suppose that the sample size of the experiment can be chosen freely and affects the research costs linearly, i.e., C_∆ = c_v/S²_∆ + c_f, where c_v denotes a variable cost of sample size and c_f a fixed cost of the design. Then the optimal design is expensive, but involves a sample size that is bounded in terms of the costs.

An exact solution is often infeasible. We therefore study ε-approximately optimal designs.

Definition 5 (Approximate optimality). For a class of designs D and a sequence ε_η → 0 as η₀² → ∞, a design ∆ ∈ D is ε_η-approximately planner optimal if L⋆_∆ − min_{∆′} L⋆_{∆′} = O(ε_η).

An approximately optimal design is a design which is optimal in the limit as the prior variance η₀² converges to infinity (i.e., when we have limited knowledge about θ).

Theorem 2 (Optimal variance for experiments). Consider a family of designs ∆ ∈ D that are unbiased on average and worthwhile, where C_∆ = c_v/S²_∆ + c_f. Suppose in addition that c_p·c_v ≤ η₀⁴ (i.e., the audience lacks sufficient prior knowledge about θ). Then the design ∆* with S²_∆* = √(c_p·c_v) is an ε_η-approximately optimal
design, with ε_η = c_p·c_v/η₀² + (c_p·c_v)^{3/2}/η₀⁴, provided that C_∆* ≤ 1.

The proof is in Appendix A.2.7. Intuitively, for cheap designs, it is always optimal to increase the sample size to obtain more precise results (Proposition 4). By contrast, once the sample size is large enough (i.e., the variance small enough) that the design is expensive, increasing the sample size while preserving individual rationality inefficiently directs attention to unsurprising results. As a result, the optimal sample size is bounded above.

Here, Theorem 2 provides an explicit expression for the optimal variance of an experiment as a function of the variable cost and the cost of attention. It states that for an expensive experiment, we choose the variance proportional to (and therefore increasing in) the square root of the attention cost and the variable cost. This choice is dictated by the researcher's constraints. As the variable cost increases, the researcher's cost increases, which is compensated by an increase in the variance. Similarly, as the attention constraints become more binding, we prefer less costly (and hence less accurate) experiments, since expensive experiments require forcing publication so as not to reduce the supply of research, despite more stringent attention costs. Taken together, these results suggest how a social planner may direct the supply of research (and the design of a study) through the researcher's incentives.

Corollary 2. The approximately optimal S²_∆* is increasing in both the variable cost c_v and the attention cost c_p. It is independent of the fixed cost c_f as long as c_f ≤ 1 − √(c_v/c_p).

In the next section, we turn to settings where researchers may manipulate their findings through, e.g., p-hacking.

4 Publication rules under non-verifiable designs

In this section, we turn to optimal publication rules when researchers choose the research design ∆ using information about the statistics drawn in the experiment.
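Before analyzing manipulation, we note that the √(c_p·c_v) rate in Theorem 2 can be sanity-checked numerically. Under the large-η₀² approximation of Lemma 6, with C_∆ = c_v/S²_∆ + c_f and PostVarRed(∆) = η₀⁴/(S²_∆ + η₀²) ≈ η₀² − S²_∆ for S²_∆ ≪ η₀², the S²-dependent part of the planner's loss is, up to constants, c_p·c_v/S² + S². A minimal sketch (cost values are arbitrary):

```python
import math

def approx_loss(s2, cp, cv):
    """S^2-dependent part of the large-eta0 approximation of the planner's
    loss (Lemma 6): cp * cv / S^2 (variable research cost scaled by the
    attention cost) plus S^2 (forgone posterior variance reduction)."""
    return cp * cv / s2 + s2

cp, cv = 2.0, 0.5  # illustrative attention and variable costs
grid = [i / 1000 for i in range(10, 5000)]
s2_star = min(grid, key=lambda s2: approx_loss(s2, cp, cv))
print(s2_star, math.sqrt(cp * cv))  # grid minimizer vs. closed form sqrt(cp*cv)
```

The grid minimizer coincides (up to the grid step) with the closed-form rate √(c_p·c_v) of Theorem 2.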
Here, the design ∆ (and its corresponding bias β_∆) is unknown to the social planner but not to the researcher. Specifically, we consider the following model.

Assumption 2. Consider a class of designs ∆ ∈ ℝ, with β_∆ = ∆ known to the researcher but not observable to the social planner, and

X(∆) = θ₀ + β_∆ + ε,  ε | θ₀ ~ N(0, S²_E).

The researcher observes θ₀ + ε and chooses ∆ (and so β_∆) to maximize her realized payoff v_p(X(∆), ∆) in Equation (3), after observing X(∆). Let C_∆ = c_d·|β_∆| + C₀, where c_d < ∞ and C₀ < 1. The social planner chooses p(X) as a function of X only (and not ∆).

Figure 3 illustrates the model: the researcher deterministically chooses the bias of the reported statistic. She, however, pays a cost C_∆ increasing in the bias. We think of the researcher's action as a stylized description of data manipulation or selective reporting.

[Figure 3 diagram: Statistic X′ = θ₀ + ε → Manipulation X = X′ + β_∆ → Evaluation p(X) → Audience a⋆(X).]

Figure 3: Illustration of the variables in the model. First, the researcher observes the vector of statistics. She then manipulates the design by introducing a bias into the statistics, maximizing her private utility. The social planner does not observe the bias and evaluates the study based on a publication function p(X) that depends only on the statistics X.

In particular, researchers, after looking at the
data, can change their specification by, e.g., changing the covariates in a regression, winsorizing the data in particular ways, or making other design choices functions of the statistics. In our stylized description, this is approximated by defining X as the sum of θ₀ + ε plus a bias arising from manipulation. The component c_d·|β_∆| of the cost C_∆ captures reputational or computational costs associated with the manipulation, assumed to be increasing and linear in the magnitude |β_∆| of the bias. The researcher observes θ₀ + ε, and hence she maximizes her realized utility conditional on the observed statistics when choosing ∆. As we discuss in Section 2, the audience (but not the planner) is unaware of the possibility that published findings may involve some data manipulation.

Finally, note that we assume that the variance of the residual noise ε is independent of ∆ and equal to S²_E. Our results are valid for any S²_E, including S²_E ≈ 0 as in large observational studies. We interpret this assumption as stating that standard errors are verifiable as part of the publication process; we therefore focus on manipulation that introduces unobserved bias in the reported results.

4.1 Optimal publication rule under manipulation

The planner cannot observe or verify β_∆, knows S²_E, and minimizes expected loss over (θ₀, ε), taking into account the researcher's (endogenous) incentives to manipulate her results. That is, we define an optimal publication rule as

p⋆ ∈ arg min_{p ∈ P} E_{X,θ₀}[L_p(X(∆⋆_p), ∆⋆_p, θ₀)],  ∆⋆_p = arg max_∆ v_p(X(∆), ∆),  (5)

where P is the set of all Borel measurable functions p(X, d) constant in d (i.e., that do not depend on the design), which we write without loss of generality as p(X) (note that p may implicitly depend on S²_E). Here, X satisfies Assumption 2 and ∆⋆_p denotes the researcher's optimal response. The publication rule depends only on X and not on ∆, as the latter is chosen privately by the researcher. Before introducing our next theorem, we introduce the following definition.

Definition 6.
A linearly smoothed cutoff rule with cutoff X* and slope m is defined by

p_{X*,m}(X) = 0 if |X| ≤ X* − 1/m;  1 − m(X* − |X|) if X* − 1/m < |X| < X*;  1 if |X| ≥ X*.

A linearly smoothed cutoff rule is a deterministic publication rule below the threshold X* − 1/m and above the threshold X*, for a given m, and it randomizes the publication chances between these two values, with the publication probability increasing in the value of the reported statistic |X|. To gain further intuition, consider the threshold

γ*_E = ((S²_E + η₀²)/η₀²)·√c_p.

Then X* = γ*_E and m = ∞ corresponds to a publication rule in Frankel and Kasy (2022), i.e., a publication rule for cheap experiments. When m < ∞ and X* > γ*_E, the linearly smoothed cutoff rule publishes with probability one results that are more surprising than under the rules in the absence of manipulation, while randomizing around the threshold.

In the following theorem we characterize the optimal publication probability in contexts with manipulation.

Theorem 3 (Optimal publication rule under non-verifiable designs). Under Assumption 2:

(a) There exists a cutoff X* ∈ [γ*_E, γ*_E + (1 − C₀)/c_d] such that the linearly smoothed cutoff rule p_{X*,c_d} is optimal.

(b) For each optimal publication rule p, there exists X* ∈ [γ*_E,
γ*_E + (1 − C₀)/c_d] such that p⋆(X) = p_{X*,c_d}(X) (resp. p⋆(X) ≤ C₀) for almost all X ≥ 0 with p_{X*,c_d}(X) > C₀ (resp. p_{X*,c_d}(X) ≤ C₀).

The proof is in Appendix A.2.8. Theorem 3 characterizes the optimal publication rule under manipulation. The rule is a linearly smoothed cutoff rule that: (i) does not publish results below the cutoff X* − 1/c_d; (ii) randomizes whenever X is above X* − 1/c_d but below X*; and (iii) publishes with probability 1 results X ≥ X*.

To gain further intuition, it is instructive to compare this with the optimal publication rule for a cheap experiment in Proposition 1. For simplicity, suppose C₀ = 0, so that we abstract from fixed research costs. Consider first a scenario in which the social planner ignores manipulation. Then we would observe bunching around the publication cutoff γ*_E, as researchers with θ₀ + ε ∈ (γ*_E − 1/c_d, γ*_E) would introduce a bias in order to publish. Researchers with θ₀ + ε < γ*_E − 1/c_d would find it unprofitable to introduce any bias (as the cost would not compensate the benefits) and therefore would not publish. Next, suppose that the planner introduces randomization in the publication rule whenever X ∈ (γ*_E − 1/c_d, γ*_E), as described in Theorem 3.

Action | Testable observation | Published findings | Manipulation
Select optimal cutoff rule without manipulation | Large bunching | Only surprising findings are published | Large
Add randomization below cutoff | No bunching | Many unsurprising findings are published | None
Increase the cutoff + randomize below | Some bunching | Some unsurprising findings are published | Some

Table 1: Sequence of actions that decrease the loss function. We start from a simple cutoff rule for a cheap experiment, wrongly assuming no data manipulation. Under this rule, we observe bunching of X around the cutoff. We then randomize below the cutoff to make the researcher indifferent between manipulating and not. Finally, we increase the cutoff to publish fewer unsurprising results.
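The linearly smoothed cutoff rule of Definition 6 is straightforward to implement; the sketch below (with illustrative values of X* and m) makes the three regimes explicit:

```python
def smoothed_cutoff_rule(x, x_star, m):
    """Linearly smoothed cutoff rule p_{X*,m} (Definition 6): publish with
    probability 0 below X* - 1/m, probability 1 at or above X*, and
    interpolate linearly in between (increasing in |X|)."""
    ax = abs(x)
    if ax <= x_star - 1 / m:
        return 0.0
    if ax >= x_star:
        return 1.0
    return 1 - m * (x_star - ax)

# Illustrative values: cutoff X* = 2 and slope m = c_d = 2, so the
# randomization region is (1.5, 2); the rule is symmetric in X.
print([smoothed_cutoff_rule(x, 2.0, 2.0) for x in (1.0, 1.75, -1.75, 2.5)])
# → [0.0, 0.5, 0.5, 1.0]
```

Setting X* = γ*_E and letting m → ∞ recovers the hard threshold rule for cheap experiments discussed above.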
It follows that the randomization device in Theorem 3 makes the researcher indifferent between manipulating and not manipulating the data, at the cost of publishing some unsurprising results. However, this choice alone is sub-optimal, as the planner may publish too many unsurprising results. The last step is to increase the threshold X* to compensate for the loss from publishing unsurprising results. In summary, optimal publication rules with manipulation (a) increase the cutoff for publication and (b) allow for randomization at the margin below the cutoff.

Table 1 illustrates the main features associated with each action. The first action is equivalent to ignoring manipulation, as in the settings considered in Frankel and Kasy (2022). Manipulation in this scenario has testable implications, since it introduces large bunching around the cutoff; see, e.g., the discussion in Elliott et al. (2022) among others. Through sufficient randomization below the cutoff, we can guarantee no bunching (and no manipulation). The last step is then to increase the cutoff. We should then observe just some bunching above the old cutoff and below the new one (see Proposition 6). This corresponds to the planner's preferred publication rule.

4.2 Some implications

In this section, we explore some of the implications of Theorem 3. The first implication we study is whether, in the planner's preferred equilibrium, some researcher
would manipulate their findings.

Proposition 6 (Manipulation in equilibrium). Under the model in Assumption 2, under the publication rule in Theorem 3 and the planner's preferred equilibrium, for |θ₀ + ε| ∈ (γ*_E, X*) we have |β_{∆⋆_{p⋆}}| > 0.

The proof is in Appendix A.2.9. Proposition 6 states that some manipulation should occur just below the threshold. Intuitively, if researchers could not manipulate their findings, then at the optimum we would have published findings above the threshold γ*_E. However, because manipulation can occur, the social planner increases the threshold from γ*_E to X*. It follows that manipulation is beneficial in the interval between the old threshold γ*_E and the "new" threshold X*: it guarantees that findings that should have been published in the absence of manipulation do get published, while the cost of manipulation is second order. This result suggests that, in equilibrium, we should not force manipulation to be exactly zero. A practical implication of Proposition 6 is that, at the optimum, we observe a discontinuous distribution of X below X* and above γ*_E.

As a second exercise, we study the role of fixed costs in the presence of manipulation.

Proposition 7 (Implementation costs). Under the model in Assumption 2, for C₀ ≥ 1 − c_d·γ*_E, the set of optimal cutoffs X* in Theorem 3 is strictly decreasing in C₀ in the strong set order.

The proof is in Appendix A.2.10. Proposition 7 shows that for sufficiently costly studies, the social planner lowers the cutoff as the fixed cost increases. This suggests that less surprising results may be published more often when the study is sufficiently costly. It differs from what we found in Theorem 1 in the absence of manipulation: there, the planner may force the researcher to run cheap rather than expensive studies. With manipulation, however, fixed costs imply a lower chance of manipulation, motivating an increase in the chance of publication for such studies.
Next, we show that some findings below γ*_E, the optimal cutoff in the absence of data manipulation, do get published in the planner's preferred equilibrium under manipulation.

Proposition 8 (Some unsurprising results are published). Under the model in Assumption 2, under the publication rule in Theorem 3 and the planner's preferred equilibrium, for some values of |θ₀ + ε| < γ*_E we have p⋆(X) > 0 but β_{∆⋆_{p⋆}} = 0.

The proof is in Appendix A.2.11. Proposition 8 shows that some unsurprising results that would not have been published without manipulation do get published under the optimal publication rule with manipulation. This feature is not due to manipulation of these results, but rather serves to deter manipulation. It is in stark contrast with the case without manipulation, as in Frankel and Kasy (2022).

5 Implications for alternative publication mechanisms

In this section we study the implications of our results for alternative publication mechanisms, such as incentivizing more observational studies with possible data manipulation, costly experiments with no data manipulation (e.g., when enforcing pre-analysis plans), or introducing a registry of unsurprising results.

5.1 Pre-analysis plans

We first study the implications of Theorem 3 for pre-analysis plans.

Definition 7 (Pre-analysis vs possible manipulation). Consider the following
two possible families of designs, all of which have variance S²_E.

(A) Experiment with pre-analysis plan: Experiment E is unbiased on average and worthwhile. Researchers cannot manipulate their findings (c_d = ∞), truthfully report X = θ₀ + ε, and pay a research cost C_E. The social planner chooses a publication rule p⋆_E as in Definition 1 and incurs a loss L*_E.

(B) Possible manipulation (observational studies): Assumption 2 holds, so there is a family of designs ∆ ∈ ℝ. The manipulation cost is c_d < ∞, and researchers can manipulate their findings after observing θ₀ + ε. The fixed research cost is C₀ = 0. The social planner chooses a publication rule p⋆ as in Equation (5) and incurs the corresponding expected loss L*_M = E_{X,θ₀}[L_{p⋆}(X(∆⋆_{p⋆}), ∆⋆_{p⋆}, θ₀)] under the planner's preferred equilibrium.

Scenario (A) states that the researcher cannot manipulate her findings but pays a fixed cost C_E, interpreted as the cost that guarantees no bias in the study. This setting may correspond to the cost of conducting an experiment, including time for grant approval, research assistants, etc. (For simplicity, we consider here experiments that are worthwhile.) Scenario (B) allows for manipulation of the findings, with the researcher not required to write a pre-analysis plan. In this case, we assume for simplicity no fixed costs (C_E = 0), but possible (reputational) costs associated with manipulation. We think of (B) as a scenario where an experiment or observational study is already available to the researcher (and therefore its cost is sunk).

To simplify notation, we introduce the following definition, which will be useful for our subsequent analysis.

Definition 8 (Cheap vs expensive gap). Denote GAP(E) = L*_E − L*_{E′}, where E′ is a cheap design with C_{E′} = 0 and variance S²_{E′} = S²_E. Namely, GAP(E) denotes the difference in the planner's loss (under the planner's optimal action) between a design E and the same design (with the same variance) E′ but no implementation costs, C_{E′} = 0.
It follows that GAP(E) measures the additional cost that the private research cost C_E imposes on the social planner. Whenever E is a cheap design, GAP(E) = 0; otherwise GAP(E) > 0. Also, note that GAP(E) can be readily computed from the expressions in the Appendix.

Proposition 9 (Value of pre-analysis plan). Under (A) and (B) in Definition 7:

(a) If E is cheap (i.e., GAP(E) = 0), then E is planner-preferred to M;

(b) If GAP(E) > 1/c_d², then M is planner-preferred to E.

In Proposition 9, we show that an experiment (with a pre-analysis plan) is preferred over an observational study with possible manipulation (and no pre-analysis plan) if the cost of the experiment is sufficiently low. When the cost of the experiment is high (and therefore GAP(E) is large), the trade-off depends on (i) its cost C_E and (ii) the cost of data manipulation c_d. Clearly, if c_d = 0, then an experiment with a pre-analysis plan may be preferred. Intuitively, with a low cost of manipulation, the planner must publish with positive probability a larger set of unsurprising results, at a possibly large publication cost. Suppose instead 0 < c_d < ∞. Then an observational study with possible manipulation is preferred for a sufficiently high researcher's
cost. This is despite the cost of the experiment being private and paid by the researcher, not by the planner. The reason is that a sufficiently large experiment cost C_E increases the loss of the social planner, who must publish results that she would otherwise not publish under low experimentation costs. Different from (and complementary to) models where either no experimentation costs or no reputational costs occur (e.g., Kasy and Spiess, 2023; Spiess, 2024), these results illustrate trade-offs in the choice between unbiased and biased studies. A larger experiment cost may decrease the supply of research, making the planner prefer possible manipulation.

5.2 Registry of unsurprising results

A core theme is that if attention is costly, publication decisions must take into account the private research cost. It is natural to ask whether we can think of publication mechanisms that reward studies without paying such a cost of attention.

For instance, consider a scenario where the researcher conducts a (possibly) expensive study with cost C_E in the model in Section 3 (i.e., abstracting from manipulation for simplicity). Because attention is costly, we only want to publish sufficiently surprising results. Suppose that we implement the first-best policy, ignoring the researcher's costs. This takes the form of the threshold-crossing rule p⋆_E = 1{|X| ≥ γ*_E} described in Section 3.1. In our model in Section 3, however, if the experiment is costly, the researcher may choose the outside option. To guarantee implementability of p⋆_E, suppose now that we can reward a researcher whose results are not published in the journal by publishing them in a repository of unsurprising results. This repository may carry no attention cost c_p and guarantee a reward R to the researcher. The researcher's objective function upon conducting the experiment reads

E[v_{p⋆_E}(X(E), E)] = P(|X(E)| ≥ γ*_E) + R·P(|X(E)| ≤ γ*_E) − C_E.
When choosing R ≥ (C_E + 1)/P(|X(E)| ≥ γ*_E) − 1, the researcher is always indifferent between conducting the research study and not. Rewarding results through a "repository of unsurprising results," with a reward increasing in the cost of the experiment, guarantees that the first-best policy is implementable at no attention cost for the audience.

6 Conclusion

This paper studies how researchers' incentives shape the optimal design of the scientific process. Ignoring the researcher's incentives, it is optimal to publish the most surprising results (Frankel and Kasy, 2022). When researchers' incentives matter, we show that optimal publication rules depend on private costs of research and on incentives for research manipulation.

As a first exercise, we show that, in the absence of manipulation, the planner prefers experiments or observational studies with larger mean-squared error over sufficiently costly experiments. In medical studies, a pre-specified synthetic control group obtained from medical records (Jahanshahi et al., 2021; Food and Drug Administration, 2023) may be preferred over a sufficiently expensive placebo control group, despite the lack of randomization.

With manipulation, we show that it is optimal to (i) publish some unsurprising results and (ii) knowingly allow for manipulation (biased studies) at the margin. Observationally, the optimal policy would reduce the bunching of the findings (e.g., t-tests) around the publication cutoff. However, the optimal policy does not completely remove bunching.
Even when the planner can mandate a signal to enforce no manipulation, this may not be her preferred policy when the signal entails a larger research cost.

Our model disentangles the contributions of design choice and data manipulation to optimal publication decisions. Future research should study more complex communication strategies. For example, in contexts with pre-analysis plans, the planner may allow multiple signals to decrease the burden on the authors, or allow for the publication of non-prespecified findings. Similarly, researchers may report multiple findings in their study. As our results in Section 5 suggest, studying more complex action spaces can shed light on alternative mechanisms relevant to scientific communication.

This paper opens new questions at the intersection of econometrics and mechanism design. Future research should study the implications of some of these conclusions for empirical research. Our framework relies on parameters that capture costs and benefits for the researcher. In the absence of manipulation, such parameters can be learned using cost data for medical trials (e.g., Tetenov, 2016) and field experiments (e.g., Viviano et al., 2024). With manipulation, and the reputational costs associated with it, such parameters may be learned using information from meta-analyses through the distribution of submitted findings (different from, but in the spirit of, Andrews and Kasy, 2019).

References

Abadie, A. (2020). Statistical nonsignificance in empirical economics. American Economic Review: Insights 2(2), 193–208.

Allcott, H. (2015). Site selection bias in program evaluation. Quarterly Journal of Economics 130(3), 1117–1165.

Ana, J. (2004). The role of a general medical journal. BMJ 328(7439), 591.

Andrews, I. and M. Kasy (2019). Identification of and correction for publication bias. American Economic Review 109(8), 2766–2794.

Andrews, I. and J. M. Shapiro (2021). A model of scientific communication.
Econometrica 89(5), 2117–2142.

Banerjee, A., S. Chassang, and E. Snowberg (2017). Chapter 4 – Decision theoretic approaches to experiment design and external validity. In A. V. Banerjee and E. Duflo (Eds.), Handbook of Field Experiments, Volume 1 of Handbook of Economic Field Experiments, pp. 141–174. North-Holland.

Banerjee, A., E. Duflo, A. Finkelstein, L. F. Katz, B. A. Olken, and A. Sautmann (2020). In praise of moderation: Suggestions for the scope and use of pre-analysis plans for RCTs in economics. Working paper, National Bureau of Economic Research.

Banerjee, A. V., S. Chassang, S. Montero, and E. Snowberg (2020). A theory of experimenters: Robustness, randomization, and balance. American Economic Review 110(4), 1206–1230.

Bates, S., M. I. Jordan, M. Sklar, and J. A. Soloff (2022). Principal-agent hypothesis testing. arXiv:2205.06812.

Bates, S., M. I. Jordan, M. Sklar, and J. A. Soloff (2023). Incentive-theoretic Bayesian inference for collaborative science. arXiv:2307.03748.

Brodeur, A., N. Cook, and A. Heyes (2020). Methods matter: P-hacking and publication bias in causal analysis in economics. American Economic Review 110(11), 3634–3660.

Chassang, S., G. Padro I Miquel, and E. Snowberg (2012). Selective trials: A principal-agent approach to randomized controlled experiments. American Economic Review 102(4), 1279–1309.

DeMaria, A. N. (2022). The role of a medical journal. Structural
Heart 6(4), 100079.

Di Tillio, A., M. Ottaviani, and P. N. Sørensen (2021). Strategic sample selection. Econometrica 89(2), 911–953.

Elliott, G., N. Kudrin, and K. Wüthrich (2022). Detecting p-hacking. Econometrica 90(2), 887–906.

Food and Drug Administration (2023). Considerations for the design and conduct of externally controlled trials for drug and biological products. Available at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-design-and-conduct-externally-controlled-trials-drug-and-biological-products.

Frankel, A. and M. Kasy (2022). Which findings should be published? American Economic Journal: Microeconomics 14(1), 1–38.

Gechter, M. and R. Meager (2022). Combining experimental and observational studies in meta-analysis: A debiasing approach. Working paper, Pennsylvania State University and London School of Economics.

Glaeser, E. L. (2006). Researcher incentives and empirical methods. Working paper, National Bureau of Economic Research.

Grabowski, H., J. Vernon, and J. A. DiMasi (2002). Returns on research and development for 1990s new drug introductions. Pharmacoeconomics 20, 11–29.

Henry, E. and M. Ottaviani (2019). Research and the approval process: The organization of persuasion. American Economic Review 109(3), 911–955.

Hirano, K. and J. R. Porter (2009). Asymptotics for statistical treatment rules. Econometrica 77(5), 1683–1701.

Jahanshahi, M., K. Gregg, G. Davis, A. Ndu, V. Miller, J. Vockley, C. Ollivier, T. Franolic, and S. Sakai (2021). The use of external controls in FDA regulatory decision making. Therapeutic Innovation & Regulatory Science 55(5), 1019–1035.

Kasy, M. and J. Spiess (2023). Optimal pre-analysis plans: Statistical decisions subject to implementability. Working paper, University of Oxford and Stanford University.

Kitagawa, T. and A. Tetenov (2018). Who should be treated? Empirical welfare maximization methods for treatment choice. Econometrica 86(2), 591–616.

Kitagawa, T.
and P. Vu (2023). Optimal publication rules for evidence-based policy. Working paper, Brown University.

Libgober, J. (2022). False positives and transparency. American Economic Journal: Microeconomics 14(2), 478–505.

Manski, C. (2004). Statistical treatment rules for heterogeneous populations. Econometrica 72(4), 1221–1246.

Manski, C. F. and A. Tetenov (2016). Sufficient trial size to inform clinical practice. Proceedings of the National Academy of Sciences 113(38), 10518–10523.

McCloskey, A. and P. Michaillat (Forthcoming, 2024). Critical values robust to p-hacking. Review of Economics and Statistics.

Miguel, E. (2021). Evidence on research transparency in economics. Journal of Economic Perspectives 35(3), 193–214.

Milgrom, P. and I. Segal (2002). Envelope theorems for arbitrary choice sets. Econometrica 70(2), 583–601.

Mirrlees, J. A. (1971). An exploration in the theory of optimum income taxation. Review of Economic Studies 38(2), 175–208.

Myerson, R. B. (1981). Optimal auction design. Mathematics of Operations Research 6(1), 58–73.

Olken, B. A. (2015). Promises and perils of pre-analysis plans. Journal of Economic Perspectives 29(3), 61–80.

Popat, S., S. V. Liu, N. Scheuer, G. G. Hsu, A. Lockhart, S. V. Ramagopalan, F. Griesinger, and V. Subbiah (2022). Addressing challenges with real-world synthetic control arms to demonstrate the comparative effectiveness of pralsetinib in non-small cell lung cancer. Nature Communications 13(1), 3500.

Raices Cruz, I., M. C. Troffaes, J. Lindström, and U. Sahlin (2022). A robust Bayesian bias-adjusted random effects model for consideration of uncertainty about bias terms
in evidence synthesis. Statistics in Medicine 41(17), 3365–3379.

Rhys Bernard, D., G. Bryan, S. Chabé-Ferret, J. De Quidt, J. Fliegner, and R. Rathelot (2024). How much should we trust observational estimates? Accumulating evidence using randomized controlled trials with imperfect compliance. Working paper, Toulouse School of Economics.

Riveros, C., A. Dechartres, E. Perrodeau, R. Haneef, I. Boutron, and P. Ravaud (2013). Timing and completeness of trial results posted at clinicaltrials.gov and published in journals. PLoS Medicine 10(12), e1001566.

Rosenzweig, M. R. and C. Udry (2020). External validity in a stochastic world: Evidence from low-income countries. Review of Economic Studies 87(1), 343–381.

Savage, L. J. (1951). The theory of statistical decision. Journal of the American Statistical Association 46(253), 55–67.

Shinohara, K., A. Tajika, H. Imai, N. Takeshima, Y. Hayasaka, and T. A. Furukawa (2015). Protocol registration and selective outcome reporting in recent psychiatry trials: New antidepressants and cognitive behavioural therapies. Acta Psychiatrica Scandinavica 132(6), 489–498.

Spiess, J. (2024). Optimal estimation when researcher and social preferences are misaligned. Working paper, Stanford University.

Tetenov, A. (2012). Statistical treatment choice based on asymmetric minimax regret criteria. Journal of Econometrics 166(1), 157–165.

Tetenov, A. (2016). An economic theory of statistical testing. Working paper, University of Bristol.

Thompson, S. G. and J. A. Barber (2000). How should cost data in pragmatic randomised trials be analysed? BMJ 320(7243), 1197–1200.

Thorlund, K., L. Dron, J. J. Park, and E. J. Mills (2020). Synthetic and external controls in clinical trials – a primer for researchers. Clinical Epidemiology, 457–467.

Viviano, D., K. Wuthrich, and P. Niehaus (2024). A model of multiple hypothesis testing. arXiv:2104.13367.

Wald, A. (1950). Statistical Decision Functions. Wiley.

Williams, C. (2021).
Preregistration and incentives. Working paper, Durham University.

Wong, H.-H., A. Jessup, A. Sertkaya, A. Birkenbach, A. Berlind, and J. Eyraud (2014). Examination of clinical trial costs and barriers for drug development final. Office of the Assistant Secretary for Planning and Evaluation, US Department of Health & Human Services, 1–92.

Yin, X., P. S. Mishra-Kalyan, R. Sridhara, M. D. Stewart, E. A. Stuart, and R. C. Davi (2022). Exploring the potential of external control arms created from patient level data: a case study in non-small cell lung cancer. Journal of Biopharmaceutical Statistics 32(1), 204–218.

Yoder, N. (2022). Designing incentives for heterogeneous researchers. Journal of Political Economy 130(8), 2018–2054.

A Omitted Proofs

We denote by Φ(·) the cumulative distribution function of a standard normal distribution and by φ(·) its probability density function. We use the notation y = O(x) to indicate that y ≤ Kx for a finite constant K. We define the function Λ(t) = Φ⁻¹(1 − t/2)·φ(Φ⁻¹(1 − t/2)).

A.1 Preliminary Lemmata

The following lemma provides an approximation to a function of the cost that will be used in the characterization of the planner's loss function for expensive designs.

Lemma 1. Let Λ(t) = Φ⁻¹(1 − t/2)·φ(Φ⁻¹(1 − t/2)). Then for t ∈ (0,1), the following hold:

(i) Λ′(t) ≥ −1/2 for all t ∈ (0,1);
(ii) Λ(t) ≤ (1/2)(1 − t);
(iii) Λ(t) ≥ (1/2)(1 − t) − (π/6)(1 − t)³;
(iv) 2Λ(t) + t is monotonically increasing in t;
(v) (1/2)(1 − t)·[Φ⁻¹(1 − t/2)]² ≥ 1 − t − 2Λ(t).

Proof of Lemma 1. We prove each claim separately.
Proof of (i). Define z(t) = Φ⁻¹(1 − t/2), so that Λ(t) = z(t)·φ(z(t)). By the chain rule and the derivative of the inverse CDF, d/dx Φ⁻¹(x) = 1/φ(Φ⁻¹(x)), we get z′(t) = −1/(2φ(z(t))). Therefore, it follows that

Λ′(t) = z′(t)·φ(z(t)) + z(t)·φ′(z(t))·z′(t).

Recall that φ′(x) = −x·φ(x) for the standard normal PDF. Substituting back,

Λ′(t) = z′(t)·φ(z(t)) + z(t)·(−z(t)·φ(z(t)))·z′(t) = z′(t)·φ(z(t))·(1 − z(t)²).

But from above, z′(t)·φ(z(t)) = −1/2. Therefore,

Λ′(t) = −(1/2)(1 − z(t)²) = (1/2)(z(t)² − 1).

The result is immediate since z(t)² ≥ 0.

Proof of (ii). Let t = Φ⁻¹(1 − C_E/2), so Λ(C_E) = t·φ(t). Since φ is decreasing on [0, ∞), we have φ(t) ≤ φ(s) for all s ∈ [0, t]. By the Mean Value Theorem, it follows that

Λ(C_E) = t·φ(t) ≤ Φ(t) − Φ(0) = 1 − C_E/2 − 1/2 = (1/2)(1 − C_E).

Proof of (iii). Define (as an implicit function of t) ε = 1 − t (0 < ε < 1), and let F(ε) = Λ(1 − ε) and x(ε) = Φ⁻¹(1/2 + ε/2) > 0. Because x(0) = 0, x is real analytic and odd in ε, and φ(x) is real analytic and even in x, h(ε) = x(ε)·φ(x(ε)) is an odd power series in ε. A simple calculation shows that

h(ε) = (1/2)ε − (π/6)ε³ + R₅(ε),

where h′(0) = 1/2, the coefficient −π/6 is the third-order term, and R₅(ε) ≥ 0 since, by induction, the remaining odd-order coefficients are strictly positive. Thus h(ε) ≥ (1/2)ε − (π/6)ε³ for ε ∈ (0,1). This completes the claim.

Proof of (iv). Note that d/dt(2Λ(t) + t) = 2Λ′(t) + 1 ≥ 0, where the inequality follows from (i), completing the proof of (iv).

Proof of (v). Using the definition of z(t) = Φ⁻¹(1 − t/2) above, the desired inequality becomes

K(t) := (1/2)(1 − t)·z(t)² − [1 − t − 2z(t)·φ(z(t))] ≥ 0.

First, note that K(t) → ∞ as t → 0⁺ by a direct calculation. In addition, for t = 1 we have K(1) = 0. Therefore, it suffices to show that K′(t) ≤ 0 for all t ∈ (0,1). With some algebra, we can write

K′(t) = (1/2)·[z(t)² − (1 − t)·z(t)/φ(z(t))].

Note that by (ii), we have z(t)·φ(z(t)) ≤ (1/2)(1 − t) ≤ 1 − t. Therefore, because z(t) and φ(z(t)) are positive for t ∈ (0,1), z(t)² ≤ (1 − t)·z(t)/φ(z(t)). This proves the desired claim.
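The bounds in Lemma 1 are easy to spot-check numerically with the standard normal CDF and quantile function from the Python standard library (a check on a grid, not a proof):

```python
from math import pi
from statistics import NormalDist

ND = NormalDist()  # standard normal

def Lam(t):
    """Lambda(t) = Phi^{-1}(1 - t/2) * phi(Phi^{-1}(1 - t/2))."""
    z = ND.inv_cdf(1 - t / 2)
    return z * ND.pdf(z)

grid = [i / 1000 for i in range(1, 1000)]  # t in (0, 1)
for t in grid:
    assert Lam(t) <= (1 - t) / 2                            # Lemma 1(ii)
    assert Lam(t) >= (1 - t) / 2 - (pi / 6) * (1 - t) ** 3  # Lemma 1(iii)

# Lemma 1(iv): 2*Lambda(t) + t is increasing in t.
vals = [2 * Lam(t) + t for t in grid]
assert all(a <= b for a, b in zip(vals, vals[1:]))
print("Lemma 1 (ii)-(iv) hold on the grid")
```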
The following lemma provides a simple decomposition of the social planner's loss function conditional on the realized statistic $X$.

Lemma 2 (Loss function). Suppose that Assumption 1 holds. Then
$$\mathbb{E}\Big[L_p\big(X(\Delta), \Delta, \theta_0\big)\,\Big|\,X(\Delta)\Big] = \Big[c_p - \frac{\eta_0^4}{(S_\Delta^2 + \eta_0^2)^2}\,X(\Delta)^2\Big]\,p(X, \Delta) + \mathbb{E}\big[\theta_0^2 \mid X(\Delta)\big].$$

Proof of Lemma 2. Recall that under Assumption 1, $\beta_\Delta = 0$. Using the fact that $\mathbb{E}[\theta_0 \mid X(\Delta)] = \frac{\eta_0^2}{S_\Delta^2 + \eta_0^2}\,X(\Delta)$, we can write
$$\mathbb{E}\Big[L_p\big(X(\Delta), \Delta, \theta_0\big)\,\Big|\,X, \beta_\Delta\Big] = \Big[\Big(\frac{\eta_0^2}{S_\Delta^2 + \eta_0^2}\Big)^2 \beta_\Delta^2 + c_p - \Big(\frac{\eta_0^2}{S_\Delta^2 + \eta_0^2}\Big)^2 \big(X(\Delta) - \beta_\Delta\big)^2\Big]\,p(X, \Delta) + \mathbb{E}[\theta_0^2 \mid X]$$
$$\qquad - 2\,\frac{\eta_0^2}{S_\Delta^2 + \eta_0^2}\,\beta_\Delta\,\mathbb{E}[\theta_0 \mid X(\Delta)]\,p(X, \Delta) + 2\,\frac{\eta_0^2\big(X(\Delta) - \beta_\Delta\big)}{S_\Delta^2 + \eta_0^2}\,\frac{\eta_0^2}{S_\Delta^2 + \eta_0^2}\,\beta_\Delta\,p(X, \Delta)$$
$$= \Big[c_p - \frac{\eta_0^4}{(S_\Delta^2 + \eta_0^2)^2}\,X(\Delta)^2\Big]\,p(X, \Delta) + \mathbb{E}\big[\theta_0^2 \mid X(\Delta)\big].$$

The following lemma provides an exact characterization of the social planner's loss function for a publication rule corresponding to a threshold rule.

Lemma 3 (Loss for threshold rule). Suppose that Assumption 1 holds. Then for any rule $p_\Delta = 1\Big\{\frac{|X(\Delta)|}{\sqrt{S_\Delta^2 + \eta_0^2}} \ge t_\Delta\Big\}$, for a given threshold $t_\Delta$,
$$\mathbb{E}\big[L_{p_\Delta}\big(X(\Delta), \Delta, \theta_0\big)\big] = \eta_0^2 + 2\Phi(-t_\Delta)\Big[c_p - \frac{\eta_0^4}{S_\Delta^2 + \eta_0^2}\Big(1 + \frac{t_\Delta\,\phi(t_\Delta)}{1 - \Phi(t_\Delta)}\Big)\Big].$$

Proof of Lemma 3. First, we write
$$\mathbb{E}\Big[L_{p^\star_\Delta}\big(X(\Delta), \Delta, \theta_0\big)\,\Big|\,X(\Delta)\Big] = \mathbb{E}[\theta_0^2 \mid X] - \Big[\Big(\frac{\eta_0^2}{S_\Delta^2 + \eta_0^2}\Big)^2 X^2 - c_p\Big]\,1\Big\{\frac{X^2}{S_\Delta^2 + \eta_0^2} \ge t_\Delta^2\Big\}.$$
We can write
$$\mathbb{E}\Big[\Big(\frac{\eta_0^2}{S_\Delta^2 + \eta_0^2}\Big)^2 X^2\,1\Big\{\frac{X^2}{S_\Delta^2 + \eta_0^2} \ge t_\Delta^2\Big\}\Big] = \underbrace{\mathbb{E}\Big[\Big(\frac{\eta_0^2}{S_\Delta^2 + \eta_0^2}\Big)^2 X^2\,\Big|\,\frac{|X|}{\sqrt{S_\Delta^2 + \eta_0^2}} \ge t_\Delta\Big]}_{(I)} \times \underbrace{\mathbb{P}\Big(\frac{|X|}{\sqrt{S_\Delta^2 + \eta_0^2}} \ge t_\Delta\Big)}_{(II)}.$$
Recall that $X \sim \mathcal{N}(0, \eta_0^2 + S_\Delta^2)$. By the symmetry of the Gaussian distribution, it is easy to show that
$$(I) = \mathbb{E}\Big[\Big(\frac{\eta_0^2}{S_\Delta^2 + \eta_0^2}\Big)^2 X^2\,\Big|\,\frac{X}{\sqrt{S_\Delta^2 + \eta_0^2}} \ge t_\Delta\Big].$$
Using properties of the (truncated) Gaussian distribution, we can write
$$(I) = \frac{\eta_0^4}{S_\Delta^2 + \eta_0^2}\Big[1 + \frac{t_\Delta\,\phi(t_\Delta)}{1 - \Phi(t_\Delta)}\Big], \qquad (II) = 2\Phi(-t_\Delta),$$
and the lemma follows.

The following lemma is a simple restatement of $p^\star_\Delta$ in Proposition 1.

Lemma 4. For $p^\star_\Delta$ in Proposition 1, it follows that $p^\star_\Delta = 1\Big\{\frac{|X|}{\sqrt{S_\Delta^2 + \eta_0^2}} \ge t_\Delta\Big\}$, where
$$t_\Delta = \frac{\sqrt{S_\Delta^2 + \eta_0^2}}{\eta_0^2}\,\sqrt{c_p - \max\{\lambda(C_\Delta), 0\}}, \qquad \lambda(C_\Delta) = c_p - \Big(\frac{\eta_0^2}{\sqrt{S_\Delta^2 + \eta_0^2}}\,\Phi^{-1}(1 - C_\Delta/2)\Big)^2.$$

Proof. The proof is immediate from rearrangement.

The following lemma provides an exact characterization of the planner's loss function in the presence of an expensive design.

Lemma 5 (Loss for expensive experiments). Suppose that Assumption 1 holds. Suppose that $\lambda(C_\Delta) > 0$, where $\lambda(C_\Delta)$ is as defined in Lemma 4. Then
$$\mathbb{E}\big[L_{p^\star_\Delta}\big(X(\Delta), S_\Delta, \theta_0\big)\big] = \eta_0^2 + C_\Delta\Big[c_p - \frac{\eta_0^4}{S_\Delta^2 + \eta_0^2}\Big] - 2\,\frac{\eta_0^4}{S_\Delta^2 + \eta_0^2}\,\Phi^{-1}(1 - C_\Delta/2)\,\phi\big(\Phi^{-1}(1 - C_\Delta/2)\big)$$
$$= \eta_0^2 + C_\Delta c_p - \mathrm{PostVarRed}(\Delta)\,\big(C_\Delta + 2\Lambda(C_\Delta)\big).$$

Proof of Lemma 5. The proof follows directly from Lemma 3, noting that when $\lambda(C_\Delta) > 0$ the constraint is binding, and $C_\Delta/2 = \Phi(-t_\Delta)$.

Lemma 6 (Approximate loss for large costs). Suppose that Assumption 1 holds. Suppose that $\lambda(C_\Delta) > 0$, where $C_\Delta > 0$ and $\lambda(C_\Delta)$ is as defined in Lemma 4. Then
$$\mathbb{E}\big[L_{p^\star_\Delta}\big(X(\Delta), S_\Delta, \theta_0\big)\big] = \eta_0^2 + C_\Delta\Big[c_p - \frac{\eta_0^4}{S_\Delta^2 + \eta_0^2}\Big] - \frac{\eta_0^4}{S_\Delta^2 + \eta_0^2}(1 - C_\Delta) + O\big(\eta_0^2(1 - C_\Delta)^3\big)$$
$$= \eta_0^2 + C_\Delta c_p - \mathrm{PostVarRed}(\Delta) + O\big(\eta_0^2(1 - C_\Delta)^3\big).$$

Proof of Lemma 6. The lemma follows directly from Lemma 5 and Lemma 1(ii) and (iii), since $\mathrm{PostVarRed}(\Delta) \le \eta_0^2$.
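The only distributional fact used in the proof of Lemma 3 is the truncated second moment of a Gaussian: for $Z \sim \mathcal{N}(0,1)$, $\mathbb{E}[Z^2\,1\{|Z| \ge t\}] = 2\big(t\,\phi(t) + 1 - \Phi(t)\big)$, which is exactly the product $(I) \times (II)$ after rescaling. A quick stdlib-only check by numerical quadrature (helper names below are ours, not the paper's):

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def tail_second_moment(t, n=100000, zmax=12.0):
    # ∫_t^zmax z² φ(z) dz by the midpoint rule, doubled for both tails
    h = (zmax - t) / n
    s = sum((t + (i + 0.5) * h) ** 2 * phi(t + (i + 0.5) * h) for i in range(n))
    return 2 * s * h

for t in (0.5, 1.0, 2.0):
    closed = 2 * (t * phi(t) + 1 - Phi(t))   # claimed closed form
    assert abs(tail_second_moment(t) - closed) < 1e-6
print("truncated-moment identity verified")
```

Rescaling by $\big(\eta_0^2/(S_\Delta^2 + \eta_0^2)\big)^2 (S_\Delta^2 + \eta_0^2)$ and factoring out $2\Phi(-t) = 2(1 - \Phi(t))$ recovers the $(I)$ and $(II)$ terms in the proof.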
A.2 Proofs of the main results

A.2.1 Proof of Proposition 1

Using Lemma 2 and the Lagrangian formulation with multiplier $\lambda$, the objective reads
$$\int \Big[c_p - \lambda - \Big(\frac{\eta_0^2}{S_E^2 + \eta_0^2}\Big)^2 x^2\Big]\,p(x)\,dF_X(x) + \lambda C_E + \eta_0^2,$$
where $F_X = \mathcal{N}(0, S_E^2 + \eta_0^2)$ and $\lambda \ge 0$ is the Lagrange multiplier. We can solve and obtain
$$p_\lambda(x) = 1\Big\{\Big(\frac{\eta_0^2}{S_E^2 + \eta_0^2}\,x\Big)^2 \ge c_p - \lambda\Big\}$$
as the minimizer of the above expression. From complementary slackness, whenever the constraint is binding, $\lambda$ solves $\int p_\lambda(x)\,dF_X(x) = C_E$, so that (since $X \sim \mathcal{N}(0, S_E^2 + \eta_0^2)$)
$$2\Phi\Big(-\sqrt{c_p - \lambda}\,\frac{\sqrt{S_E^2 + \eta_0^2}}{\eta_0^2}\Big) = C_E \quad\Rightarrow\quad \lambda = c_p - \Big(\frac{\eta_0^2}{\sqrt{S_E^2 + \eta_0^2}}\,\Phi^{-1}(1 - C_E/2)\Big)^2.$$
When the constraint is not binding, $\lambda = 0$ by complementary slackness. Under Definition 2, an experiment is expensive when $\lambda \ge 0$ and cheap otherwise. Our result follows directly from simple rearrangement of $p_\lambda$ under these two scenarios using Lemma 4.

A.2.2 Proof of Proposition 2

Using Lemma 2, we can write, for any cheap design, the corresponding loss evaluated at $p^*_\Delta = 1\Big\{c_p - \Big(\frac{\eta_0^2}{S_E^2 + \eta_0^2}\Big)^2 x^2 \le 0\Big\}$ as
$$\int \Big(c_p - \Big(\frac{\eta_0^2}{S_E^2 + \eta_0^2}\Big)^2 x^2\Big)\,1\Big\{c_p - \Big(\frac{\eta_0^2}{S_E^2 + \eta_0^2}\Big)^2 x^2 \le 0\Big\}\,dF_X(x) + \eta_0^2,$$
which is clearly always weakly smaller than $\eta_0^2$ by construction of $p^*_\Delta$.

A.2.3 Proof of Proposition 3

We will use the following lemma to prove the desired claim.

Lemma 7. (a) For each unbiased-on-average design $\Delta$ with $C_\Delta > 0$, there exists a threshold $c^\star_p(\Delta, \eta_0) > 0$ such that $\Delta$ is worthwhile if and only if $c_p \le c^\star_p(\Delta, \eta_0)$.
(b) This threshold satisfies
$$\frac{\mathrm{PostVarRed}(\Delta)}{C_\Delta} \ \ge\ c^\star_p(\Delta, \eta_0) \ \ge\ \frac{\mathrm{PostVarRed}(\Delta) - \eta_0^2\,\frac{\pi}{6}(1 - C_\Delta)^3}{C_\Delta}.$$

Proof. We prove (a) and (b) separately.

Proof of (a). Let $\Lambda(C_\Delta) = \Phi^{-1}(1 - C_\Delta/2)\,\phi(\Phi^{-1}(1 - C_\Delta/2))$. From Lemma 5, if $\Delta$ is expensive, then
$$\mathcal{L}^*_\Delta = \eta_0^2 + C_\Delta\big(c_p - \mathrm{PostVarRed}(\Delta)\big) - 2\,\mathrm{PostVarRed}(\Delta)\,\Lambda(C_\Delta). \quad (6)$$
Consider a threshold defined as follows:
$$c^\star_p = \mathrm{PostVarRed}(\Delta)\,\frac{C_\Delta + 2\Lambda(C_\Delta)}{C_\Delta}. \quad (7)$$
It follows directly from Equation (6) that $\Delta$ is worthwhile if and only if $c_p < c^\star_p$.

Proof of (b). From Lemma 1 (since $\mathrm{PostVarRed}(\Delta) \le \eta_0^2$),
$$\frac{\mathrm{PostVarRed}(\Delta)}{C_\Delta} \ \ge\ \mathrm{PostVarRed}(\Delta)\,\frac{C_\Delta + 2\Lambda(C_\Delta)}{C_\Delta} \ \ge\ \frac{\mathrm{PostVarRed}(\Delta) - \eta_0^2\,\frac{\pi}{6}(1 - C_\Delta)^3}{C_\Delta}.$$
This completes the proof.

Proposition 3 is a direct corollary of Lemma 7.

A.2.4 Proof of Proposition 4

Proposition 4 follows directly from the fact that for any cheap design the individual rationality constraint is non-binding. Therefore, $\mathcal{L}^\star_\Delta$ for a cheap design with cost $C_\Delta$ equals $\mathcal{L}^\star_{\Delta'}$ for the same design with cost $C_{\Delta'} = 0$. It directly follows that the social planner's objective in Definition 1 is increasing in $S_\Delta^2$, since increasing $S_\Delta^2$ decreases $\mathrm{PostVarRed}(\Delta)$. Therefore $\mathcal{L}^\star_E - \mathcal{L}^\star_O \le 0$ if $S_E^2 \le S_O^2$ (which follows by Assumption 1) when $E$ is a cheap design.

A.2.5 Proof of Proposition 5

From Lemma 6, for an expensive design $\Delta$ we can write
$$\mathcal{L}^*_\Delta = \eta_0^2 + C_\Delta c_p - \mathrm{PostVarRed}(\Delta) + O\big(\eta_0^2(1 - C_\Delta)^3\big).$$
Therefore, it follows that
$$\mathcal{L}^*_E - \mathcal{L}^*_O = (C_E - C_O)\,c_p - \big(\mathrm{PostVarRed}(E) - \mathrm{PostVarRed}(O)\big) + O\big(\eta_0^2(1 - C_E)^3\big) + O\big(\eta_0^2(1 - C_O)^3\big).$$
The proof completes from simple rearrangement.

A.2.6 Proof of Theorem 1

We prove (a) and (b) separately, first for $O$ being an expensive design. We then return to $O$ not being an expensive design at the end of the proof. Recall that $\mathrm{PostVarRed}(E) \ge \mathrm{PostVarRed}(O)$ under Assumption 1.
Proof of (a). Take two expensive designs $E$ and $O$. We can write
$$\mathcal{L}^*_E - \mathcal{L}^*_O = C_E\big[c_p - \mathrm{PostVarRed}(E)\big] - 2\,\mathrm{PostVarRed}(E)\,\Lambda(C_E) - C_O\big[c_p - \mathrm{PostVarRed}(O)\big] + 2\,\mathrm{PostVarRed}(O)\,\Lambda(C_O).$$
We add and subtract $C_E\,\mathrm{PostVarRed}(O)$ from the expression, so that we obtain
$$\mathcal{L}^*_E - \mathcal{L}^*_O = \underbrace{c_p C_E - c_p C_O - C_E\,\mathrm{PostVarRed}(E) + C_E\,\mathrm{PostVarRed}(O)}_{(I)}$$
$$\qquad \underbrace{-\,2\,\mathrm{PostVarRed}(E)\,\Lambda(C_E) + 2\,\mathrm{PostVarRed}(O)\,\Lambda(C_O) - C_E\,\mathrm{PostVarRed}(O) + C_O\,\mathrm{PostVarRed}(O)}_{(II)}.$$
Under (a), $(I) \le 0$, so it suffices to show that $(II) \le 0$ for (a) to hold. Using the fact that $\mathrm{PostVarRed}(E) \ge \mathrm{PostVarRed}(O)$, we can write
$$(II) \le -2\,\mathrm{PostVarRed}(O)\,\Lambda(C_E) + 2\,\mathrm{PostVarRed}(O)\,\Lambda(C_O) - C_E\,\mathrm{PostVarRed}(O) + C_O\,\mathrm{PostVarRed}(O)$$
$$\le \mathrm{PostVarRed}(O)\,\big[2\Lambda(C_O) + C_O - 2\Lambda(C_E) - C_E\big].$$
Now note that under Assumption 1, $C_E \ge C_O$. By Lemma 1(iv), $2\Lambda(t) + t$ is monotonically increasing in $t$ for $t \in (0,1)$. Therefore, $2\Lambda(C_O) + C_O - 2\Lambda(C_E) - C_E \le 0$, completing the proof of (a), since $\mathrm{PostVarRed}(O) \ge 0$ by definition of $\mathrm{PostVarRed}(O)$.

Proof of (b). Recall that for an expensive experiment, by Definition 2, we have $C_O \ge \mathbb{P}(|X(O)| \ge \gamma^\star_O)$, where $\gamma^\star_O = \frac{S_O^2 + \eta_0^2}{\eta_0^2}\sqrt{c_p}$. In addition, it directly follows from rearrangement that
$$\mathbb{P}\big(|X(O)| \ge \gamma^\star_O\big) = \mathbb{P}\Big(\frac{|X(O)|}{\sqrt{S_O^2 + \eta_0^2}} \ge \sqrt{\frac{c_p}{\mathrm{PostVarRed}(O)}}\Big) = 2\Big(1 - \Phi\Big(\sqrt{\frac{c_p}{\mathrm{PostVarRed}(O)}}\Big)\Big),$$
since $X(O) \sim \mathcal{N}(0, \eta_0^2 + S_O^2)$ (unconditional on $\theta_0$). Denote $C^*_O = 2\big(1 - \Phi\big(\sqrt{c_p/\mathrm{PostVarRed}(O)}\big)\big)$. Note that by assumption $C_O \ge C^*_O$. It follows that the following holds:
$$\frac{c_p}{\mathrm{PostVarRed}(O)} = \Big[\Phi^{-1}\Big(1 - \frac{C^*_O}{2}\Big)\Big]^2 \ge \Big[\Phi^{-1}\Big(1 - \frac{C_O}{2}\Big)\Big]^2, \quad (8)$$
since $C_O \ge C^*_O$ and $\Phi^{-1}(1 - \frac{C_O}{2})$ is decreasing in $C_O$. We next prove our statement. We can write
$$\mathcal{L}^*_E - \mathcal{L}^*_O = C_E c_p - C_O c_p - \big(2\Lambda(C_E) + C_E\big)\,\mathrm{PostVarRed}(E) + \big(2\Lambda(C_O) + C_O\big)\,\mathrm{PostVarRed}(O)$$
$$\ge C_E c_p - C_O c_p - \mathrm{PostVarRed}(E) + \mathrm{PostVarRed}(O) + \big(2\Lambda(C_O) + C_O - 1\big)\,\mathrm{PostVarRed}(O),$$
where we used the fact that $2\Lambda(C_E) + C_E \le 1$ from Lemma 1(ii). Therefore, to prove the statement, it suffices to prove that the following holds:
$$\big(2\Lambda(C_O) + C_O - 1\big)\,\mathrm{PostVarRed}(O) + \frac{1 + C_O}{2}\,c_p - C_O c_p = \big(2\Lambda(C_O) + C_O - 1\big)\,\mathrm{PostVarRed}(O) + \frac{1}{2}c_p - \frac{C_O c_p}{2} \ge 0. \quad (9)$$
Let $g(C_O) = 2\Lambda(C_O) + C_O$. To prove Equation (9), it suffices to show that
$$\frac{(1 - C_O)\,c_p}{2\,\mathrm{PostVarRed}(O)} \ge 1 - g(C_O).$$
From Equation (8), we have
$$\frac{(1 - C_O)\,c_p}{2\,\mathrm{PostVarRed}(O)} \ge \frac{1 - C_O}{2}\Big[\Phi^{-1}\Big(1 - \frac{C_O}{2}\Big)\Big]^2.$$
From Lemma 1(v), we have $\frac{1 - C_O}{2}\big[\Phi^{-1}\big(1 - \frac{C_O}{2}\big)\big]^2 \ge 1 - g(C_O)$, completing the proof.

Proof for $O$ being a cheap design. Whenever $O$ is a cheap design, $\mathcal{L}^\star_O = \mathcal{L}^\star_{O'}$, where $O'$ is a design with $S_{O'}^2 = S_O^2$ and $C_{O'} = \mathbb{P}(|X(O)| \ge \gamma^\star_O)$. This follows immediately from Definition 1, since for a cheap design $C_O$ is non-binding. Therefore, $\mathcal{L}^\star_O$ is equivalent to the loss of a different design $O'$ whose constraint is exactly binding under Definition 1. We can therefore repeat the proof verbatim, comparing $\mathcal{L}^\star_E$ to $\mathcal{L}^\star_{O'}$, where $O'$ is an expensive design with cost $C_{O'} = \mathbb{P}(|X(O)| \ge \gamma^\star_O)$.

A.2.7 Proof of Theorem 2

Any optimal design is an expensive design. We first claim that within the class of designs $\mathcal{D}$, any cheap design is weakly dominated by at least one expensive design. Here is the proof of the claim. Under Definition 2, following the proof of Proposition 1, it follows that the constraint is non-binding for all cheap designs and binding for expensive designs only. In addition, for any cheap design, $C_\Delta$ does not affect the planner's objective.
By Proposition 1, any design $\Delta$ that satisfies $C_\Delta = \mathbb{P}(|X(O)| \ge \gamma^*_\Delta)$ is an expensive design, and the loss $\mathcal{L}^*_\Delta$ equals the loss of any cheap design with a publication rule as in Proposition 1. Therefore, we can always find an expensive design (such as a design that makes the individual rationality constraint exactly binding for the researcher) that weakly dominates any cheap design.

Upper bound on the objective function. For any expensive design, under Lemma 1(ii), we can write
$$\mathcal{L}^*_\Delta = \eta_0^2 + C_\Delta c_p - \mathrm{PostVarRed}(\Delta)\,\big(C_\Delta + 2\Lambda(C_\Delta)\big) \le \eta_0^2 + C_\Delta c_p - \mathrm{PostVarRed}(\Delta)\,C_\Delta. \quad (10)$$
In addition, for any expensive worthwhile design $\Delta' \in \mathcal{D}$ with cost $C_{\Delta'}$, it directly follows that $\mathcal{L}_{\Delta'} \ge c_p\,\mathbb{E}[p(X, \Delta')] = C_{\Delta'} c_p \ge c_f c_p$. Next, we minimize the upper bound in Equation (10). Because $\mathcal{D}$ contains worthwhile experiments, for any design $\Delta' \in \mathcal{D}$, $\mathcal{L}^*_{\Delta'} \ge c_f c_p$.

Optimal solution to the upper bound. Set $f(x) = \frac{c_p c_v}{x} - \frac{\eta_0^4}{x + \eta_0^2}$. Taking first-order conditions, we obtain that the function is minimized at
$$x^\star = \frac{\sqrt{c_p c_v}\,\eta_0^2}{\eta_0^2 - \sqrt{c_p c_v}},$$
assuming $\eta_0^2 > \sqrt{c_p c_v}$. Plugging the solution for $S_\Delta^2$ into the bound yields
$$\mathcal{L}^*_\Delta \le \frac{c_p c_v}{\eta_0^2} + c_p c_f.$$
Therefore, it follows that for $S_\Delta^2 = x^*$ at the optimal solution,
$$\mathcal{L}^*_\Delta - \min_{\Delta' \in \mathcal{D}} \mathcal{L}^*_{\Delta'} \le \frac{c_p c_v}{\eta_0^2}.$$

Difference between optimal and approximate solution. Take $x_1 = \sqrt{c_p c_v}$; then
$$f(x_1) = f\big(\sqrt{c_p c_v}\big) = \frac{c_p c_v}{\sqrt{c_p c_v}} - \frac{\eta_0^4}{\sqrt{c_p c_v} + \eta_0^2} = \sqrt{c_p c_v} - \frac{\eta_0^4}{\eta_0^2 + \sqrt{c_p c_v}}.$$
Hence, with simple rearrangement,
$$f\big(\sqrt{c_p c_v}\big) - f(x^*) = \frac{(c_p c_v)^{3/2}}{\eta_0^2\,\big(\eta_0^2 + \sqrt{c_p c_v}\big)} \le \frac{(c_p c_v)^{3/2}}{\eta_0^4}.$$
The proof is complete.

A.2.8 Proof of Theorem 3

Let $X_0 = X(0)$ denote the type of the researcher. Note that
$$\theta_0 \mid X_0 \sim \mathcal{N}\Big(\frac{\eta_0^2}{\eta_0^2 + S_E^2}\,X_0,\ \frac{S_E^2\,\eta_0^2}{S_E^2 + \eta_0^2}\Big).$$
Hence, writing $\omega = \frac{\eta_0^2}{\eta_0^2 + S_E^2}$, the planner's expected loss conditional on $X_0$, if the researcher chooses a nontrivial design $\Delta$ with publication probability $p$, is
$$\big[\omega^2\beta_\Delta^2 + c_p\big]\,p + \omega^2 X_0^2\,(1 - p) + \omega^2 S_E^2 = \big[\omega^2(\beta_\Delta^2 - X_0^2) + c_p\big]\,p + \omega^2(X_0^2 + S_E^2).$$
Moreover, if the researcher chooses the trivial design, then the planner's expected loss conditional on $X_0$ is $\omega^2(X_0^2 + S_E^2)$. For $0 \le v \le 1$, let
$$L^*(X_0, v) = \min_{\substack{(p,\, \beta_\Delta) \in [0,1] \times \mathbb{R} \\ p - c_d|\beta_\Delta| = v}} \big[\omega^2(\beta_\Delta^2 - X_0^2) + c_p\big]\,p \quad (11)$$
denote the minimum expected loss conditional on $X_0$ generated by a nontrivial design and publication probability that delivers utility exactly $v - C_0$ to the researcher.

Lemma 8. (a) If $|X_0| \ne \gamma^*$, then there is a unique optimizer $(\tilde{p}(X_0, v), \beta_\Delta(X_0, v))$ in (11), given by
$$\tilde{p}(X_0, v) = \begin{cases} v & \text{if } |X_0| < \gamma^*, \\[4pt] \min\Big\{1,\ \dfrac{2v + \sqrt{v^2 + 3c_d^2\,\big(X_0^2 - (\gamma^*)^2\big)}}{3}\Big\} & \text{if } |X_0| > \gamma^*, \end{cases}$$
and $|\beta_\Delta(X_0, v)| = \frac{\tilde{p}(X_0, v) - v}{c_d}$.
(b) If $|X_0| > \gamma^*$ (resp. $|X_0| = \gamma^*$, $|X_0| < \gamma^*$), then $L^*(X_0, v)$ is negative (resp. zero, positive) for $v > 0$, and has negative (resp. zero, positive) derivative in $v$ over $(0,1)$.
(c) $L^*(X_0, v)$ has negative derivative in $X_0$ over $(0, \gamma^*) \cup (\gamma^*, \infty)$.

Proof. Writing $|\beta_\Delta| = \frac{p - v}{c_d}$, note that (11) can be written equivalently as
$$L^*(X_0, v) = \min_{p \in [0,1],\ p \ge v} \Big\{\Big[\omega^2\Big(\frac{p - v}{c_d}\Big)^2 - \omega^2 X_0^2 + c_p\Big]\,p\Big\}. \quad (12)$$
Noting that $\gamma^* = \frac{1}{\omega}\sqrt{c_p}$, we divide into cases based on the value of $|X_0|$ to complete the proof.

• Case 1: $|X_0| > \gamma^*$. In this case, we claim that for $0 < v < 1$ the quantity $L^*(X_0, v)$ is the optimum of the relaxed problem
$$L^*(X_0, v) = \min_{p \in [0,1]} \Big\{\Big[\omega^2\Big(\frac{p - v}{c_d}\Big)^2 - \omega^2 X_0^2 + c_p\Big]\,p\Big\}, \quad (13)$$
and moreover that all optimizers $\tilde{p}(X_0, v)$ satisfy $\tilde{p}(X_0, v) > v$. Indeed, taking $p = v$ in (13), we can see that the right-hand side is negative.
It follows that at an optimum in (13), we must have
$$\omega^2\Big(\frac{p - v}{c_d}\Big)^2 - \omega^2 X_0^2 + c_p < 0.$$
The first-order condition for the optimality of $p$ then entails that $\tilde{p}(X_0, v) > v$, as desired. In particular, we then have $L^*(X_0, v) < 0$. It also follows from the first-order condition for the optimality of $p$ that
$$\tilde{p}(X_0, v) = \min\Big\{1,\ \frac{2v + \sqrt{v^2 + 3c_d^2\,\big(X_0^2 - (\gamma^*)^2\big)}}{3}\Big\}.$$
The Envelope Theorem (Milgrom and Segal, 2002, Theorem 3) guarantees that $L^*(X_0, v)$ is partially differentiable in $v$ and $X_0$, and that
$$\frac{\partial L^*(X_0, v)}{\partial v} = -\frac{2\omega^2\big(p^*(v) - v\big)\,p^*(v)}{c_d^2} < 0, \qquad \frac{\partial L^*(X_0, v)}{\partial X_0} = -2\omega^2 X_0\,p^*(v) < 0 \quad \text{for } X_0 > 0.$$

• Case 2: $|X_0| \le \gamma^*$. In this case, the objective in (12) is increasing in $p$ on the interval $[v, 1]$. The optimum is therefore achieved at $\tilde{p}(X_0, v) = v$ (uniquely for $|X_0| < \gamma^*$), so we have that $L^*(X_0, v) = \big[c_p - \omega^2 X_0^2\big]\,v$. For $|X_0| = \gamma^*$, this function is zero. For $|X_0| < \gamma^*$, this function is positive for $v > 0$, has positive derivative in $v$, and has negative derivative in $X_0$ for $X_0 > 0$.

The cases exhaust all possibilities, which completes the proof.

Let us now consider the constrained problem in which the planner must choose a publication rule that provides type $X_0 = \gamma^*$ an indirect utility of $u^* \in [0, 1 - C_0]$. The linearly smoothed cutoff rule $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$ lies within this class and, under the planner's preferred equilibrium, the expected loss conditional on $X_0$ of this publication rule is
$$\begin{cases} \omega^2(X_0^2 + S_E^2) & \text{if } |X_0| \le \gamma^* - \frac{u^*}{c_d}, \\[4pt] L^*\big(X_0,\ u^* + c_d(X_0 - \gamma^*) + C_0\big) + \omega^2(X_0^2 + S_E^2) & \text{if } \gamma^* - \frac{u^*}{c_d} < |X_0| < \gamma^* + \frac{1 - C_0 - u^*}{c_d}, \\[4pt] L^*(X_0, 1) + \omega^2(X_0^2 + S_E^2) & \text{if } |X_0| \ge \gamma^* + \frac{1 - C_0 - u^*}{c_d}. \end{cases}$$

Lemma 9. Within the class of publication rules that provide type $X_0 = \gamma^*$ an indirect utility of $u^* \in [0, 1 - C_0]$, for all types $X_0 > 0$:
(a) the linearly smoothed cutoff rule $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$ minimizes the expected loss conditional on $X_0$ (under the planner's preferred equilibrium), and
(b) any other publication rule within this class that minimizes the expected loss conditional on $X_0$ must provide the same indirect utility to type $X_0$ as $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$.

Proof. We divide into cases based on the value of $X_0$.

• Case 1: $X_0 > \gamma^*$. If type $X_0$ obtains utility $u$, then type $\gamma^*$ could obtain utility at least $u - c_d(X_0 - \gamma^*)$ by choosing a design with the same mean as the design chosen by $X_0$. Hence, we must have $u - c_d(X_0 - \gamma^*) \le u^*$. Obviously, we must have $u \le 1 - C_0$. By Lemma 8(b) and the definition of $L^*$, the expected loss conditional on $X_0$ must be at least
$$L^*\Big(X_0,\ \min\big\{u^* + c_d(X_0 - \gamma^*) + C_0,\ 1\big\}\Big) + \omega^2(X_0^2 + S_E^2),$$
with equality only if the indirect utility to $X_0$ is $\min\{u^* + c_d(X_0 - \gamma^*) + C_0,\ 1\}$.

• Case 2: $X_0 = \gamma^*$. By Lemma 8(b) and the definition of $L^*$, the expected loss conditional on $X_0$ must be at least $\omega^2(X_0^2 + S_E^2) = L^*(X_0, u^* + C_0) + \omega^2(X_0^2 + S_E^2)$.

• Case 3: $\gamma^* - \frac{u^*}{c_d} \le X_0 < \gamma^*$. Type $X_0$ could obtain utility at least $u^* + c_d(X_0 - \gamma^*) > 0$ by choosing a design with the same mean as the design chosen by $\gamma^*$. By Lemma 8(b) and the definition of $L^*$, the expected loss conditional on $X_0$ must be at least $L^*\big(X_0,\ u^* + c_d(X_0 - \gamma^*) + C_0\big) + \omega^2(X_0^2 + S_E^2)$, with equality only if the indirect utility to type $X_0$ is $u^* + c_d(X_0 - \gamma^*)$.

• Case 4: $X_0 < \gamma^* - \frac{u^*}{c_d}$.
By Lemma 8(b) and the definition of $L^*$, the expected loss conditional on $X_0$ must be at least $\omega^2(X_0^2 + S_E^2)$, with equality only if the indirect utility to type $X_0$ is $0$.

The cases exhaust all possibilities, which completes the proof.

To prove the first part of the theorem, note that Lemma 9(a) implies that there exists a utility level $u^* \in [0, 1 - C_0]$ for type $\gamma^*$ such that $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$ (under the planner's preferred equilibrium) is optimal. Writing $v^* = u^* + C_0$, the expected loss of $p_{\gamma^* + \frac{1 - v^*}{c_d},\, c_d}$ (under the planner's preferred equilibrium) is
$$E(v^*, C_0) = \mathbb{E}_{X_0}\Big[L^*\big(X_0,\ \min\{v^* + c_d(X_0 - \gamma^*),\ 1\}\big)\,1\Big\{|X_0| \ge \gamma^* - \frac{v^* - C_0}{c_d}\Big\}\Big] + \eta_0^2.$$
Differentiating under the integral sign using Lemma 8(b), we see that $\frac{\partial E}{\partial v^*}\big|_{v^* = C_0} < 0$ and that $\frac{\partial E}{\partial v^*}\big|_{v^* = 1} > 0$. So for $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$ to be optimal, we must have $0 < u^* < 1 - C_0$, hence $0 < \frac{1 - C_0 - u^*}{c_d} < \frac{1 - C_0}{c_d}$.

To prove the second part of the theorem, consider any optimal publication rule $p^*$ that delivers utility level $u^*$ to type $X_0 = \gamma^*$. By Lemma 9(a), the publication rule $p^*$ (under the planner's preferred equilibrium) must deliver expected loss conditional on $X_0$ equal to that of $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$ for almost all types $X_0$, and $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$ must be optimal. In particular, we must have that $0 < u^* < 1 - C_0$.
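The closed-form optimizer in Lemma 8(a) can be cross-checked against a brute-force grid search on the relaxed objective (13). The parameter values below ($\omega$, $c_d$, $c_p$, $X_0$, $v$) are arbitrary illustrations chosen for the check, not values from the paper:

```python
import math

# illustrative parameters (assumed, not from the paper)
omega, c_d, c_p = 1.0, 0.5, 0.25
gamma_star = math.sqrt(c_p) / omega       # γ* = √c_p / ω
X0, v = 1.2, 0.3                          # a type with |X0| > γ*

def objective(p):
    # [ω²((p − v)/c_d)² − ω² X0² + c_p] · p, the relaxed problem (13)
    return (omega**2 * ((p - v) / c_d) ** 2 - omega**2 * X0**2 + c_p) * p

# closed form from Lemma 8(a)
p_formula = min(1.0,
                (2 * v + math.sqrt(v**2 + 3 * c_d**2 * (X0**2 - gamma_star**2))) / 3)

# brute-force minimization over a fine grid on [0, 1]
grid = [i / 100000 for i in range(100001)]
p_grid = min(grid, key=objective)

assert abs(p_formula - p_grid) < 1e-3
assert objective(p_formula) < 0   # loss is negative when |X0| > γ*, as Lemma 8(b) claims
print(f"closed form {p_formula:.4f} matches grid minimizer {p_grid:.4f}")
```

The interior first-order condition $3p^2 - 4pv + v^2 + c_d^2\big((\gamma^*)^2 - X_0^2\big) = 0$ has exactly one root above $v$, which is the expression inside the $\min$ in Lemma 8(a).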
Using these consequences of optimality, we prove the two assertions in this part separately.

• Suppose, for the sake of deriving a contradiction, that $p^*(X) \ne p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}(X)$ for a positive measure of $X > 0$ with $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}(X) > C_0$. Then at least one of the following must occur.

– Case 1: $p^*(X) \ne 1$ for a positive measure of $X > \gamma^* + \frac{1 - C_0 - u^*}{c_d}$. Then types $X_0 > \gamma^* + \frac{1 - C_0 - u^*}{c_d}$ with $p^*(X_0) < 1$ must obtain indirect utility less than $1 - C_0$, which, by Lemma 9(b), must lead to expected loss conditional on $X_0$ strictly greater than that of $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$, a contradiction.

– Case 2: $p^*(X) \ne u^* + C_0 + c_d(X - \gamma^*)$ for a positive measure of results $X \in \big(\gamma^* - \frac{u^*}{c_d},\ \gamma^* + \frac{1 - C_0 - u^*}{c_d}\big)$. Letting $\tilde{p}(X_0, v)$ be as defined in Lemma 8(a), by continuity, a positive measure of types $X_0 \in \big(\gamma^* - \frac{u^*}{c_d},\ \gamma^* + \frac{1 - C_0 - u^*}{c_d}\big)$ must satisfy
$$p^*\Big(X_0 + \frac{\tilde{p}\big(X_0,\ u^* + C_0 + c_d(X_0 - \gamma^*)\big) - u^* - C_0}{c_d}\Big) \ne \tilde{p}\big(X_0,\ u^* + C_0 + c_d(X_0 - \gamma^*)\big).$$
Hence, if such types are to obtain indirect utility $u^* + c_d(X_0 - \gamma^*)$, they must have publication probability different from $\tilde{p}\big(X_0,\ u^* + C_0 + c_d(X_0 - \gamma^*)\big)$, which, by Lemma 8(a), would also lead to expected loss conditional on $X_0$ strictly greater than that of $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$. But Lemma 9(b) implies that for $X_0 \in \big(\gamma^* - \frac{u^*}{c_d},\ \gamma^* + \frac{1 - C_0 - u^*}{c_d}\big)$, to obtain expected loss conditional on $X_0$ equal to that of $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$, type $X_0$ must obtain indirect utility $u^* + c_d(X_0 - \gamma^*)$, a contradiction.

• Suppose, for the sake of deriving a contradiction, that $p^*(X) > C_0$ for a positive measure of $X > 0$ with $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}(X) \le C_0$. Then types $X_0 \in \big(0,\ \gamma^* - \frac{1 - u^*}{c_d}\big)$ with $p^*(X_0) > C_0$ must obtain positive indirect utility, which, by Lemma 9(b), must lead to expected loss conditional on $X_0$ strictly greater than that of $p_{\gamma^* + \frac{1 - C_0 - u^*}{c_d},\, c_d}$, a contradiction.

A.2.9 Proof of Proposition 6

In the notation of the proof of Theorem 3, let $X_0 \in (\gamma^*, X^*)$. The expected loss conditional on $X_0$ if type $X_0$ chooses a nontrivial design $\Delta$ is $\big[\omega^2(\beta_\Delta^2 - X_0^2) + c_p\big]\,p$, where $p = c_d\big(|\beta_\Delta| - (X^* - X_0)\big)$. Lemma 8(a) implies that there is a unique minimizer, which satisfies $\beta_\Delta > 0$.
A.2.10 Proof of Proposition 7

In the notation of the proof of Theorem 3, consider the function $E(v^*, C_0)$ on the domain $\{(v^*, C_0) \mid 1 \ge v^* \ge C_0 \ge 1 - c_d\gamma^*\}$. Differentiating $E$ using the fundamental theorem of calculus implies that
$$\frac{\partial E}{\partial C_0} = -\frac{2}{c_d}\,L^*\Big(\gamma^* - \frac{v^* - C_0}{c_d},\ C_0\Big)\,\frac{\exp\Big(-\frac{\big(\gamma^* - \frac{v^* - C_0}{c_d}\big)^2}{2(S_E^2 + \eta_0^2)}\Big)}{\sqrt{2\pi(S_E^2 + \eta_0^2)}}.$$
Lemma 8(b) implies that $L^*\big(\gamma^* - \frac{v^* - C_0}{c_d},\ C_0\big) \ge 0$, and it then follows from Lemma 8(c) that $\frac{\partial^2 E}{\partial v^* \partial C_0} \le 0$. Hence, $E$ is submodular on $\{(v^*, C_0) \mid 1 \ge v^* \ge C_0 \ge 1 - c_d\gamma^*\}$, which is a lattice. Topkis's Theorem implies that $\arg\min_{v^* \in [C_0, 1]} E(v^*, C_0)$ is increasing in the strong set order in $C_0$ over $[1 - c_d\gamma^*, 1]$. Since $E(v^*, C_0)$ is the expected loss of $p_{\gamma^* + \frac{1 - v^*}{c_d},\, c_d}$ (under the planner's preferred equilibrium), the proposition follows.

A.2.11 Proof of Proposition 8

In the notation of the proof of Theorem 3, take $|X_0| \in \big(X^*_0 - \frac{1 - C_0}{c_d},\ \gamma^*_E\big)$. The expected loss conditional on $X_0$ if type $X_0$ chooses a nontrivial design $\Delta$ is $\big[\omega^2(\beta_\Delta^2 - X_0^2) + c_p\big]\,p$, where $p = c_d\big(|\beta_\Delta| - (X^* - X_0)\big)$. Lemma 8(a) implies that there is a unique minimizer, which satisfies $\beta_\Delta = 0$. Since $|X_0| > \frac{1 - C_0}{c_d}$, we have $p > C_0$, so type $X_0$ will choose a nontrivial design and obtain a nonzero publication probability.

A.2.12 Proof of Proposition 9

We prove each claim separately.
Proof of (i). The first claim follows directly from the fact that if $\Delta = E$ is a cheap design, $p^\star_E$ is the minimizer of the loss function in the absence of incentive compatibility constraints, and the individual rationality constraint is not binding at the optimum by Definition 2. On the other hand, the minimizer of $\mathcal{L}_M$ further imposes additional incentive compatibility constraints. Therefore, $\mathcal{L}^*_E \le \mathcal{L}^*_M$, since $\mathcal{L}^*_E$ is the minimum achieved under no constraints on the class of publication rules.

Proof of (ii). For the second claim, from Theorem 3 we can write
$$\mathcal{L}^*_M \le \mathbb{E}\Big[L_{p'}\big(X(\Delta^\star_{p'}), \Delta^\star_{p'}, \theta_0\big)\Big], \qquad p'(X) = 1\Big\{|X| \ge \gamma^*_E + \frac{1}{c_d}\Big\},$$
where $p'$ is a sub-optimal publication rule. We can write
$$\mathcal{L}^*_M \le \mathbb{E}\Big[\big(\varepsilon + \beta_{\Delta^*_{p'}}\big)^2\,p'(X) + \eta_0^2\big(1 - p'(X)\big) + c_p\,p'(X)\Big]$$
$$= \underbrace{\mathbb{E}\Big[\varepsilon^2\,p'(X) + \eta_0^2\big(1 - p'(X)\big) + c_p\,p'(X)\Big]}_{(A)} + \underbrace{\mathbb{E}\big[\beta_{\Delta^\star_{p'}}^2\,p'(X)\big] + 2\,\mathbb{E}\big[\beta_{\Delta^\star_{p'}}\,\varepsilon\,p'(X)\big]}_{(B)}.$$
It follows that under $p'$, for all results for which $|\theta_0 + \varepsilon| \in \big[\gamma^*_E,\ \gamma^*_E + \frac{1}{c_d}\big]$, the researcher chooses $|\beta_{\Delta^\star_{p'}}| = \gamma^*_E + \frac{1}{c_d} - |\theta_0 + \varepsilon|$. For all $|\theta_0 + \varepsilon| < \gamma^*_E$, the researcher chooses $|\beta_{\Delta^\star_{p'}}| = 0$. Therefore, it follows that $p'(X) = 1\{|\theta_0 + \varepsilon| \ge \gamma^*_E\}$. As a result, defining $p''(x) = 1\{|x| \ge \gamma^\star_E\}$, we can write
$$(A) = \mathbb{E}\Big[\varepsilon^2\,p'(X) + \eta_0^2\big(1 - p'(X)\big) + c_p\,p'(X)\Big] = \mathbb{E}\Big[L_{p''}(\theta_0 + \varepsilon, 0, \theta_0)\Big],$$
$$(B) = \mathbb{E}\big[\beta_{\Delta^\star_{p'}}^2\,p''(\theta_0 + \varepsilon)\big] + 2\,\mathbb{E}\big[\beta_{\Delta^\star_{p'}}\,\varepsilon\,p''(\theta_0 + \varepsilon)\big].$$
That is, $(A)$ equals the loss function as if there were no manipulation, under a publication rule $p''$, while $(B)$ captures the manipulation component. We next study $(B)$, considering each term in turn. For the first term, we can write
$$\beta_{\Delta^*_{p'}}^2 \le \frac{1}{c_d^2} \quad\Rightarrow\quad \mathbb{E}\big[\beta_{\Delta^\star_{p'}}^2\,p''(\theta_0 + \varepsilon)\big] \le \frac{1}{c_d^2},$$
since the researcher will never choose a bias larger than $1/c_d$ by the argument above. For the second term, recall that $\beta_{\Delta^\star_{p'}} = \mathrm{sign}(\theta_0 + \varepsilon)\big[\gamma^*_E + \frac{1}{c_d} - |\theta_0 + \varepsilon|\big]$, so that
$$\mathbb{E}\Big[\beta_{\Delta^\star_{p'}}\,\varepsilon \,\Big|\, |\theta_0 + \varepsilon|\Big]\,p''\big(|\theta_0 + \varepsilon|\big) = c_0\,\mathbb{E}\Big[\mathrm{sign}(\theta_0 + \varepsilon)\,\varepsilon \,\Big|\, |\theta_0 + \varepsilon|\Big]\,p''\big(|\theta_0 + \varepsilon|\big)$$
for a finite constant $c_0$ (which depends on $|\theta_0 + \varepsilon|$ and other parameters in the model). Because $\theta_0$ is centered around zero and independent of $\varepsilon$, $\mathbb{E}\big[\mathrm{sign}(\theta_0 + \varepsilon)\,\varepsilon \,\big|\, |\theta_0 + \varepsilon|\big] = 0$. Collecting the terms, we can write
$$\mathcal{L}^*_E - \mathcal{L}^*_M \ge \mathcal{L}^*_E - \mathbb{E}\Big[L_{p''}(\theta_0 + \varepsilon, 0, \theta_0)\Big] - \frac{1}{c_d^2}.$$
Here, $\mathcal{L}^*_E$ is the loss for an expensive design with no manipulation and cost $C_E$. On the other hand, $\mathbb{E}\big[L_{p''}(\theta_0 + \varepsilon, 0, \theta_0)\big] = \mathcal{L}^*_O$ for a design $O$ with cost $C_O = 0$ and variance $S_O^2 = S_E^2$. It follows that $\mathcal{L}^*_E - \mathbb{E}\big[L_{p''}(\theta_0 + \varepsilon, 0, \theta_0)\big] = \mathrm{GAP}(E)$, completing the proof.
arXiv:2504.21288v1 [math.ST] 30 Apr 2025

Algebraic Approach for Orthomax Rotations

Ryoya Fukasaku (1), Michio Yamamoto (2,5,6), Yutaro Kabata (3), Yasuhiko Ikematsu (4), Kei Hirose (4)

(1) Faculty of Mathematics, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
(2) Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
(3) School of Information and Data Sciences, Nagasaki University, Bunkyocho 1-14, Nagasaki 852-8131, Japan
(4) Institute of Mathematics for Industry, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
(5) RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
(6) Data Science and AI Innovation Research Promotion Center, Shiga University, 1-1-1 Banba, Hikone, Shiga 522-8522, Japan

E-mail: fukasaku@math.kyushu-u.ac.jp, yamamoto.michio.hus@osaka-u.ac.jp, kabata@nagasaki-u.ac.jp, ikematsu@imi.kyushu-u.ac.jp, hirose@imi.kyushu-u.ac.jp

Abstract

In exploratory factor analysis, rotation techniques are employed to derive interpretable factor loading matrices. Factor rotations deal with equality-constrained optimization problems aimed at determining a loading matrix based on measures of simplicity, such as "perfect simple structure" and "Thurstone simple structure." Numerous criteria have been proposed, since the concept of simple structure is fundamentally ambiguous and involves multiple distinct aspects. However, most rotation criteria may fail to consistently yield a simple structure that is optimal for analytical purposes, primarily due to two challenges. First, existing optimization techniques, including the gradient projection descent method, exhibit strong dependence on initial values and frequently become trapped in suboptimal local optima. Second, the multifaceted nature of simple structure complicates the ability of any single criterion to ensure interpretability across all aspects.
In certain cases, even when a global optimum is achieved, other rotations may exhibit simpler structures in specific aspects. To address these issues, obtaining all equality-constrained stationary points, including both global and local optima, is advantageous. Fortunately, many rotation criteria are expressed as algebraic functions, and the constraints in the optimization problems in factor rotations are formulated as algebraic equations. Therefore, we can employ computational algebra techniques that utilize operations within polynomial rings to derive all equality-constrained stationary points exactly. Unlike existing optimization methods, the computational algebraic approach can determine global optima and all stationary points, independent of initial values. We conduct Monte Carlo simulations to examine the properties of the orthomax rotation criteria, which generalize various orthogonal rotation methods.

Key Words: Factor Analysis; Factor Rotation; Simple Structure; Orthomax Rotation; Computational Algebra

1 Introduction

The factor analysis model is a latent variable model initially introduced by (Spearman, 1904). Recently, its applications have expanded from the social and behavioral sciences to encompass diverse fields, including marketing, life sciences, materials sciences, and energy sciences (Lin et al., 2019; Shkeer and Awang, 2019; Shurrab et al., 2019; Kartechina et al., 2020; Vilkaite-Vaitone et al., 2022). A key objective in factor analysis is the estimation of the loading matrix, which represents the influence of latent variables, termed common factors, on observed variables. This matrix is typically estimated via the maximum likelihood or least squares method. After that, in order to enhance interpretability, factor rotations aim to achieve a "simple structure," quantified through various measures. Notably, "perfect simple structure" and "Thurstone simple structure" are widely known examples of such simple structures.
https://arxiv.org/abs/2504.21288v1

Factor rotations deal with equality-constrained optimization problems, aiming to maximize or minimize a specific rotation criterion. The concept of simple structure is inherently ambiguous, encompassing multiple distinct aspects. Thus, numerous rotation criteria have been proposed.

However, most existing rotation criteria may fail to consistently yield a simple structure conducive to interpretability, due to two primary challenges. First, optimization methods such as the gradient projection descent method exhibit strong dependence on initial values owing to the nonlinearity of rotation criteria. Therefore, such methods may return suboptimal local optima. Many software packages employ a single initial value, which may obscure suboptimal solutions from analysts unfamiliar with this dependence.

Second, given the multifaceted nature of simple structure, no single criterion can ensure that a loading matrix is interpretable across all aspects. Analysts select criteria based on specific simplicity objectives or empirical considerations. However, in certain cases, other rotations, e.g. stationary points, may yield a simpler structure compared to the global optimum in specific aspects. As conventional methods typically yield only a single solution, analysts are constrained from exploring potentially more informative alternatives. This limitation is intrinsic because conventional approaches predominantly yield a single solution.

To address these constraints, obtaining all equality-constrained stationary points, including both global and local optima, would enable a more informed selection of the most interpretable factor loadings. Fortunately, most criteria can be formulated as algebraic functions, and the constraints in the optimization problems in factor rotations are formulated as algebraic equations. Since algebraic equations can be derived by applying the method of Lagrange multipliers to an equality-constrained optimization problem, the problem can be solved by solving these algebraic equations.
Even more fortunately, algebraic operations can yield all solutions to such algebraic equations.

For instance, (Jennrich, 2004) conducted simulations on orthogonal rotation criteria using the gradient projection algorithm for orthomax rotations introduced by (Jennrich, 2001). However, this method is highly sensitive to initial values and is incapable of computing all possible solutions. To address this issue, the present study eliminates such dependency by employing simulations based on algebraic operations. Since existing research has not utilized this approach, as will be demonstrated later, it yields novel findings in this study.

This study examines the "orthomax criteria" as introduced in (Harman, 1976, section 4.6), which generalize various orthogonal rotation criteria, including the "quartimax criterion," "varimax criterion," "equamax criterion," "parsimax criterion," etc. Specifically, orthomax rotations deal with the following equality-constrained optimization problem:
$$\mathop{\mathrm{argmax}}_{T \in \mathbb{R}^{k \times k}} Q_\omega(AT) \quad \text{subject to} \quad T^\top T = I_k,$$
where $I_k$ is the $k$-th identity matrix, and $A \in \mathbb{R}^{p \times k}$ is an initial solution estimated via methods such as maximum likelihood or least squares. The matrix $\Lambda$ is defined as $\Lambda = (\lambda_{ij})_{1 \le i \le p,\, 1 \le j \le k} = AT$, and the function $Q_\omega(AT) = Q_\omega(\Lambda)$ is defined as
$$Q_\omega(\Lambda) = \sum_{i=1}^{p}\sum_{j=1}^{k} \lambda_{ij}^4 - \frac{\omega}{p}\sum_{j=1}^{k}\Big(\sum_{i=1}^{p} \lambda_{ij}^2\Big)^2,$$
for a given hyperparameter $0 \le \omega \le p$. Here, $p$ denotes the number of observed variables, and $k$ is the number of common factors. For example, $Q_0$ is the quartimax criterion, $Q_1$ the varimax criterion, $Q_{k/2}$ the equamax criterion, and $Q_{p(k-1)/(p+k-2)}$ the parsimax criterion, as summarized in (Browne, 2001, table 1).

The main contributions of this study are threefold: one theoretical result and two practical advancements. First, we address theoretical results concerning the existence of orthogonal rotations capable of reconstructing simple structures. Specifically, we establish an equivalent condition for the existence of orthogonal rotations that yield perfect simple structures. Furthermore, an equivalent condition for Thurstone simple structures is analyzed using the theory of polynomial ideals (refer to Appendix B for a detailed exposition). Also, utilizing the equivalent condition for the existence of orthogonal rotations that yield perfect simple structures, we perform Monte Carlo simulations to characterize the orthomax criteria systematically. This characterization involves identifying all stationary points of the orthomax criteria, calculating criterion values at these points, and determining the positive or negative definiteness of bordered Hessians. These analyses allow us to elucidate the functional forms of orthomax criteria, including the quartimax, varimax, equamax, and parsimax criteria.

Second, we report an unexpected yet practical finding obtained from our simulations. In orthogonal models, varimax rotation has widespread use due to its accessibility in many statistical software packages, often as the default or sole option for orthogonal rotation. A factor loading matrix achieves a perfect simple structure when each row contains at most one nonzero element (Bernaards and Jennrich, 2003). This property is referred to as "unifactoriality" by (Kaiser, 1974) and as a "perfect cluster solution" by (Browne, 2001).
Many researchers consider varimax to approximate this property effectively; for instance, (Browne, 2001, page 113) states that "Perfect [simple structure] solutions were handled effectively by varimax." However, our Monte Carlo simulations reveal an unanticipated finding, corroborated by algebraic analysis: factor rotations derived using the quartimax criterion exhibit superior interpretability compared to those obtained via the varimax criterion.

Third, our algebraic approach offers several practical advantages. Unlike numerical approaches, such as the gradient projection method, our algebraic approach enables the computation of all stationary points. This allows not only for the exact solution of the optimization problem for a given criterion but also for the selection of solutions that facilitates interpretability for the analyst. These aspects are elaborated upon in Section 6.

The remainder of this study is structured as follows: Section 2 provides a concise review of orthomax factor rotations. Section 3 establishes an equivalent condition for the existence of orthogonal rotations that yield perfect simple structures. Section 4 extends the discussion to Thurstone simple structures, analyzing the corresponding condition. Section 5 introduces a novel computational algebra algorithm for determining all optimal candidates. Section 6 presents numerical results derived from artificial datasets.

2 Orthomax rotations

2.1 Factor rotations

Let $X = (X_1, \dots, X_p)^\top$ be a $p$-dimensional random vector. A factor analysis model is expressed as
$$X = \mu + \Lambda F + \varepsilon,$$
where $\mu$ is a mean vector, $\Lambda = (\lambda_{ij})$ is the factor loading matrix, and $F = (F_1, \dots, F_k)^\top$ and $\varepsilon = (\varepsilon_1, \dots, \varepsilon_p)^\top$ are unobservable random vectors. Here, $A^\top$ denotes the transpose of a matrix $A$. The components of $F$ and $\varepsilon$ are referred to as the common factors and unique factors, respectively.
https://arxiv.org/abs/2504.21288v1
It is assumed that the unique factors are mutually uncorrelated and independent of the common factors. Additionally, under the orthogonal model assumption, the common factors are uncorrelated. Let A ∈ R^{p×k} denote the estimated factor loading matrix under the orthogonal model, referred to as the initial solution. A factor loading matrix is known to exhibit rotational indeterminacy, as both A and AT yield the same covariance matrix Σ = Var(X) for any regular matrix T. Hence, factor analysis includes factor rotations, where the aim is to find a regular matrix T that transforms A into a simplified factor loading matrix Λ = AT. Notable simplicity criteria include the perfect simple structure and the Thurstone simple structure.

A factor-loading matrix Λ has a perfect simple structure if each row of Λ contains at most one nonzero element, as defined by Bernaards and Jennrich (2003). This structure is alternatively referred to as unifactoriality (Kaiser, 1974) or a perfect cluster solution (Browne, 2001). In addition, Thurstone's rules (Thurstone, 1947) for the simple structure of a factor matrix Λ with k columns, as described in (Browne, 2001, section "Simplicity of a Factor Pattern"), are as follows:

1. each row of Λ contains at least one zero element,
2. each column of Λ contains at least k zero elements,
3. each pair of columns of Λ has several rows with a zero element in one column and a nonzero element in the other,
4. for k ≥ 4, every pair of columns of Λ has several rows with zero elements in both columns, and
5. each pair of columns of Λ has few rows with nonzero elements in both columns.

A factor matrix Λ is said to have a Thurstone simple structure if these conditions are satisfied. To derive Λ = AT with the specified simple structures for a given initial solution A, numerous rotation methods have been proposed. If T is orthogonal, the process is referred to as an orthogonal rotation. This study emphasizes orthogonal rotations and their corresponding criteria.
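The rotational indeterminacy above can be checked numerically: under the orthogonal model the implied covariance is Σ = AA^⊤ + Ψ, and it is unchanged when A is replaced by AT for any orthogonal T. A minimal numpy sketch (the matrices here are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 6, 2
A = rng.normal(size=(p, k))                # hypothetical initial loading matrix
Psi = np.diag(rng.uniform(0.1, 0.5, p))    # unique-factor variances

# a random orthogonal T via QR decomposition
T, _ = np.linalg.qr(rng.normal(size=(k, k)))

Sigma_A = A @ A.T + Psi                    # implied covariance from A
Sigma_AT = (A @ T) @ (A @ T).T + Psi       # implied covariance from the rotated loadings AT

assert np.allclose(Sigma_A, Sigma_AT)      # both loadings fit the same Σ
```

Since the data cannot distinguish A from AT, rotation methods are free to pick the T that makes AT simplest.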
2.2 Orthogonal rotations

Let Q represent an orthogonal rotation criterion. To derive Λ = AT with simple structures for a given initial solution A, the following equality-constrained optimization problem is solved:

argmax_{T ∈ R^{k×k}} Q(AT) subject to T^⊤T = I_k, (1)

where Q(AT) denotes the orthogonal rotation criterion evaluated at AT. We note that T is orthogonal if and only if T^⊤T = I_k. Some standard rotations are formulated as minimization problems; when necessary we will reformulate them as equivalent maximization problems. To maximize Q over orthogonal matrices T ∈ R^{k×k}, the existing literature widely adopts the gradient projection algorithm described in (Jennrich, 2001). The stopping condition for this algorithm derives directly from (Jennrich, 2001, equation (7)) and will be utilized in Section 5. We conclude this subsection with the stopping condition.

Proposition 1. The orthogonal rotation criterion Q has a stationary point at T ∈ R^{k×k} restricted to {T ∈ R^{k×k} : T^⊤T = I_k} if and only if T^⊤ ∂Q(AT)/∂T is a symmetric matrix, where

∂Q(AT)/∂T = (∂Q(AT)/∂t_{jl})_{1≤j,l≤k} (2)

denotes the gradient of Q at T = (t_{jl})_{1≤j,l≤k}.

2.3 Orthomax criteria

Various orthogonal rotation criteria can be expressed by the orthomax criterion Q_ω(Λ), which is defined as

Q_ω(Λ) = Σ_{i=1}^p Σ_{j=1}^k λ_{ij}^4 − (ω/p) Σ_{j=1}^k ( Σ_{i=1}^p λ_{ij}^2 )^2, (3)

where Λ = (λ_{ij})_{1≤i≤p, 1≤j≤k} = AT. For instance, Q_0 is the quartimax criterion, Q_1 is the varimax criterion, Q_{k/2} is the equamax criterion, and Q_{p(k−1)/(p+k−2)} is the parsimax criterion, as shown in (Browne, 2001, table 1).
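As a quick sanity check on (3), the orthomax family is a one-parameter family of polynomials in the loadings; a small numpy implementation with the four named special cases (the loading matrix below is hypothetical):

```python
import numpy as np

def orthomax(Lam, omega):
    """Q_omega(Lam) = sum_ij lam_ij^4 - (omega/p) * sum_j (sum_i lam_ij^2)^2, as in Eq. (3)."""
    p = Lam.shape[0]
    return (Lam**4).sum() - (omega / p) * ((Lam**2).sum(axis=0)**2).sum()

Lam = np.array([[0.8, 0.1], [0.7, 0.2], [0.1, 0.9], [0.2, 0.8]])  # hypothetical loadings
p, k = Lam.shape

q_quartimax = orthomax(Lam, 0.0)                        # omega = 0
q_varimax = orthomax(Lam, 1.0)                          # omega = 1
q_equamax = orthomax(Lam, k / 2)                        # omega = k/2
q_parsimax = orthomax(Lam, p * (k - 1) / (p + k - 2))   # omega = p(k-1)/(p+k-2)
```

Since the subtracted term in (3) is nonnegative and grows with ω, the four criterion values decrease as ω increases for a fixed Λ.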
Note that κ in (Browne, 2001, table 1) corresponds to ω/p in (3).

Given that A is a matrix with real coefficients, the orthomax criterion Q_ω(Λ) = Q_ω(AT) can be expressed as a polynomial with real coefficients in the indeterminates t_{jl}, 1 ≤ j, l ≤ k, where T = (t_{jl})_{1≤j,l≤k}. Hence, Q_ω(Λ) ∈ R[t_{jl} : 1 ≤ j, l ≤ k]. Here, R[t_{jl} : 1 ≤ j, l ≤ k] denotes the polynomial ring in the indeterminates t_{jl} with real coefficients (see Appendix A for details). Consequently, Proposition 1 provides a system of algebraic equations applicable to any orthomax criterion Q_ω(Λ), which is an equivalent condition for stationary points. In other words, the proposition provides a system that admits algebraic operations for any orthomax criterion to find all stationary points. Thus, this proposition facilitates computation of all solutions to the system, that is, all stationary points, for any orthomax criterion Q_ω(Λ) using algebraic operations within R[t_{jl} : 1 ≤ j, l ≤ k]. Hence, all candidates for the optimization problem (1) can be derived. In particular, since the space {T ∈ R^{k×k} : T^⊤T = I_k} is compact, we can seek a candidate maximizing Q_ω(Λ).

We conclude this section by presenting a property concerning perfect simple structures and orthomax criteria, as established in (Bernaards and Jennrich, 2003, theorem 1).

Proposition 2. Consider any orthomax criterion Q_ω with 0 ≤ ω ≤ p. If T is an orthogonal matrix and Λ = AT has a perfect simple structure, then Λ maximizes the criterion Q_ω over all orthogonal matrices. Furthermore, if A has full column rank, any rotation of A that maximizes the criterion differs from Λ by at most a column permutation and column sign changes.

3 Perfect simple structure

Proposition 2 describes the special properties of perfect simple structures. Therefore, in this section, we discuss the properties of perfect simple structures. Monte Carlo simulations are conducted in Section 6 based on the properties presented in this section. We present the following theorem.
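Proposition 2 can be illustrated numerically: build a Λ with perfect simple structure, unrotate it by a random orthogonal T_0 to obtain an initial solution A = ΛT_0^⊤, and confirm that no sampled orthogonal rotation beats Λ = AT_0 on Q_ω. A sketch under these assumptions (all matrices hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def orthomax(Lam, omega):
    p = Lam.shape[0]
    return (Lam**4).sum() - (omega / p) * ((Lam**2).sum(axis=0)**2).sum()

# a loading matrix with perfect simple structure: one nonzero entry per row
Lam = np.zeros((9, 3))
Lam[0:3, 0] = [0.9, 0.8, 0.7]
Lam[3:6, 1] = [0.8, 0.6, 0.9]
Lam[6:9, 2] = [0.7, 0.9, 0.6]

T0, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
A = Lam @ T0.T                                   # initial solution with A @ T0 = Lam

for omega in (0.0, 1.0):                         # quartimax and varimax, both with 0 <= omega <= p
    q_star = orthomax(A @ T0, omega)
    for _ in range(200):
        T, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        assert orthomax(A @ T, omega) <= q_star + 1e-9   # no sampled rotation does better
```

This only samples the orthogonal group, of course; the point of the algebraic approach developed later is to certify such maximality exactly.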
Proposition 2 assumes that there exists an orthogonal matrix T such that Λ = AT has a perfect simple structure. The following theorem provides equivalent conditions.

Theorem 1. The following conditions are equivalent:

1. there exists an orthogonal matrix T such that Λ = AT has a perfect simple structure,
2. there exist k or fewer clusters consisting of rows of A such that (a) rows in the same cluster are parallel to each other, and (b) rows in different clusters are orthogonal to each other.

Here, an element of a partition of rows is referred to as a cluster.

Proof. First, suppose that there exists an orthogonal matrix T such that Λ = AT has a perfect simple structure. Then, each row of Λ has at most one nonzero element. Thus, there exist k or fewer clusters consisting of rows of Λ that satisfy conditions 2(a) and 2(b). Because T is orthogonal, T^{−1} = T^⊤ is also orthogonal. Orthogonal matrices do not affect inner products. Therefore, there exist k or fewer clusters consisting of rows of A = ΛT^⊤ that satisfy conditions 2(a) and 2(b).

Second, suppose that there exist k or fewer clusters c_1, ..., c_m of A that satisfy conditions 2(a) and 2(b), where m ≤ k is the number of clusters (we identify each cluster c_j with a representative row vector). Moreover, there exist row vectors c_{m+1}, ..., c_k ∈ R^k such that c_j and c_l are orthogonal to each other for distinct 1 ≤ j, l ≤ k, according to the basis extension theorem
and the Gram-Schmidt process. Let s_j = c_j/|c_j|. We construct an orthogonal matrix by stacking these unit rows:

S = (s_1^⊤ ··· s_k^⊤)^⊤.

Let a_i be the i-th row of A for each 1 ≤ i ≤ p. Suppose that a_i is parallel to the j-th cluster c_j. The angle θ between a_i and c_j is either 0 or π. As a_i and s_l are orthogonal for each l ≠ j, the inner products a_i s_l^⊤ = 0. Therefore,

a_i S^⊤ = a_i (s_1^⊤ ··· s_k^⊤) = (a_i s_1^⊤ ··· a_i s_k^⊤) = (0 ··· 0 a_i s_j^⊤ 0 ··· 0), (4)

with the only possibly nonzero entry in the j-th position, where a_i s_j^⊤ = |a_i| if θ = 0 and a_i s_j^⊤ = −|a_i| if θ = π. Thus, we obtain an orthogonal matrix T = S^⊤ such that Λ = AT has a perfect simple structure.

We conclude this section with the following remark.

Remark 1. As the second claim of Proposition 2 indicates, if k = m holds in Theorem 1, the orthogonal matrices T such that Λ = AT has a perfect simple structure differ by at most column permutations and column sign changes. In fact, the orthogonal matrix S constructed in the proof of Theorem 1 is uniquely determined, except for column permutations and column sign changes, when k = m.

This section reveals the form of the initial solutions that make perfect simple structures the global optima. Since we can determine the optima except for column permutations and column sign changes in the case k = m, we provide an initial solution that satisfies conditions 2(a), 2(b), and k = m for our Monte Carlo simulations in Section 6.

4 Thurstone simple structure

In this section, we characterize Thurstone simple structures to assess their optimality concerning the orthomax criteria. Prior to this characterization, we clarify ambiguities in Thurstone's rules 3 and 4 by introducing lower limits, denoted γ and δ:

3.
For every pair of columns of Λ, at least γ rows must contain a zero element in one column but not in the other,

4. for k ≥ 4, every pair of columns of Λ must have at least δ rows with zero elements in both columns.

Since the satisfaction of rules 3 and 4 implies compliance with rule 5, we do not address ambiguities in rule 5. For this study, we say that Λ has a Thurstone simple structure of class (γ, δ) if Λ satisfies rules 1, 2, 3, and 4. Note that Λ has a perfect simple structure in the case that Λ has a Thurstone simple structure of class (γ, δ) with γ + δ = p.

The following theorem, which is proved in Appendix B, establishes that Thurstone simple structures lack the special properties inherent to perfect simple structures.

Theorem 2. Let Q_ω be any orthomax criterion with 0 ≤ ω ≤ p, and let 1 ≤ γ ≤ p and 1 ≤ δ ≤ p. If T is an orthogonal matrix, and even if Λ = AT has a Thurstone simple structure of class (γ, δ), then Λ does not maximize the criterion Q_ω over all orthogonal matrices unless additional conditions are imposed when γ + δ ≠ p.

If γ + δ = p holds, as outlined in the subsequent remark, the Thurstone simple structure of class (γ, δ) is attained as the global optimum.

Remark 2. If γ + δ = p holds in Theorem 2, then Λ maximizes the criterion Q_ω among all orthogonal matrices. Notably, Λ has a perfect simple structure when it satisfies the Thurstone simple structure of class (γ, δ) with γ + δ = p, as established in Proposition 2.

Section 6 presents Monte Carlo simulations to
analyze the behavior of global optima and stationary points under incremental deviations from a perfect simple structure. Therefore, in the next section, we design an algorithm that is capable of computing not only global optima but all stationary points for the orthomax criterion.

5 Algebraic approach for Orthomax rotations

To characterize the orthomax criteria Q_ω(Λ) based on global optima and all stationary points for the optimization problem (1), which maximizes Q_ω(Λ), we develop two algorithms in this section. Numerical approaches may yield local optima that are not global optima. Therefore, an algebraic approach is adopted. Prior to presenting the algorithm, we establish the following corollary, derived directly from Proposition 1.

Corollary 1. If an orthogonal matrix T is an optimal solution to (1) for an orthogonal rotation criterion Q, then T^⊤ ∂Q(AT)/∂T is symmetric. In other words, if T is an optimum of (1), T satisfies

(T^⊤ ∂Q(AT)/∂T)_{jl} = (T^⊤ ∂Q(AT)/∂T)_{lj} (1 ≤ j < l ≤ k),
T^⊤T = I_k. (5)

Since {T ∈ R^{k×k} : T^⊤T = I_k} is a compact space, (1) admits an optimal solution. In general, local optima may not coincide with global optima. By computing all algebraic solutions to Eq. (5) for Q = Q_ω, the global optimum of (1) for Q = Q_ω can be identified among them. Now, we propose Algorithm 1 to consistently locate global optima.
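Before stating the algorithm, the system (5) can be made concrete with a small sympy sketch: for the toy case k = 2, the quartimax criterion Q_0, and the assumed initial solution A = I_2, we assemble the symmetry equation and the orthogonality constraints and verify that both the identity and the 45° rotation satisfy them (for A = I_2 the identity is a quartimax maximizer and the 45° rotation a minimizer):

```python
import sympy as sp

t11, t12, t21, t22 = sp.symbols('t11 t12 t21 t22', real=True)
T = sp.Matrix([[t11, t12], [t21, t22]])
A = sp.eye(2)              # assumed toy initial solution
Lam = A * T

Q = sum(Lam[i, j]**4 for i in range(2) for j in range(2))   # quartimax Q_0

# entrywise gradient dQ/dT as in Eq. (2)
G = sp.Matrix(2, 2, lambda i, j: sp.diff(Q, T[i, j]))
M = T.T * G

# system (5): symmetry of T^T dQ/dT plus orthogonality T^T T = I_2
C = T.T * T - sp.eye(2)
system = [sp.expand(M[0, 1] - M[1, 0]), C[0, 0], C[1, 1], C[0, 1]]

def residuals(vals):
    subs = dict(zip([t11, t12, t21, t22], vals))
    return [sp.simplify(e.subs(subs)) for e in system]

c = sp.sqrt(2) / 2
assert all(r == 0 for r in residuals([1, 0, 0, 1]))    # identity is stationary
assert all(r == 0 for r in residuals([c, -c, c, c]))   # 45-degree rotation is stationary
```

The four polynomials in `system` are exactly the kind of input a computer algebra system can solve exhaustively, which is what Step 1 of Algorithm 1 requires.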
Algorithm 1 An algorithm to find a global optimum
Require: an initial solution A, a hyperparameter ω
Ensure: a global optimum of (1) for Q = Q_ω and the stationary points of Q_ω restricted to {T ∈ R^{k×k} : T^⊤T = I_k}
1: T_1, ..., T_L ← connected components of the set of all solutions to (5)
2: for a = 1, ..., L do
3:   if T_a ⊂ R^{k×k} is a component consisting of only one point then
4:     t_a ← the element of the connected component T_a
5:     q_a ← the value Q_ω(A t_a)
6:   else
7:     t_a ← a sample point of the connected component T_a
8:     q_a ← the value Q_ω(A t_a)
9:   end if
10: end for
11: ℓ ← an index attaining the maximum value among q_1, ..., q_L
12: return (t_ℓ, {t_1, ..., t_L})

Step 1 involves computing the connected components of all algebraic solutions to Eq. (5) using an algebraic approach. If T_a ⊂ R^{k×k} is a singleton component, the orthomax criterion value q_a is evaluated at the element of T_a, as described in Steps 4 and 5. Otherwise, a sample point t_a is selected from the connected component T_a, and q_a is evaluated at t_a, as in Steps 7 and 8. Importantly, Q_ω(At) remains invariant for any t ∈ T_a, since every t ∈ T_a is a stationary point of Q = Q_ω restricted to {T ∈ R^{k×k} : T^⊤T = I_k}. At Step 11, the index ℓ corresponding to the maximum of q_1, ..., q_L is selected. Given that {T ∈ R^{k×k} : T^⊤T = I_k} is a compact space, the global optimum t_ℓ is obtained, as detailed in Step 12.

Notably, global optima and stationary points of orthomax criteria may not exist as discrete points; they might manifest as curves, surfaces, etc. Algorithm 1 accommodates this by considering connected components. In the next section, we apply Algorithm 1 to our Monte Carlo simulations, which confirm that all global optima and stationary points are indeed discrete points.

It is important to note that Eq. (5) establishes an equivalence not for local maxima but for stationary points. Accordingly, we present an algorithm to classify the stationary points t_1, ..., t_L using the second-order sufficient optimality conditions derived from the bordered Hessian.
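For intuition about Step 1, consider k = 2: every rotation is R(θ), reflections merely flip a column sign (which leaves Q_ω unchanged), and θ ↦ Q_ω(AR(θ)) has period π/2, so all stationary points can be located by a one-dimensional scan for sign changes of dQ/dθ. The sketch below does this numerically with a hypothetical A; the actual Step 1 instead solves the polynomial system (5) by computer algebra:

```python
import numpy as np

def orthomax(Lam, omega):
    p = Lam.shape[0]
    return (Lam**4).sum() - (omega / p) * ((Lam**2).sum(axis=0)**2).sum()

def rot(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

def stationary_angles(A, omega, n_grid=4000):
    """All stationary angles of q(theta) = Q_omega(A R(theta)) on [0, pi/2)."""
    q = lambda th: orthomax(A @ rot(th), omega)
    h = 1e-6
    dq = lambda th: (q(th + h) - q(th - h)) / (2 * h)   # numeric dQ/dtheta
    grid = np.linspace(0.0, np.pi / 2, n_grid, endpoint=False)
    ends = np.append(grid[1:], np.pi / 2)
    roots = []
    for a, b in zip(grid, ends):
        if dq(a) * dq(b) < 0:                # sign change brackets a stationary angle
            for _ in range(60):              # bisection on dQ/dtheta
                m = 0.5 * (a + b)
                if dq(a) * dq(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots, q

A = np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9], [0.2, 0.8]])  # hypothetical loadings
roots, q = stationary_angles(A, omega=1.0)     # varimax
best = max(roots, key=q)                       # analogue of Steps 2-11 of Algorithm 1
```

For k ≥ 3 the orthogonal group is no longer one-dimensional, which is why the paper resorts to solving (5) algebraically rather than scanning.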
Specifically, a point satisfying the bordered-Hessian criterion for a local minimum is termed a second-order sufficient local minimizer, whereas one fulfilling the
criterion for a local maximum is designated a second-order sufficient local maximizer. Points where the bordered Hessian fails to provide conclusive evidence, that is, those not meeting the second-order sufficient conditions, are classified as second-order indeterminate points (which may still correspond to local extrema under higher-order analysis). This classification, based solely on second-order information, facilitates a detailed examination of the shape of the criterion. In constructing this algorithm, we employ the properties of bordered Hessians as outlined in (Magnus and Neudecker, 2019, Sections 3.11 and 7.13) and (Debreu, 1952, Theorems 4 and 5).

To define the bordered Hessians, consider the Lagrange function for the optimization problem (1):

Φ(t, µ) = Q_ω(AT) + Σ_{j=1}^{k(k+1)/2} µ_j g_j.

Here,

G = {g_1, ..., g_{k(k+1)/2}} = {g : g = (T^⊤T − I_k)_{jl}, 1 ≤ j ≤ l ≤ k} ⊂ R[t_{jl} : 1 ≤ j, l ≤ k].

Additionally, the µ_j are the Lagrange multipliers, t = (t_{jl} : 1 ≤ j, l ≤ k), and µ = (µ_i : 1 ≤ i ≤ k(k+1)/2). Let µ_a = (µ_{a,i} : 1 ≤ i ≤ k(k+1)/2) be the Lagrange multipliers corresponding to the stationary point t_a for 1 ≤ a ≤ L, satisfying

∂Φ/∂t_{jl}(t_a, µ_a) = ∂Φ/∂µ_i(t_a, µ_a) = 0, (1 ≤ a ≤ L, 1 ≤ j, l ≤ k, 1 ≤ i ≤ k(k+1)/2).

For each 1 ≤ a ≤ L, let φ_a(t) = Φ(t, µ_a). The Hessian of φ_a with respect to the variables {t_{jl}}, with rows and columns indexed in the order t_{11}, ..., t_{1k}, ..., t_{k1}, ..., t_{kk}, is the k² × k² matrix

A_a = ( ∂²φ_a / ∂t_{jl} ∂t_{j′l′} ).

Additionally, the constraint gradient matrix is

B = ( ∂g_i / ∂t_{jl} ), 1 ≤ i ≤ k(k+1)/2,

with columns indexed in the same order, and let Z denote the k(k+1)/2 × k(k+1)/2 zero matrix. For each integer b satisfying k(k+1)/2 + 1 ≤ b ≤ k², let A_a^b represent the leading principal b × b submatrix of A_a, and let B^b denote the submatrix of B formed by its k(k+1)/2 rows and first b columns. The b-th bordered Hessian of φ_a is then defined as

H_a^b = ( A_a^b   (B^b)^⊤
          B^b     Z       ).
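The machinery above can be seen on a minimal constrained problem: two variables and one constraint, so that n = 2 plays the role of k², m = 1 the role of k(k+1)/2, and b runs over {2} only. The objective and constraint below are toy stand-ins, not the orthomax problem; the sign tests mirror conditions (6) and (7):

```python
import sympy as sp

x, y, mu = sp.symbols('x y mu', real=True)
Q = x**2 + 2*y**2          # toy stand-in for the criterion
g = x**2 + y**2 - 1        # one equality constraint, analogue of T^T T = I_k
Phi = Q + mu * g           # Lagrange function

# all stationary points of the constrained problem
sols = sp.solve([Phi.diff(x), Phi.diff(y), g], [x, y, mu], dict=True)

n, m = 2, 1                # variables and constraints; b ranges over {m+1, ..., n} = {2}

def classify(s):
    phi = Phi.subs(mu, s[mu])
    # bordered Hessian: Hessian block of phi plus the constraint gradient border
    H = sp.Matrix([[phi.diff(x, 2), phi.diff(x, y), g.diff(x)],
                   [phi.diff(x, y), phi.diff(y, 2), g.diff(y)],
                   [g.diff(x),      g.diff(y),      0        ]])
    d = H.det().subs(s)
    if (-1)**n * d > 0:        # analogue of condition (6): local maximizer
        return 'max'
    if (-1)**m * d > 0:        # analogue of condition (7): local minimizer
        return 'min'
    return 'indeterminate'

labels = {(s[x], s[y]): classify(s) for s in sols}
# the constrained maxima of x^2 + 2y^2 on the unit circle are (0, +-1), the minima (+-1, 0)
```

Placing the border last rather than first does not change the determinants, since swapping the blocks is a symmetric permutation of rows and columns.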
To test definiteness (i.e., nonsingularity of the Hessian on the tangent space), we require that det H_a^b is nonzero and satisfies the appropriate sign condition. Now, we establish a property of bordered Hessians relevant to our next algorithm (refer to (Magnus and Neudecker, 2019, Chapter 7, Section 13) for details).

Theorem 3. For any t ∈ R^{k²} such that g(t) = 0 for all g ∈ G, we have:

1. The point t is a second-order sufficient local maximizer if and only if, for every integer b ∈ {k(k+1)/2 + 1, k(k+1)/2 + 2, ..., k²}, the corresponding bordered Hessian satisfies

(−1)^b det H_a^b > 0. (6)

2. The point t is a second-order sufficient local minimizer if and only if, for every integer b ∈ {k(k+1)/2 + 1, k(k+1)/2 + 2, ..., k²}, the corresponding bordered Hessian satisfies

(−1)^{k(k+1)/2} det H_a^b > 0. (7)

3. The point t is a second-order indeterminate point (i.e., it does not satisfy the conditions for a local extremum) if and only if there exists some integer b ∈ {k(k+1)/2 + 1, k(k+1)/2 + 2, ..., k²} for which neither condition (6) nor (7) holds.

Furthermore, Algorithm 2 classifies all stationary points t_1, ..., t_L computed by Algorithm 1 as follows: second-order sufficient local maxima ("max"), second-order sufficient local minima ("min"), and second-order indeterminate points ("indeterminate").

Although we focus only on orthomax rotations in this study, the algebraic approach proposed in this section is applicable to arbitrary orthogonal rotations whose criteria are algebraic functions and satisfy the required differentiability assumptions.

6 Monte Carlo Simulations

Numerical algorithms, including the gradient projection method (Jennrich, 2001), depend on initial values and are incapable of identifying all stationary points. In contrast,
our algebraic approach, formulated in Algorithm 1, operates independently of initial values and exactly computes all stationary points. Consequently, Algorithm 2, which utilizes Algorithm 1 to classify stationary points, provides a rigorous framework for analyzing the shapes of stationary points in factor rotation criteria.

Algorithm 2 An algorithm to classify all stationary points into "max," "min," and "indeterminate"
Require: all stationary points t_1, ..., t_L computed by Algorithm 1
Ensure: stationary point patterns {Pattern}, where Pattern is "max," "min," or "indeterminate"
1: P ← {}
2: for a = 1, ..., L do
3:   if every b = k(k+1)/2 + 1, ..., k² satisfies (6) then
4:     P ← P ∪ {max}
5:   else if every b = k(k+1)/2 + 1, ..., k² satisfies (7) then
6:     P ← P ∪ {min}
7:   else
8:     P ← P ∪ {indeterminate}
9:   end if
10: end for
11: return P

In this section, we present a Monte Carlo simulation employing Algorithms 1 and 2 to examine the characteristics of orthomax criteria. Specifically, we investigate the quartimax criterion Q_0, the varimax criterion Q_1, the equamax criterion Q_{k/2}, and the parsimax criterion Q_{p(k−1)/(p+k−2)}.

Prior to discussing the Monte Carlo simulation results, we detail the initial solutions used in our Monte Carlo simulation. These initial solutions were derived from Theorem 1, as described below. First, we consider the following initial solution:

A = ( 0.50×1.0   0.40×1.0   0.10×1.0
      0.50×1.1   0.40×1.1   0.10×1.1
      0.50×1.2   0.40×1.2   0.10×1.2
      0.40×1.0  −0.60×1.0   0.40×1.0
      0.40×1.2  −0.60×1.2   0.40×1.2
      0.40×0.6  −0.60×0.6   0.40×0.6
      0.33×1.0  −0.24×1.0   0.69×1.0
      0.33×1.2  −0.24×1.2   0.69×1.2
      0.33×1.1  −0.24×1.1   0.69×1.1 ).

We systematically vary the entries of the initial solutions to evaluate the properties of global optima, all stationary points, and other related aspects. The initial solution A satisfies condition 2 of Theorem 1, thereby guaranteeing the existence of an orthogonal matrix T such that Λ = AT has a perfect simple structure.
Additionally, the number of clusters formed by the rows of A, which adhere to conditions 2(a) and 2(b), equals the number of common factors k = 3. Consequently, the global optimum can be determined up to column permutations and column sign changes.

Next, we consider two methodologies to incrementally disrupt the parallel clusters of the initial solution A in 27 stages. Define

S = ( 1 4 7 10 13 16 19 22 25
      2 5 8 11 14 17 20 23 26
      3 6 9 12 15 18 21 24 27 ),

W = ( 1  2  3  4  5  6  7  8  9
      10 11 12 13 14 15 16 17 18
      19 20 21 22 23 24 25 26 27 ).

Let U_{ij} (1 ≤ i ≤ 9, 1 ≤ j ≤ 3) be independent and identically distributed (i.i.d.) random variables sampled from the uniform distribution U(−1, 1). These variables represent perturbations introduced to the initial loading matrix A to induce variability. Then, for a given integer ℓ (ℓ = 1, ..., 27), the perturbations U_{ij} are incrementally added to A_{ij} as follows:

(A_ℓ^S)_{ij} = A_{ij} + U_{ij} 1{S_{ij} ≤ ℓ}, (A_ℓ^W)_{ij} = A_{ij} + U_{ij} 1{W_{ij} ≤ ℓ},

where 1(·) denotes the indicator function. Perturbations U_{ij} that do not satisfy

Σ_{j=1}^3 (A_ℓ^S)²_{ij} ∈ [0, 1], Σ_{j=1}^3 (A_ℓ^W)²_{ij} ∈ [0, 1]

are discarded, and new random values are sampled iteratively until these conditions are satisfied.

As ℓ increases, the entries of A are progressively influenced by the noise U_{ij}, resulting in a gradual dissolution of the parallel clusters. Specifically, the parallel clusters in A_ℓ^S vanish for ℓ ≥ 12, whereas in A_ℓ^W they vanish for ℓ ≥ 22. Thus, the loss of parallelism in A_ℓ^S occurs more abruptly compared to A_ℓ^W.
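The staged perturbation can be sketched as follows. Two caveats: the extraction leaves the orientation of S and W ambiguous relative to the 9 × 3 index range of U_ij, so the two entry orderings below (one filling column by column, one row by row) are assumptions for illustration; and the rejection sampling is done per row here, which is one reading of the iterative resampling described in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# the initial solution A from the text (rows are scaled copies of three base rows)
base = np.array([[0.50, 0.40, 0.10], [0.40, -0.60, 0.40], [0.33, -0.24, 0.69]])
scale = [1.0, 1.1, 1.2, 1.0, 1.2, 0.6, 1.0, 1.2, 1.1]
A = np.vstack([base[i // 3] * scale[i] for i in range(9)])

# assumed 9x3 entry orderings standing in for S and W
S_order = np.arange(1, 28).reshape(3, 9).T   # fills column by column
W_order = np.arange(1, 28).reshape(9, 3)     # fills row by row

def perturb(A, order, ell, rng, max_tries=20000):
    """A_ell = A + U * 1{order <= ell}, with U_ij ~ U(-1, 1) resampled per row
    until each row's sum of squares stays in [0, 1]."""
    mask = order <= ell
    out = A.copy()
    for i in range(A.shape[0]):
        if not mask[i].any():
            continue
        for _ in range(max_tries):
            u = rng.uniform(-1.0, 1.0, size=A.shape[1])
            row = A[i] + u * mask[i]
            if (row**2).sum() <= 1.0:
                out[i] = row
                break
        else:
            raise RuntimeError('norm constraint never satisfied')
    return out

A_S = perturb(A, S_order, 9, rng)   # first 9 entries (in the assumed S-order) perturbed
A_W = perturb(A, W_order, 6, rng)   # first 6 entries (in the assumed W-order) perturbed
```

Whatever the true orientation, the scheme is the same: a fixed ordering of the 27 entries, an indicator mask at stage ℓ, and rejection sampling to keep every row of the perturbed loadings inside the unit ball.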
The initial solutions A_ℓ^S and A_ℓ^W are referred to as Type S and Type W, respectively.

We generated 50 sets of U = (U_{ij}) to compare results derived using the GPArotation package, global optima identified by Algorithm 1, and stationary points obtained using Algorithm 1. The GPArotation package was implemented in R and was based on (Jennrich, 2001). All parameters, including the threshold for the convergence assessment, were set to their default values. Algorithm 1 was implemented in a way that allows the computer algebra system Mathematica to be executed from the computer algebra system SageMath. In particular, our preliminary investigation revealed that, for initial solutions like those generated above, only a finite number of algebraic solutions existed. Therefore, our implementation does not employ Steps 6, 7, and 8 of Algorithm 1. On the other hand, for initial solutions in which the number of clusters satisfying conditions 2(a) and 2(b) of Theorem 1 was less than the number of factors, an infinite number of algebraic solutions were found to exist. Accordingly, when performing such simulations, it is necessary to include Steps 6, 7, and 8 of Algorithm 1. The computational time of our algorithm varied from a few minutes to several tens of minutes to obtain results for a single dataset.

The simulation results are illustrated in Figures 1 through 3.
Figure 1 depicts the number of rows in which the absolute values of at least two elements are less than 0.1, referred to as "perfect simple rows." Figure 2 illustrates the number of rows in which the absolute value of at least one element is less than 0.1, denoted as "moderately simple rows." Figure 3 displays the number of elements with absolute values less than 0.1, identified as "zero elements." The horizontal axis in each figure represents the index ℓ (ℓ = 1, ..., 27), while the vertical axis indicates the averages of "perfect simple rows," "moderately simple rows," and "zero elements," computed across the fifty instances of A_ℓ^S and A_ℓ^W. Each figure consists of three panels: the left panels present results from the GPArotation package, the middle panels show global optima determined by Algorithm 1, and the right panels illustrate results from the stationary points maximizing the number of "perfect simple rows," "moderately simple rows," and "zero elements," respectively. The upper panels in each figure correspond to Type S results, whereas the lower panels correspond to Type W results. Within each panel, the blue lines represent the quartimax criterion Q_0, the red lines the varimax criterion Q_1, the green lines the equamax criterion Q_{k/2}, and the purple lines the parsimax criterion Q_{p(k−1)/(p+k−2)}.

[Figure 1: rows such that the absolute values of two or more elements are less than 0.1. Panels (left to right): GPArotation, Global Optima, Stationary Points; rows: Type S (top), Type W (bottom); lines: quartimax, varimax, equamax, parsimax.]

As illustrated in Figures 1–3, the
behavior of the GPArotation output closely resembles that of the global optima computed by Algorithm 1. However, GPArotation may in general converge to a stationary point that does not correspond to a global optimum. Since Algorithm 1 is designed to guarantee global optimality, we obtained results that highlight the favorable performance of GPArotation.

[Figure 2: rows such that the absolute values of one or more elements are less than 0.1. Panels as in Figure 1.]

[Figure 3: elements whose absolute values are less than 0.1. Panels as in Figure 1.]

Moreover, the presence of numerous perfect simple rows, moderately simple rows, and zero elements indicates that the obtained rotation results are highly interpretable. This outcome is unexpected: the quartimax criterion yields more interpretable rotation results than the varimax criterion, despite the greater popularity of the latter.

Furthermore, a stationary point can yield simpler rotation results than the global optima, particularly in varimax rotation. Notably, these stationary points are not necessarily local maxima. While the orthomax criteria are designed to extract simple structures, they do not fully achieve this objective.
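The three interpretability counts used in Figures 1–3 are straightforward to compute for any loading matrix; a small helper (the example Λ is hypothetical):

```python
import numpy as np

def simplicity_counts(Lam, thresh=0.1):
    """Return (perfect simple rows, moderately simple rows, zero elements):
    rows with >= 2 entries below thresh in absolute value, rows with >= 1
    such entry, and the total number of such entries."""
    small = np.abs(Lam) < thresh
    per_row = small.sum(axis=1)
    return int((per_row >= 2).sum()), int((per_row >= 1).sum()), int(small.sum())

# a 9x3 loading matrix with perfect simple structure: one nonzero entry per row
Lam = np.zeros((9, 3))
Lam[0:3, 0] = [0.9, 0.8, 0.7]
Lam[3:6, 1] = [0.8, 0.6, 0.9]
Lam[6:9, 2] = [0.7, 0.9, 0.6]

perfect, moderate, zeros = simplicity_counts(Lam)   # 9, 9, 18
```

For a 9 × 3 matrix with perfect simple structure, all nine rows are both "perfect simple" and "moderately simple," and the 18 vanishing entries are all "zero elements," which is the ceiling the curves in Figures 1–3 approach.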
Our algebraic framework enables the computation of all stationary points, facilitating the selection of solutions that align with analysts' interpretability preferences. The following matrices Λ = AT present, from left to right: global optima, solutions prioritizing perfect simple rows, solutions focusing on moderately simple rows, and solutions emphasizing zero elements. Here, T is selected from the set t_1, ..., t_L generated by Algorithm 1. Specifically, the first block corresponds to an initial Type W solution at index ℓ = 9, the second block to ℓ = 18, and the third block to ℓ = 27. Components whose absolute values, truncated to two decimal places, are below 0.1 are denoted by "···" in the following matrices.

ℓ = 9 (global optima = perfect simple rows = moderately simple rows = zero elements):

  .12  −.40  −.79
 −.12   .29   .52
  .74  −.54  −.79
 −.82   ···   ···
 −.99   ···   ···
 −.49   ···   ···
  ···   ···  −.79
  ···   .11  −.95
  ···   .10  −.87

ℓ = 18, global optima:

 −.30  −.21  −.82
  .25   .14   .53
 −.93   ···  −.82
 −.15   .64   .11
  .30  −.35  −.63
 −.20  −.40   .51
  .14   ···  −.79
  .17   ···  −.94
  .15   ···  −.87

ℓ = 18, perfect simple rows = moderately simple rows = zero elements:

 −.79  −.41   ···
  .52   .30   ···
  .33  −.61   ···
  ···   .35   .56
 −.63   ···  −.45
  .53  −.39  −.15
 −.79   .10   ···
 −.95   .12   ···
 −.87   .11   ···

ℓ = 27, global optima:

  ···  −.79   .42
  ···   .53  −.29
  .96   ···   .42
  .11   .47   .46
 −.47  −.62   ···
  .41   ···  −.53
  .13  −.15   .95
 −.38   .44   .23
 −.55   .56   .24

ℓ = 27, perfect simple rows:

  .58  −.55  −.40
 −.40   .34   .30
  .23   .67  −.40
  .35   .48   .30
  .11  −.77   ···
 −.50   .25  −.38
  .97   ···   ···
  ···   .11   .61
  ···   ···   .81

ℓ = 27, moderately simple rows = zero elements:

 −.62  −.65   ···
  .45   .40   ···
 −.71   .63   ···
 −.11   .39   .53
  ···  −.78   ···
  ···   .36  −.57
 −.70  −.12   .67
  .34   ···   .52
  .50   ···   .64

Thus, utilizing our algebraic approach, various factor rotations can be systematically obtained. This approach enables the computation of all stationary points independently of initial values, presenting a distinct advantage over numerical methods such as the gradient projection method.

The left panels of Figure 4 illustrate the mean Euclidean distances between GPArotation outputs and global optima, while the right panels display the mean distances between GPArotation outputs and the nearest stationary points. Except for the varimax criterion, the Euclidean distance between GPArotation outputs and global optima is nearly zero, providing strong evidence that GPArotation reliably converges to global optima.

[Figure 4: Euclidean distances from the GPArotation outputs. Left: distances to global optima; right: distances to the nearest stationary points; rows: Type S (top), Type W (bottom); lines: quartimax, varimax, equamax, parsimax.]

The Euclidean distance corresponding to the varimax criterion may initially appear relatively large in comparison to the other criteria.
However, the maximum possible Euclidean distance between two 9 × 3 loading matrices whose entries lie in [−1, 1] is √(3 × 9 × 2²) ≈ 10.392, which indicates that the observed Euclidean distances for the varimax criterion remain relatively small. Moreover, examining the right panels reveals a behavior closely resembling that of the left panels, suggesting that GPArotation has effectively converged to a global optimum. Notably, our algebraic approach, specifically Algorithm 1, guarantees the computation of all global optima and stationary points. Consequently, it facilitates the assessment of the performance of existing optimization algorithms, such as the gradient projection method and its implementation in GPArotation.

This section concludes with the averages of the numbers of second-order sufficient local maxima ("max"), second-order sufficient local minima ("min"), and second-order indeterminate points ("indeterminate") associated with the orthomax criteria. The behavior of the orthomax criteria as objective functions is analyzed through these averages, as illustrated in Figure 5.

The orthomax criteria, as objective functions of maximization problems, exhibit a desirable property whereby the computational results indicate a unique maximum in most experiments. As demonstrated in Figure 5, most stationary points are classified as "min" or "indeterminate." Consequently, even with established algorithms such as the gradient projection method, testing multiple initial values combined with bordered Hessians to ascertain local maximality will often guide the algorithm toward global optima.
[Figure 5: second-order sufficient local maxima ("max"), second-order sufficient local minima ("min"), and second-order indeterminate points ("indeterminate"). Panels (left to right): Max, Min, Indeterminate; rows: Type S (top), Type W (bottom); lines: quartimax, varimax, equamax, parsimax.]

7 Conclusion and future works

In this study, we present
https://arxiv.org/abs/2504.21288v1
the theoretical results concerning perfect simple structures and Thurstone simple structures, formalized in Theorems 1 and 2, respectively. Notably, the Monte Carlo simulation conducted in Section 6 employs initial solutions constructed based on Theorem 1. Furthermore, we introduce Algorithm 1, which is based on an algebraic approach that is fundamentally distinct from numerical methods such as the gradient projection technique, which has been extensively utilized. Specifically, numerical approaches depend on the initial values, rendering it infeasible to determine all stationary points. In contrast, our algebraic approach operates independently of initial values, thereby ensuring that the output remains unaffected while enabling the computation of all stationary points. Consequently, our algebraic framework provides a systematic method for selecting interpretable solutions from the set of computed stationary points. In particular, Section 6 demonstrates that stationary points having greater simplicity than global optima can be identified, and our algebraic method facilitates factor rotation based on such stationary points. Moreover, the Monte Carlo simulations conducted in this study reveal that the quartimax criterion tends to yield simpler solutions than the widely adopted varimax criterion. Additionally, favorable outcomes were observed for the GPArotation package, which exhibits a strong tendency to converge to global optima, and for the uniqueness of local maxima across multiple initial solutions under the quartimax, varimax, equamax, and parsimax criteria — an advantageous property in optimization problems with a maximization objective. In this study, we employ an algebraic framework to examine the properties and behavior of orthogonal rotations. Given its broader applicability, future research may extend this approach to analyze oblique rotations, which are more frequently utilized than orthogonal rotations.
The insights derived from our investigations of orthogonal rotations are expected to inform such subsequent studies.

Notably, the proposed method demands considerably greater computational resources than numerical techniques such as the gradient projection method. Our simulation accounts for three factors; however, as the number of factors increases, computational constraints may arise, rendering solutions infeasible within practical time limits. Therefore, optimizing computational efficiency and memory utilization remains a critical challenge for future research.

Acknowledgements

This work was supported by JSPS KAKENHI Grant Numbers JP23H04474 (K.H.), JP25H01482 (R.F.), JP20K14312 (Y.K.), JP21K18312 (Y.K.), and JP23H03352 (M.Y.).

References

T. Becker and V. Weispfenning. Gröbner Bases. Springer New York, New York, NY, 1993. ISBN 978-1-4612-0913-3. doi: 10.1007/978-1-4612-0913-3_6. URL https://doi.org/10.1007/978-1-4612-0913-3_6.

C. A. Bernaards and R. I. Jennrich. Orthomax rotation and perfect simple structure. Psychometrika, 68(4):585–588, 2003. doi: 10.1007/BF02295613. URL https://doi.org/10.1007/BF02295613.

M. W. Browne. An overview of analytic rotation in exploratory factor analysis. Multivariate Behavioral Research, 36(1):111–150, 2001. doi: 10.1207/S15327906MBR3601_05. URL https://doi.org/10.1207/S15327906MBR3601_05.

D. A. Cox, J. Little, and D. O'Shea. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer Publishing Company, Incorporated, 4th edition, 2015. ISBN 3319167200.

G. Debreu. Definite and semidefinite quadratic forms. Econometrica, 20(2):295–300, 1952. ISSN 00129682, 14680262. URL http://www.jstor.org/stable/1907852.

H. H. Harman. Modern Factor Analysis. University of Chicago Press, 1976.

R. I. Jennrich. A simple general procedure for orthogonal rotation. Psychometrika, 66(2):289–306, 2001. doi: 10.1007/BF02294840. URL https://doi.org/10.1007/BF02294840.
R. I. Jennrich. Rotation to simple loadings using component loss functions: The orthogonal case. Psychometrika, 69(2):257–273, 2004. doi: 10.1007/BF02295943. URL https://doi.org/10.1007/BF02295943.

H. F. Kaiser. An index of factorial simplicity. Psychometrika, 39(1):31–36, 1974. doi: 10.1007/BF02291575. URL https://doi.org/10.1007/BF02291575.

N. V. Kartechina, L. V. Bobrovich, L. I. Nikonorova, N. V. Pchelinceva, and R. N. Abaluev. Practical application of variance analysis of four-factor experience data as a technology of scientific research. IOP Conference Series: Materials Science and Engineering, 919(5):052030, 2020. ISSN 1757-8981. doi: 10.1088/1757-899x/919/5/052030.

Y. Lin, S. Ghazanfar, K. Y. X. Wang, J. A. Gagnon-Bartsch, K. K. Lo, X. Su, Z.-G. Han, J. T. Ormerod, T. P. Speed, P. Yang, and J. Y. H. Yang. scMerge leverages factor analysis, stable expression, and pseudoreplication to merge multiple single-cell RNA-seq datasets. Proceedings of the National Academy of Sciences, 116(20):9775–9784, 2019. ISSN 0027-8424. doi: 10.1073/pnas.1820006116.

J. R. Magnus and H. Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley, 3rd edition, 2019. ISBN 9781119541202.

A. S. Shkeer and Z. Awang. Exploring the items for measuring the marketing information system construct: An exploratory factor analysis. International Review of Management and Marketing, 9(6):87–97, 2019. doi: 10.32479/irmm.8622.

J. Shurrab, M. Hussain, and M. Khan. Green and sustainable practices in the construction industry. Engineering, Construction and Architectural Management, 26(6):1063–1086, 2019. ISSN 0969-9988. doi: 10.1108/ecam-02-2018-0056.

C. Spearman. "General Intelligence," Objectively Determined and Measured. The American Journal of Psychology, 15(2):201, 1904. ISSN 0002-9556. doi: 10.2307/1412107.

L. L. Thurstone. Multiple factor analysis. University of Chicago Press, 1947.

N. Vilkaite-Vaitone, I. Skackauskiene, and G. Díaz-Meneses.
Measuring Green Marketing: Scale Development and Validation. Energies, 15(3):718, 2022. doi: 10.3390/en15030718.

Appendix A Basic algebraic concepts

This section presents fundamental concepts in polynomial ring theory, focusing on fields and rings, along with illustrative examples. We begin by formally defining fields.

Definition 1. A field is a set F equipped with two binary operations "+" and "·" defined on F satisfying the following conditions:

1. for any a,b,c ∈ F, (a+b)+c = a+(b+c) and (a·b)·c = a·(b·c) (associativity),
2. for any a,b,c ∈ F, a·(b+c) = a·b + a·c (distributivity),
3. for any a,b ∈ F, a+b = b+a and a·b = b·a (commutativity),
4. there exist 0, 1 ∈ F such that a+0 = a·1 = a for any a ∈ F (identities),
5. for any a ∈ F, there exists b ∈ F such that a+b = 0 (additive inverses),
6. for any a ∈ F with a ≠ 0, there exists b ∈ F such that a·b = 1 (multiplicative inverses).

For instance, the sets Q, R, and C are fields, as they satisfy the above conditions with the usual sum "+" and product "·". Conversely, Z does not form a field, as it fails to satisfy the requirement of multiplicative inverses. Indeed, the element 2 ∈ Z lacks an element b ∈ Z such that 2·b = 1. We now proceed to the definition of a commutative ring.

Definition 2. A commutative ring is a set R equipped with two binary operations, "+" and "·", that satisfy conditions 1–5 outlined in Definition 1.

As previously noted, Z is not a field; however, it is a commutative ring. Furthermore, the set of polynomials is a commutative ring. In this study, we consider algebraic equations of the form (5), where such equations are defined as f = 0, with f being a polynomial whose coefficients belong to a specified field. Accordingly, we provide a formal definition of polynomials.

Definition
3. A monomial in z = (z_1,...,z_m) is an expression of the form z^α = z_1^{α1} ··· z_m^{αm}, where the exponent vector α = (α_1,...,α_m) consists of nonnegative integers, that is, α_i ∈ Z≥0. A polynomial f in z with coefficients in the real number field R is a finite linear combination (with coefficients in R) of monomials. Explicitly, we express f as f = ∑_{α=(α_1,...,α_m)∈Z^m≥0} a_α z^α, where the summation is taken over a finite set of m-tuples α = (α_1,...,α_m). The set of all polynomials in z with coefficients in R is denoted R[z].

As noted above, R[z] = R[z_1,...,z_m] is a commutative ring. Specifically, R[z] is referred to as the polynomial ring. Analogous to the treatment of linear equations via linear subspaces, algebraic equations are systematically addressed through ideals in a polynomial ring. We now proceed to define ideals. In general, ideals are defined for an arbitrary ring. For simplicity, we present the definition of ideals for commutative rings.

Definition 4. Let R be a commutative ring. A subset I ⊂ R is an ideal if it satisfies the following conditions:

1. 0 ∈ I,
2. if a,b ∈ I, then a+b ∈ I,
3. if a ∈ I and b ∈ R, then a·b ∈ I.

The concept of ideals parallels that of linear subspaces, as both structures are closed under addition and multiplication. However, an essential distinction lies in the multiplication: for a linear subspace, we multiply by an element of the field, whereas for ideals, we multiply by an element of the ring. Just as linear equations are addressed via linear spans, algebraic equations are handled via ideals generated by the polynomials they contain. We define such an ideal in the polynomial ring R[z] as follows:

⟨f_1,...,f_r⟩ = { ∑_{i=1}^r q_i f_i : q_i ∈ R[z] } ⊂ R[z],

where f_1,...,f_r ∈ R[z]. The set ⟨f_1,...,f_r⟩ is referred to as the ideal generated by f_1,...,f_r. The ideal ⟨f_1,...,f_r⟩ is analogous to the span of a finite set of vectors.
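To make the generation analogy concrete, the following sketch (an illustrative example of my own, not taken from the paper) tests membership in a generated ideal with sympy; the generators f1, f2 and the candidates g, x are made up.

```python
# Illustrative sketch: membership in the ideal <f1, f2> of R[x, y],
# checked via a Groebner basis, plus an explicit "linear combination
# with polynomial coefficients" certificate as in the displayed definition.
from sympy import symbols, groebner, expand

x, y = symbols("x y")
f1, f2 = x**2 - y, y**2 - x          # generators of <f1, f2>
g = x**4 - x                          # candidate element

G = groebner([f1, f2], x, y, order="lex")
print(G.contains(g))                  # True: g lies in <f1, f2>
print(G.contains(x))                  # False: x alone does not

# Certificate: g = (x**2 + y)*f1 + 1*f2, i.e. q1 = x**2 + y, q2 = 1.
print(expand((x**2 + y)*f1 + f2 - g))  # 0
```

The zero difference at the end exhibits the polynomial coefficients q_i explicitly, which is exactly what membership in ⟨f_1,...,f_r⟩ means.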
In each case, elements are formed through linear combinations, utilizing field coefficients for a span and polynomial coefficients for an ideal. We now show that ⟨f_1,...,f_r⟩ satisfies the properties of an ideal in R[z].

Lemma 1. The set ⟨f_1,...,f_r⟩ ⊆ R[z] is an ideal.

Proof. Let J = ⟨f_1,...,f_r⟩. Substituting the zero polynomials h_1 = ··· = h_r = 0 yields 0 = ∑_{i=1}^r h_i f_i, which implies 0 ∈ J. Consider h,g ∈ J. By the definition of J, we can write h = ∑_{i=1}^r h_i f_i and g = ∑_{i=1}^r g_i f_i with h_1,...,h_r, g_1,...,g_r ∈ R[z]. Since

h + g = ∑_{i=1}^r h_i f_i + ∑_{i=1}^r g_i f_i = ∑_{i=1}^r (h_i + g_i) f_i

holds, we obtain h+g ∈ J by h_1+g_1,...,h_r+g_r ∈ R[z] and the definition of J. Let h ∈ J and c ∈ R[z]. Then h = ∑_{i=1}^r h_i f_i holds for some h_1,...,h_r ∈ R[z]. Since

ch = c ∑_{i=1}^r h_i f_i = ∑_{i=1}^r (c h_i) f_i

holds, we have ch ∈ J by ch_1,...,ch_r ∈ R[z] and the definition of J. Thus, we conclude that J ⊆ R[z] is an ideal.

In particular, an ideal generated by monomials is referred to as a monomial ideal, a concept utilized in the subsequent section. The following proposition establishes a property of monomial ideals (Cox et al., 2015, Section 2.4, Lemmas 2 and 3). This property states that monomial ideal membership problems can be resolved without requiring specialized generator sets such as Gröbner bases; instead, membership can be determined by verifying whether the generators divide the monomials composing a given polynomial.

Proposition 3. Let f = ∑_{α=(α_1,...,α_m)∈Z^m≥0} a_α z^α be a polynomial in R[z], and let I = ⟨z^β : β ∈ B⟩ be a monomial ideal for some B ⊂ Z^m≥0. Then f ∈ I if and only if

∀α ∈ Z^m≥0 ∃β ∈ B (a_α ≠ 0 ⇒ z^β | z^α).

We introduce the notion of
radicals, whose properties will be utilized in the subsequent section.

Definition 5. An ideal I is radical if f^m ∈ I for some integer m ≥ 1 implies f ∈ I.

This section concludes with a fundamental result concerning radical ideals, known as Hilbert's Nullstellensatz (Becker and Weispfenning, 1993, Theorem 7.40). This theorem establishes that the vanishing of a polynomial over a system of algebraic equations can be determined via radical ideal membership.

Proposition 4. Let f be a polynomial in R[z], and let I = ⟨p_1,...,p_ℓ⟩ be a radical ideal in R[z]. Then f ∈ I if and only if

∀z ∈ C^m (p_1(z) = 0, ..., p_ℓ(z) = 0 ⇒ f(z) = 0).

In the next section, we establish Theorem 2 by leveraging the properties of monomial ideals and radical ideals.

Appendix B Proof for Theorem 2

To establish Theorem 2, it suffices to demonstrate that an initial solution satisfying Thurstone simple structure is generally not a stationary point of the orthomax criterion Q_ω(Λ) = Q_ω(AT), where 0 ≤ ω ≤ p. The proof begins with the computation of the partial derivatives of the orthomax criteria.
For λ_ij = ∑_{l=1}^m a_il t_lj, we derive the following:

∂λ_iv^n / ∂t_uv = ∂/∂t_uv [ ∑_{l=1}^m a_il t_lv ]^n = n a_iu [ ∑_{l=1}^m a_il t_lv ]^{n−1} = n a_iu λ_iv^{n−1}, and ∂λ_ij^n / ∂t_uv = 0 (j ≠ v).

Consequently, we obtain

∂Q_ω/∂t_uv = ∂/∂t_uv [ ∑_{i=1}^p ∑_{j=1}^m λ_ij^4 − (ω/p) ∑_{j=1}^m ( ∑_{i=1}^p λ_ij^2 )^2 ]
= ∂/∂t_uv [ ∑_{i=1}^p λ_iv^4 − (ω/p) ( ∑_{i=1}^p λ_iv^2 )^2 ]
= ∑_{i=1}^p ∂λ_iv^4/∂t_uv − ∂/∂t_uv [ (ω/p) ( ∑_{i=1}^p λ_iv^2 )^2 ]
= ∑_{i=1}^p 4 a_iu λ_iv^3 − (2ω/p) ( ∑_{i=1}^p λ_iv^2 ) ( ∑_{i=1}^p ∂λ_iv^2/∂t_uv )
= ∑_{i=1}^p 4 a_iu λ_iv^3 − (2ω/p) ‖λ_v‖^2 ( ∑_{i=1}^p 2 a_iu λ_iv )
= 4 ∑_{i=1}^p a_iu λ_iv ( λ_iv^2 − (ω/p) ‖λ_v‖^2 ),    (8)

where ‖λ_v‖^2 = ∑_{i=1}^p λ_iv^2.

If an initial solution satisfying Thurstone simple structure is a stationary point of the orthomax criterion, then substituting the identity matrix into Eq. (8) must yield zero. Hence, we examine the following polynomial in the polynomial ring R[a] = R[a_ij : 1 ≤ i ≤ p, 1 ≤ j ≤ k]:

∂Q_ω/∂t_uv (A I) = 4 { ∑_{i=1}^p a_iu a_iv ( a_iv^2 − (ω/p) ‖a_v‖^2 ) }.

Given Proposition 1, an initial solution is a stationary point of the orthomax criterion if and only if the following polynomial f(a) vanishes for each u ≠ v:

f(a) = ∑_{i=1}^p a_iu a_iv ( ( a_iu^2 − a_iv^2 ) − (ω/p) ( ‖a_u‖^2 − ‖a_v‖^2 ) ).
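Formula (8) can be sanity-checked numerically by comparing the closed-form partial derivative against a central finite difference; a sketch with numpy, using made-up A, T, and ω (written w), and treating (8) as the unconstrained partial derivative with respect to t_uv:

```python
# Numerical check (illustrative data) of (8):
# dQ_w/dt_uv = 4 * sum_i a_iu * l_iv * (l_iv**2 - (w/p) * ||l_v||**2),
# where Lambda = A @ T and ||l_v||**2 = sum_i l_iv**2.
import numpy as np

rng = np.random.default_rng(0)
p, m, w = 6, 3, 1.0              # p variables, m factors, orthomax weight
A = rng.normal(size=(p, m))      # hypothetical initial loading matrix
T = rng.normal(size=(m, m))      # arbitrary T: (8) is an unconstrained partial

def Q(T):
    L = A @ T
    return (L**4).sum() - (w / p) * ((L**2).sum(axis=0) ** 2).sum()

def dQ(T, u, v):
    L = A @ T
    lv = L[:, v]
    return 4 * np.sum(A[:, u] * lv * (lv**2 - (w / p) * np.sum(lv**2)))

u, v, eps = 1, 2, 1e-6
E = np.zeros((m, m)); E[u, v] = 1.0
fd = (Q(T + eps * E) - Q(T - eps * E)) / (2 * eps)   # central difference
print(abs(fd - dQ(T, u, v)))     # small: finite-difference error only
```

The agreement confirms that the chain-rule computation above is consistent term by term.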
By observing that the polynomial f(a) depends solely on the elements of the u-th and v-th columns of an initial solution A, it follows that the vanishing of f(a) is determined solely by the u-th and v-th columns of A. We assume that the initial solution satisfies a Thurstone simple structure of the class (γ,δ), where γ+δ ≠ p. Furthermore, we algebraically characterize the u-th and v-th columns of an initial solution A that conform to the Thurstone simple structure as follows:

2. for each j = u,v, ∏_{a∈α} a = 0 for all α ⊂ A_j = {a_1j,...,a_pj} such that |α| = p−m+1,
3. for each i = 1,...,γ_u we have a_iu = 0, and for each i = γ_u+1,...,γ_u+γ_v we have a_iv = 0,
4. for each i = γ_u+γ_v+1,...,γ_u+γ_v+δ, we have a_iu = a_iv = 0.

Note that conditions 3 and 4 lose no generality, thanks to row swaps within column groups. Moreover, consideration of condition 1 of Thurstone's rule is unnecessary due to the following observations:

• rows satisfying condition 1 in the u-th and v-th columns also fulfill conditions 3 or 4;
• rows satisfying condition 1 in columns other than u and v are unconstrained in the u-th and v-th columns.

In other words, the initial solution A satisfies a Thurstone simple structure of class (γ,δ) if and only if the following algebraic equations are satisfied for each u,v =
1,...,k, where u ≠ v:

0 = ∏_{a∈α} a  (α ⊂ A_u s.t. |α| = p−m+1)
0 = ∏_{a∈α} a  (α ⊂ A_v s.t. |α| = p−m+1)
0 = a_iu  (i = 1,...,γ_u)
0 = a_iv  (i = γ_u+1,...,γ_u+γ_v)
0 = a_iu = a_iv  (i = γ_u+γ_v+1,...,γ_u+γ_v+δ).    (9)

We want to show that the satisfaction of these equations by the initial solution A is a sufficient condition for A to be a stationary point. That is, we want to show that

∀a ∈ R^{pk} ((9) ⇒ f(a) = 0).    (10)

However, in the rest of this paper, we show that these conditions are generally not satisfied. Define

F_u = { ∏_{a∈α} a : α ⊂ A_u s.t. |α| = p−m+1 },
F_v = { ∏_{a∈α} a : α ⊂ A_v s.t. |α| = p−m+1 },
H_u = { a_iu : i = 1,...,γ_u },
H_v = { a_iv : i = γ_u+1,...,γ_u+γ_v },
H_uv = { a_iu, a_iv : i = γ_u+γ_v+1,...,γ_u+γ_v+δ }.

The ideal ⟨F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv⟩ is a radical and monomial ideal in R[a]. While it is trivial from the definition that ⟨F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv⟩ is a monomial ideal, the following lemma demonstrates that it is also a radical ideal.

Lemma 2. The ideal ⟨F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv⟩ is a radical ideal in R[a].

Proof. Assume for contradiction that ⟨F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv⟩ is not a radical ideal. Then, by Definition 5, there exists f ∈ R[a] such that f^m ∈ ⟨F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv⟩ and f ∉ ⟨F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv⟩ for some m ≥ 1. Let f = ∑_{α∈Z^{pk}≥0} c_α a^α, c_α ∈ R. Since ⟨F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv⟩ is a monomial ideal, we invoke Proposition 3, which provides the following equivalent conditions:

f^m ∈ ⟨F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv⟩ and f ∉ ⟨F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv⟩
⇐⇒ ∀t_1 ∈ Mon(f^m) ∃s_1 ∈ F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv (s_1 | t_1), and ∀t_2 ∈ Mon(f) ∀s_2 ∈ F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv (s_2 ∤ t_2),    (∗)

where Mon(f^m) and Mon(f) denote the sets of monomials of f^m and f, respectively. It follows that any monomial t_1 ∈ Mon(f^m) is a product of elements of Mon(f), thereby contradicting (∗).
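Proposition 3's divisibility criterion, which the proof above invokes, can be sketched directly in code; the encoding of monomials as exponent tuples and the example generators below are my own illustration, not the paper's implementation.

```python
# Sketch of Proposition 3: a polynomial lies in a monomial ideal iff every
# monomial with nonzero coefficient is divisible by some generator.
# Monomials are exponent tuples; a polynomial is a dict {exponents: coeff}.
def divides(beta, alpha):
    """z**beta divides z**alpha iff beta <= alpha componentwise."""
    return all(b <= a for b, a in zip(beta, alpha))

def in_monomial_ideal(f, generators):
    """Membership test for f in <z**beta : beta in generators>."""
    return all(
        any(divides(beta, alpha) for beta in generators)
        for alpha, c in f.items() if c != 0
    )

# Example in R[z1, z2] with ideal <z1, z2**2>:
gens = [(1, 0), (0, 2)]
f = {(3, 1): 2.0, (1, 0): -1.0, (0, 2): 5.0}   # 2*z1**3*z2 - z1 + 5*z2**2
g = {(0, 1): 1.0}                               # z2
print(in_monomial_ideal(f, gens))  # True: each monomial divisible by z1 or z2**2
print(in_monomial_ideal(g, gens))  # False: z2 divisible by neither generator
```

No Gröbner basis is needed here, which is precisely the point of the proposition.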
Moreover, some monomials contained in the polynomial r, which is a component of f, are not divisible by any monomial in F_u ∪ F_v ∪ H_u ∪ H_v ∪ H_uv:

r(a) = ∑_{i=k+ℓ+1}^p a_iu a_iv ( ( a_iu^2 − a_iv^2 ) − (ω/p) ( ‖a_u‖^2 − ‖a_v‖^2 ) ).

Hence, as the relation (10) is generally not satisfied by Propositions 3 and 4, Theorem 2 is established.
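A loose numerical illustration of the conclusion (hypothetical sizes and entries; only a few of the structure's zero conditions are imposed, and ω is normalized by p as in (8)): for generic remaining entries the polynomial f(a) does not vanish, so such an initial solution is not a stationary point.

```python
# Illustrative check (made-up data, not the paper's experiment): evaluate
# f(a) = sum_i a_iu*a_iv*((a_iu**2 - a_iv**2) - (w/p)*(||a_u||**2 - ||a_v||**2))
# on a loading matrix with a few of the zero conditions (3 and 4) imposed.
import numpy as np

rng = np.random.default_rng(1)
p, k, w = 6, 3, 1.0
u, v = 0, 1

A = rng.normal(size=(p, k))
A[0, u] = 0.0            # a zero in column u (condition 3 style)
A[1, v] = 0.0            # a zero in column v (condition 3 style)
A[2, u] = A[2, v] = 0.0  # a shared zero row in columns u, v (condition 4 style)

def f(A, u, v):
    au, av = A[:, u], A[:, v]
    return np.sum(au * av * ((au**2 - av**2) - (w / p) * (au @ au - av @ av)))

print(f(A, u, v))        # generically nonzero: A is not a stationary point
```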
Mining and Intervention of Social Networks Information Cocoon Based on Multi-Layer Network Community Detection†

Suwen Yang1 and Lei Shi1,2
1School of Mathematical Sciences, Fudan University, Shanghai, 200433, China
2Shanghai Key Laboratory for Contemporary Applied Mathematics, Fudan University, Shanghai, 200433, China
Email: swyang21@m.fudan.edu.cn, leishi@fudan.edu.cn

Abstract

With the rapid development of information technology and the widespread utilization of recommendation algorithms, users are able to access information more conveniently, while the content they receive tends to be homogeneous. Homogeneous viewpoints and preferences tend to cluster users into sub-networks, leading to group polarization and increasing the likelihood of forming information cocoons. This paper aims to handle information cocoon phenomena in debates on social media. In order to investigate potential user connections, we construct a double-layer network that incorporates two dimensions: relational ties and feature-based similarity between users. Based on the structure of the multi-layer network, we develop two graph auto-encoder (GAE) based community detection algorithms, which can be applied to the partition and determination of information cocoons. This paper tests these two algorithms on Cora, Citeseer, and synthetic datasets, comparing them with existing multi-layer network unsupervised community detection algorithms. Numerical experiments illustrate that the algorithms proposed in this paper significantly improve the prediction accuracy indicator NMI (normalized mutual information) and the network topology indicator Q. Additionally, an influence-based intervention measure on which the algorithms can operate is proposed. Through a Markov state transition model, we simulate the intervention effects, which illustrate that our community detection algorithms play a vital role in partitioning and determining information cocoons.
Simultaneously, our intervention strategy alleviates the polarization of viewpoints and the formation of information cocoons with minimal intervention effort.

Keywords and phrases: Multi-layer social network; Community detection; Information cocoon; Modularity tensor reconstruction; Graph auto-encoder (GAE); Markov state transition model.

†The corresponding author is Lei Shi.

arXiv:2504.21357v3 [cs.SI] 6 May 2025

1 Introduction

With the exponential growth of the Internet and the ubiquitous utilization of recommendation algorithms, the information cocoon phenomenon has gained increasing traction recently. As information companies continuously analyze users' preferences and refine recommendation algorithms to maximize business profits, individuals also benefit from increasingly personalized information services. While the rapid development of information technology improves the user experience, it inevitably exposes users to homogeneous information, which may contribute to information narrowing and group polarization, ultimately resulting in the formation of information cocoons [49]. The concept of information cocoons was first proposed in "Infotopia: How Many Minds Produce Knowledge" by Sunstein in 2006 [49]. People's attention instinctively focuses on topics where their interests lie, and they tend to search for ideas that support their standpoints. As for viewpoints opposed to their standpoints, they may choose to ignore them automatically. Information cocoons are warm and friendly spaces for people living in an information space, where people only accept the views they like and exclude the information they oppose [49]. Although it seems natural for individuals to search for the viewpoints where their interests lie, extensive research has demonstrated that personalized recommendations create filter bubbles, thus forcing the occurrence of information cocoon phenomena
[39], [17], [34]. Without recommendation systems, the search process is time-consuming and costly, requiring individuals to explore the entire event framework on their own. Under such circumstances, people can obtain a comprehensive understanding of events and are thus able to view issues objectively. The left part of Figure 1 depicts the search tracks of users under non-recommendation conditions. Under a recommendation system, internet technology operators can precisely locate what certain users are interested in, allowing users to browse what they want directly without seeing much redundant information. The right part of Figure 1 depicts the approximate browsing trajectory of users under the influence of recommendation algorithms.

Figure 1: Browsing paths with or without a recommendation system

In general, the widely recognized contributing factors to the formation of information cocoons stem mainly from two primary sources: information delivery mechanisms and personal information preferences [14]. Obviously, under the influence of data-driven algorithms, the impact of the former appears to be more severe. Recommendation algorithms not only deprive people of a comprehensive command of whole topics, making extreme views more easily generated, but also reduce the browsing time people need to find their points of interest, thereby accelerating the formation of information cocoons [39]. It has been suggested that homogeneous and one-sided information aggravates social opinion polarization and breeds extremism. When people find their opinion supported by others, they become confident and more extreme, thus accelerating the dissemination of radical emotions and even group conflicts [51].
Take the elections in 2012 and 2016 as examples: social media played a guiding role in shaping public opinion, which not only had an inducing impact on the final election results to some degree but also triggered offline conflicts in multiple states [50], [15]. Therefore, it has gradually become a high-profile issue to excavate information cocoons in time and take intervention measures, making full use of the dissemination of internet information.

Complex network modeling is a powerful tool for quantitatively analyzing social media problems. In a common social network, each node represents an individual, and the links between nodes symbolize the connections between individuals. Take an early network-simulated social activity case as an example: it begins with the emergence of incompatible opinions, leading to polarization in the karate club with the dissemination of standpoints and ultimately resulting in its split [7] (see Figure 2). A large number of researchers have studied dynamic networks that simulate the formation of information cocoons, focusing on the influence of powerful users and the selective acceptance of information [14], [13], [47]. Several investigations focus on information cocoons related to network topology, especially community structure [40], [56]. However, the majority of research on community structure focuses merely on single-layer networks, neglecting the algorithmic influence on users. Research on information cocoons that accounts for both the algorithm and the network topology appears to be absent. To fully explore the topological characteristics influenced by similarity-based recommendation algorithms, we construct a double-layer network that integrates user interactions and user similarity, capable of excavating sub-structures in social relationships and
imitating social interactions shaped by data-driven algorithms.

Figure 2: The separation of the karate club network

Information cocoons can be regarded as sub-structures with two main features. One is that most of the viewpoints in a sub-structure gradually approach consensus around one dominant viewpoint, while a small number of heterogeneous viewpoints lack influence. The other is that sub-structures are relatively closed, which means the sub-structure interacts less with the outside world, and opinions from outside can hardly influence the core members of the group [24]. Considering the aforementioned characteristics and complex network modeling for information cocoon research, our task is to explore the sub-networks within the overall topic information space: community detection.

As a downstream task of graph representation learning, a significant volume of approaches has emerged for exploring community structures, ranging from classical spectral clustering [33], [31] to deep graph learning [44], [35], [21], [52], [20]. Nonetheless, efficient unsupervised frameworks for multi-layer community detection remain scarce. Our research proposes two unsupervised frameworks for solving community detection problems in multi-layer networks, leveraging both modularity optimization and graph auto-encoders. The encoder, built upon a two-layer graph convolutional neural network, performs intra-layer feature fusion and inter-layer feature integration using the input adjacency tensor and modularity tensor. For the decoder, we explore two different strategies: the first approximates the true modularity tensor using feature reconstruction tensors from each layer independently, without feature fusion; the second concatenates the feature reconstruction matrices from all layers into a fused representation to approximate the modularity tensor for each layer. Current social media do not have a mature monitoring mechanism for information cocoons.
In order to avoid group polarization, they only fold or close debates on some abnormal comments. To address this gap, this paper focuses on the information cocoon phenomenon, proposing a more natural exploration and monitoring framework for discovering potential information cocoons, as well as an algorithmic intervention strategy for mitigating group polarization. As mentioned in previous research, the main features of an information cocoon are external closure and internal homogeneity [24]. Based on these two characteristics, this paper designs a community structure-based excavation framework for information cocoons, which can be applied to a social media dataset with user comments. Subsequently, the intervention measure is implemented within those sub-networks identified as trapped in information cocoons.

The whole framework of our research is demonstrated as follows. We construct a user relationship-similarity two-layer network based on user interaction correlation and comment feature similarity, as shown in Step 1 of Figure 3. Next, we segment the network through our proposed multi-layer network community detection algorithm in Step 2 and impose intervention measures on some of the sub-networks trapped in information cocoons. To be more precise, the highly influential nodes in sub-networks are substituted by other heterogeneous perspectives. According to the previous associations, those heterogeneous perspectives are recommended to other users, as illustrated in Step 3. In the remaining procedures, we simulate the propagation process with our Markov transition model.

Figure 3: Research framework

The main contributions of this paper are summarized
as follows.

• We study the phenomenon of information cocoons in the form of quantitative models. We consider the excavation of information cocoons as a dual-objective optimization problem based on graph data, owing to the homogeneity of viewpoints among users within the same information cocoon. This information cocoon excavation method provides a monitoring and warning mechanism.

• We propose a novel method for multi-layer network community detection tasks based on graph representation learning. This paper extends the graph auto-encoder (GAE) method from single-layer networks to multi-layer networks, proposing two modularity tensor reconstruction algorithms based on GAE. The first is a mixed graph embedding-based modularity tensor reconstruction method (MGE-MTR), which concatenates the feature matrices from the encoder into a single feature matrix, reconstructing each slice of the real modularity tensor during the decoding process. The second is the independent graph embedding-based modularity tensor reconstruction method (IGE-MTR), which utilizes each slice of the graph representation tensor to reconstruct the modularity tensor.

• We provide a robust intervention measure that can be flexibly applied to the algorithms. In the meantime, simulation experiments verify that our proposed intervention measures can mitigate the polarization of groups to a certain extent.

• The utilization of social network data is distinctive from other references. Most previous research primarily focuses on specific types of user connections [36]. However, we introduce a similarity network generated by a connection probability constructed from the cosine similarity of users' features:

P(a_ij = 1 | z_i, z_j) = cos(z_i, z_j) = (z_i · z_j) / (‖z_i‖ ‖z_j‖),

where a_ij is the (i,j) entry of the adjacency matrix, whose value equals 1 when there exists a connection between the i-th and j-th nodes, and z_i is the feature vector of the i-th node.
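A minimal sketch of the similarity-layer construction described above (hypothetical feature vectors; the Bernoulli sampling step and the per-layer Newman modularity matrix B = A − kkᵀ/2m are standard choices used here for illustration, not code from the paper):

```python
# Build the similarity layer: connect i, j with probability
# P(a_ij = 1 | z_i, z_j) = cos(z_i, z_j), then form the layer's Newman
# modularity matrix B = A - k k^T / (2m) (one slice of a modularity tensor).
import numpy as np

rng = np.random.default_rng(42)
n, d = 8, 5
Z = rng.random((n, d))                          # nonnegative features -> cos in [0, 1]

norms = np.linalg.norm(Z, axis=1)
C = (Z @ Z.T) / np.outer(norms, norms)          # pairwise cosine similarities
np.fill_diagonal(C, 0.0)                        # no self-loops

A_sim = (rng.random((n, n)) < C).astype(float)  # Bernoulli(cos) edge sampling
A_sim = np.triu(A_sim, 1)
A_sim = A_sim + A_sim.T                         # symmetrize

def modularity_matrix(A):
    k = A.sum(axis=1)        # degree vector
    two_m = A.sum()          # 2m = total degree
    return A - np.outer(k, k) / two_m

B = modularity_matrix(A_sim)
print(B.shape, abs(B.sum()) < 1e-8)             # rows and columns of B sum to zero
```

Stacking one such B per layer (relationship layer, similarity layer) yields the modularity tensor that the two reconstruction methods approximate.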
Adding the similarity layer is significant for excavating potential connections.

The remainder of the paper is organized as follows. Section 2 reviews related work. In Section 3, we illustrate the basic setup of multi-layer network community detection problems and offer theoretical guarantees. Section 4 presents the novel methods proposed in this paper, including two unsupervised frameworks for addressing community detection problems in multi-layer networks leveraging modularity optimization and graph auto-encoders, as well as a user influence-based intervention strategy and a double-layer intervention effect simulation system. These two proposed algorithms are compared with existing multi-layer network unsupervised community detection algorithms on the public Cora, Citeseer, and synthetic datasets in Section 5. Section 6 mainly focuses on real-world data analysis. First, we construct double-layer networks from the Weibo dataset and apply our algorithm to this network to explore latent information cocoons. Next, we adopt our proposed intervention strategy and apply a simulation system model to imitate standpoint dissemination in a double-layer network constructed from real-world data. In Section 7, we perform a sensitivity analysis for some of the parameters in our simulation system. Meanwhile, we draw the corresponding conclusions about the consequences of group polarization and information cocoons. The conclusion and further discussion can be found in Section 8.

2 Related Work

2.1 Information Cocoon

Most previous
https://arxiv.org/abs/2504.21357v3
research focuses on qualitative methods, typically conducted through questionnaires. Ren et al. [40] surveyed short-form video users via questionnaire and identified the main factors promoting or hindering the formation of information cocoons. More concretely, users' subjective preferences and stable interest-based recommendations can accelerate the formation of information cocoons, while receiving heterogeneous information and random recommendations can impede it. Tracking user behavior, Li et al. [23] proposed solutions to information cocoons from a macro and systematic perspective based on community structure. Hou et al. [17] simulated information propagation in social networks under the effect of recommendation systems, concluding that similarity-based recommendation systems can intensify the formation of information cocoons. Intrigued by these results, we begin with a multi-layer network structure and design a quantitative method for analyzing information cocoons by excavating the sub-structures in social platform data.

2.2 Social Network Modeling

The early development of community-based social network models primarily relied on dissemination principles, most notably those derived from the discrete SIR (Susceptible-Infected-Recovered) model introduced by Kermack and McKendrick in 1927 [6]. A link between two nodes indicates whether a direct connection exists between the disseminator and recipients. As the dissemination properties of public sentiment resemble those of infection, the information propagation process can be simulated by this model. Building on the basic infection transmission model, Daley and Kendall proposed the DK rumor dissemination model [10]. This model endows nodes with three statuses for simulating rumor dissemination in networks: like rumors and sentiments, a node can pass its disease to its neighbors.
Such models have gradually developed, based on complex network theory and network topology, into research on individual association and dissemination behavior [36]. Recently, in most social network research, connections are established through real contact between entities, such as interactions, relationships, friends, and colleagues [1]. This network model is restricted in studying diverse propagation patterns, as it accounts only for a single-layer structure, neglecting potential inter-layer coupling information.

Most datasets of this kind exhibit strong connections between entities and contain clear network topology, such as global trade networks and protein molecular data. Unlike those datasets, social network data contain much latent information beyond interaction relationships. With the diversity of propagation ways and channels, information dissemination involves two or more mutually influencing processes, meaning a single-layer network cannot fully reflect the information propagation process. For different scenarios, researchers have established multilayer networks. Based on the infectious (SIR) model, Yagan et al. [55] established a society-physical double-layer information diffusion network and further explored the interaction effects of information dissemination in double-layer networks. Based on two online social platforms, Twitter and Facebook, Magnani et al. [25] established a two-layer network and proposed a multilayer graph representation learning model that reflects the interaction between different platforms. Considering the single social media platform Facebook, Xiong et al. [54] extracted four kinds of contact relations (following, forwarding, mentioning, and replying), constructed multilayer networks, and analyzed sentiment dissemination
in social networks. These networks are constructed from entities' real contacts, such as following, mentioning, and replying, and tend to be sparse. Ioannidis et al. [18] established a two-layer network from the Cora and Citeseer datasets using real citation relations and the k-nearest neighbors of features. The training results of graph recurrent neural networks show that such two-layer networks containing feature connections can reveal multiple structures [18].

2.3 Community Detection

Network data can be divided into several disjoint vertex subsets such that connections within sub-networks are dense and connections between sub-networks are sparse. Excavating the potential sub-structure of networks has extensive applications in many areas. For instance, co-research areas can be discovered by segmenting cooperation networks; marketing strategies can be designed by exploring similar users; protein structure networks can reveal the interconnections between molecules; and partitioning social networks can be used to track rumor dissemination.

Many algorithms have been developed for single-layer networks, such as graph segmentation and clustering. Graph segmentation mainly comprises sub-graph partitioning and modularity optimization, two methods that convert community partitioning into optimization problems. Ng et al. [33] proposed a sub-graph partition algorithm using the Laplacian matrix as the objective function. Newman et al. [32] advanced the graph partition method through the modularity matrix. After that, advanced algorithms based on modularity optimization emerged: modularity-based community detection algorithms for directed and weighted graphs were proposed in 2007 [2], and the Louvain algorithm, an improvement on modularity optimization applicable to large-scale networks, was proposed in 2008 [5].
The overlapping community segmentation problem in undirected and unweighted graphs was solved by Shen et al. [45] in 2009. Graph clustering approaches obtain entity labels through graph representation feature vectors, mainly comprising spectral clustering [31] and stochastic block model-based statistical inference methods [22]. With the development of deep learning, graph representation learning methods have achieved excellent results in the downstream community detection task, including recurrent graph neural networks [35], graph convolutional neural networks [21], graph attention networks [52], and graph auto-encoders [20]. All the above algorithms are non-linear supervised or semi-supervised learning methods for obtaining low-rank graph representations, which require training samples with prior labels, whereas most real-world data lack such information. Some works have attempted to integrate the label sampling phase and the graph representation learning phase into an unsupervised two-stage framework [53], [8], [41]. Leveraging modularity and topological information, Choong et al. [9] proposed a variational graph auto-encoder reconstruction algorithm that can be applied to unsupervised community detection tasks without requiring prior label information.

Applying single-layer network community detection algorithms directly to multi-layer structures is challenging. Research on multi-layer networks can be broadly divided into two categories: one based on matrix decomposition and clustering, and the other on deep learning-based graph representation methods. Traditional matrix decomposition and clustering methods try to fuse the several layers of a network by mapping the multi-layer network onto a single-layer counterpart or by integrating the labeled vertices across multiple layers. Apart from the above methods,
deep graph representation learning has attracted much attention; most such methods integrate graph embedding and representation-vector clustering into two stages. Bahadori et al. [3] designed a fusion strategy based on local random walks for multi-layer networks. Song and Thiagarajan [48] proposed a deep random walk model combining the traditional random walk method with deep learning; nodes can randomly walk along coupling edges, thus achieving low-rank representations. Naderipour et al. [29] proposed a possibilistic c-means clustering model that utilizes topology and similarity features to divide overlapping communities in large-scale networks. Mansoureh et al. [26] considered a newly defined degree and proposed a multi-layer general type-2 fuzzy community detection model. Paul [37] constructed a multi-layer network confusion modularity degree and an expected modularity degree to find the optimal community labels. Ioannidis et al. [18] proposed a semi-supervised graph recurrent neural network framework for multi-layer networks, achieving better performance on node classification tasks.

2.4 Reconstruction of Modularity

Maximization of modularity for community partitioning was proposed in 2006 [30]; it seeks a partition that includes as many edges as possible within the same community. Let the indicator variable $Z(i, j) \in \{0, 1\}$ equal 1 when node $i$ and node $j$ belong to the same community, and 0 otherwise. Let $k_i$ denote the degree of node $i$, and let $m = \frac{1}{2}\sum_i k_i$ be the total number of edges in the network. The definition of modularity is:
\[
Q = \frac{1}{2m}\sum_{i,j}\left(a_{ij} - \frac{k_i k_j}{2m}\right) Z(i, j).
\]
The modularity matrix $B = [b_{ij}]_{N \times N} \in \mathbb{R}^{N \times N}$ is constructed in [4], where $b_{ij} = a_{ij} - \frac{k_i k_j}{2m}$. Let the community indicator matrix be $Z = [Z_1, Z_2, \cdots, Z_K] \in \{0, 1\}^{N \times K}$. The modularity can then be represented as:
\[
Q = \frac{1}{2m}\sum_{i=1}^{K} Z_i^T B Z_i = \frac{1}{2m}\mathrm{Tr}\left(Z^T B Z\right).
\]
The optimization problem can be expressed as:
\[
\max_{\substack{Z \in \{0,1\}^{N \times K} \\ \sum_{i=1}^{K} Z_i = \mathbf{1}_N}} \frac{1}{2m}\sum_{i=1}^{K} Z_i^T B Z_i.
\]
This is an NP-hard problem.
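To make the trace identity concrete, the equivalence between the summation form of $Q$ and the trace form $\frac{1}{2m}\mathrm{Tr}(Z^T B Z)$ can be checked numerically on a small graph (the toy graph and labels below are invented for this example, not taken from the paper):

```python
import numpy as np

# Toy undirected graph: two triangles {0,1,2} and {3,4,5} joined by edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
N = 6
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1

k = A.sum(axis=1)                    # node degrees
m = k.sum() / 2                      # total number of edges (= 7 here)
B = A - np.outer(k, k) / (2 * m)     # modularity matrix b_ij = a_ij - k_i k_j / 2m

labels = [0, 0, 0, 1, 1, 1]          # the natural two-community partition
Z = np.eye(2)[labels]                # one-hot community indicator, shape (N, 2)

# Summation form of Q.
Q_sum = sum(B[i, j] for i in range(N) for j in range(N)
            if labels[i] == labels[j]) / (2 * m)
# Trace form of Q.
Q_tr = np.trace(Z.T @ B @ Z) / (2 * m)
assert np.isclose(Q_sum, Q_tr)       # both equal 5/14 ≈ 0.3571 on this graph
```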
Then, we can write down the relaxed version:
\[
\max_{\mathrm{Tr}(Z^T Z) = N} \mathrm{Tr}\left(Z^T B Z\right).
\]
The optimal value of this relaxed problem is determined by the largest eigenvalue of the modularity matrix $B$. By the matrix approximation theorem [12], for any modularity matrix of order $N$ there exists a rank-$r$ approximation matrix $\hat{B}$ (with $r < N$) that closely estimates it. Consequently, the modularity maximization problem can be viewed as the task of finding the optimal low-rank approximation:
\[
\min_{\substack{\hat{B} \in \mathbb{R}^{N \times N} \\ r(\hat{B}) \le r}} \left\| B - \hat{B} \right\|_F = \sqrt{\lambda_{r+1}^2 + \lambda_{r+2}^2 + \cdots + \lambda_N^2},
\]
where the eigenvalues $\lambda_i$ of $B$ are ordered by decreasing magnitude. This minimization problem can be solved by nonlinear methods such as neural networks [4], [42], [28].

3 Preliminaries

3.1 Community Detection Problem in Multi-layer Networks

For a multi-layer network $G = (\mathcal{V}, \mathcal{E})$, $\mathcal{V}$ represents the nodes shared across the $L$ layers and $\mathcal{E} = \{E_1, \cdots, E_L\}$ is the edge set of the $L$ layers. The adjacency tensor is $A = [A_1, \cdots, A_L]$, where $A_l = [a^l_{ij}]_{N \times N}$ is the adjacency matrix of the $l$-th layer. The modularity matrix of the $l$-th layer is $B^{(l)} = \big[a^l_{ij} - \frac{k^l_i k^l_j}{2m_l}\big]$, where $k^l_i$ is the degree of node $i$ in the $l$-th layer and $m_l$ is the number of edges in that layer. A community detection task in multi-layer networks aims to discover node representation vectors that maximize the modularity values of all layers:
\[
\max\ Q(Z) = \left[ \mathrm{Tr}(Z^T B^{(1)} Z),\ \mathrm{Tr}(Z^T B^{(2)} Z),\ \cdots,\ \mathrm{Tr}(Z^T B^{(L)} Z) \right], \quad \text{s.t. } \mathrm{Tr}(Z^T Z) = N. \tag{3.1}
\]
We naturally arrive at using a linear weighting method to transform this multi-objective optimization problem into a single-objective
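The low-rank characterization can be verified numerically. The sketch below uses a random symmetric matrix as a stand-in for $B$ (rather than an actual modularity matrix) and ranks eigenvalues by magnitude, which is the ordering under which the Frobenius identity holds for symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, r = 8, 3

# Random symmetric "modularity-like" matrix.
X = rng.standard_normal((N, N))
B = (X + X.T) / 2

# Eigendecomposition; order eigenvalues by decreasing magnitude.
lam, H = np.linalg.eigh(B)
order = np.argsort(-np.abs(lam))
lam, H = lam[order], H[:, order]

# Best rank-r Frobenius approximation keeps the r largest-magnitude eigenpairs.
B_hat = H[:, :r] @ np.diag(lam[:r]) @ H[:, :r].T

# The error equals sqrt(lambda_{r+1}^2 + ... + lambda_N^2).
err = np.linalg.norm(B - B_hat, "fro")
assert np.isclose(err, np.sqrt((lam[r:] ** 2).sum()))
```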
maximization problem:
\[
\max\ \alpha_1 \mathrm{Tr}(Z^T B^{(1)} Z) + \alpha_2 \mathrm{Tr}(Z^T B^{(2)} Z) + \cdots + \alpha_L \mathrm{Tr}(Z^T B^{(L)} Z), \quad \text{s.t. } \mathrm{Tr}(Z^T Z) = N,\ \alpha_1 + \cdots + \alpha_L = 1,\ \alpha_1 > 0, \cdots, \alpha_L > 0. \tag{3.2}
\]

3.2 Theoretical Analysis

Theorem 1. The maximization problem (3.2) is equivalent to finding the low-rank approximation of the block-diagonal matrix
\[
\Theta = \begin{bmatrix} B^{(1)} & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & B^{(L)} \end{bmatrix}.
\]
The low-rank approximation matrix $\hat{\Theta}$ can be formulated as the product $\hat{\Theta} = \Phi\Phi^T$ of the feature matrix
\[
\Phi = \begin{bmatrix} \Phi_1 & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & \Phi_L \end{bmatrix},
\]
where $\Phi_l \in \mathbb{R}^{N \times r_l}$ is the reconstruction feature matrix of the modularity matrix $B^{(l)}$, namely $\hat{B}^{(l)} = \Phi_l \Phi_l^T$.

Theorem 1 provides the theoretical guarantee for our independent graph embedding-based modularity tensor reconstruction approach (IGE-MTR). In terms of the $NL \times \tau$-dimensional representation matrix, it states that concatenating the reconstruction features of each layer of the modularity tensor $B^{(l)}$ is equivalent to reconstructing the high-dimensional block matrix $\Theta$. This enables (3.2) to be solved by reconstructing each layer of the modularity tensor $B^{(l)}$ and then concatenating all the reconstructed feature matrices.

Theorem 2. The maximization problem (3.2) is equivalent to finding the low-rank approximation of the matrix $M = \alpha_1 B^{(1)} + \alpha_2 B^{(2)} + \cdots + \alpha_L B^{(L)}$. The low-rank approximation matrix $\hat{M}$ can be formulated as the product $\hat{M} = \Gamma\Gamma^T$ of the representation feature matrix $\Gamma = \left[\sqrt{\alpha_1}\Phi_1\ \sqrt{\alpha_2}\Phi_2\ \cdots\ \sqrt{\alpha_L}\Phi_L\right] \in \mathbb{R}^{N \times \tau}$.

Theorem 2 demonstrates that the single feature matrix $\Gamma \in \mathbb{R}^{N \times \tau}$ is capable of reconstructing each layer of the modularity tensor, thereby supporting the community detection algorithm based on fused graph representation features. Since $\sum_l \alpha_l = 1$, the convexity of $\|\cdot\|_F^2$ gives
\[
\left\| M - \hat{M} \right\|_F^2 = \Big\| \sum_{l=1}^{L} \alpha_l \big(B^{(l)} - \hat{M}\big) \Big\|_F^2 \le \sum_{l=1}^{L} \alpha_l \left\| B^{(l)} - \hat{M} \right\|_F^2 \le \sum_{l=1}^{L} \left\| B^{(l)} - \hat{M} \right\|_F^2, \tag{3.3}
\]
where the last step uses $\alpha_l \le 1$; hence minimizing the distance $\| M - \hat{M} \|_F$ can be relaxed into minimizing the sum of the per-layer reconstruction distances $\| B^{(1)} - \hat{M} \|_F + \| B^{(2)} - \hat{M} \|_F + \cdots + \| B^{(L)} - \hat{M} \|_F$.

Proof of Theorem 1. The problem (3.2) is equivalent to:
\[
\max\ \mathrm{Tr}\left( \begin{bmatrix} Z^T & \cdots & Z^T \end{bmatrix} \begin{bmatrix} \alpha_1 B^{(1)} & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & \alpha_L B^{(L)} \end{bmatrix} \begin{bmatrix} Z \\ \vdots \\ Z \end{bmatrix} \right), \quad \text{s.t. } \mathrm{Tr}(Z^T Z) = N,\ \alpha_1 + \cdots + \alpha_L = 1,\ \alpha_1 > 0, \cdots, \alpha_L > 0. \tag{3.4}
\]
The problem (3.4) can also be written as:
\[
\max\ \mathrm{Tr}\left( \begin{bmatrix} \sqrt{\alpha_1} Z^T & \cdots & \sqrt{\alpha_L} Z^T \end{bmatrix} \begin{bmatrix} B^{(1)} & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & B^{(L)} \end{bmatrix} \begin{bmatrix} \sqrt{\alpha_1} Z \\ \vdots \\ \sqrt{\alpha_L} Z \end{bmatrix} \right), \quad \text{s.t. } \mathrm{Tr}(\alpha_1 Z^T Z) + \cdots + \mathrm{Tr}(\alpha_L Z^T Z) = N,\ \alpha_1 > 0, \cdots, \alpha_L > 0. \tag{3.5}
\]
Let
\[
\Theta = \begin{bmatrix} B^{(1)} & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & B^{(L)} \end{bmatrix} \quad \text{and} \quad U = \begin{bmatrix} \sqrt{\alpha_1} Z \\ \vdots \\ \sqrt{\alpha_L} Z \end{bmatrix};
\]
then (3.4) can be formulated as:
\[
\mathop{\arg\max}_{\substack{\mathrm{Tr}(Z^T Z) = N \\ \alpha_1 + \cdots + \alpha_L = 1 \\ \alpha_1 > 0, \cdots, \alpha_L > 0}} \alpha_1 Q^{(1)} + \alpha_2 Q^{(2)} + \cdots + \alpha_L Q^{(L)} = \mathop{\arg\max}_{\mathrm{Tr}(U^T U) = N} \mathrm{Tr}(U^T \Theta U).
\]
Through eigenvalue decomposition, each layer admits $B^{(l)} = \begin{bmatrix} H^{(l)} & P^{(l)} \end{bmatrix} \begin{bmatrix} \Lambda^{(l)}_{r_l} & O \\ O & \Lambda^{(l)}_{N - r_l} \end{bmatrix} \begin{bmatrix} H^{(l)} & P^{(l)} \end{bmatrix}^T$, so $\Theta$ can be written as a product of block matrices with these factors on the diagonal. By Lemma 2, we can find the $(r_1 + \cdots + r_L)$-order low-rank approximation
\[
\hat{\Theta} = \begin{bmatrix} H^{(1)} & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & H^{(L)} \end{bmatrix} \begin{bmatrix} \Lambda^{(1)}_{r_1} & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & \Lambda^{(L)}_{r_L} \end{bmatrix} \begin{bmatrix} H^{(1)} & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & H^{(L)} \end{bmatrix}^T.
\]
Denote $\Lambda_\tau = \mathrm{diag}\big(\Lambda^{(1)}_{r_1}, \cdots, \Lambda^{(L)}_{r_L}\big)$, where $\tau = r_1 + \cdots + r_L$ and $r_l = \mathrm{rank}\big(\Lambda^{(l)}_{r_l}\big)$. Since the eigenvalue diagonal matrix $\Lambda_\tau$ is composed of positive eigenvalues, it can be represented as $\Lambda_\tau = \Sigma_\tau \Sigma_\tau$, where $\Sigma_\tau$ is the block matrix $\Sigma_\tau = \mathrm{diag}\big(\Sigma^{(1)}_{r_1}, \cdots, \Sigma^{(L)}_{r_L}\big)$. Let
\[
\Phi = \begin{bmatrix} H^{(1)} & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & H^{(L)} \end{bmatrix} \Sigma_\tau \in \mathbb{R}^{NL \times \tau};
\]
thus $\hat{\Theta} = \Phi\Phi^T$ and
\[
\Phi = \begin{bmatrix} H^{(1)}\Sigma^{(1)}_{r_1} & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & H^{(L)}\Sigma^{(L)}_{r_L} \end{bmatrix} = \begin{bmatrix} \Phi_1 & \cdots & O \\ \vdots & \ddots & \vdots \\ O & \cdots & \Phi_L \end{bmatrix},
\]
where $\Phi_l = H^{(l)}\Sigma^{(l)}_{r_l} \in \mathbb{R}^{N \times r_l}$ is the representation matrix of $B^{(l)}$, since $\hat{B}^{(l)} = H^{(l)}\Sigma^{(l)}_{r_l}\Sigma^{(l)T}_{r_l}H^{(l)T} = H^{(l)}\Lambda^{(l)}_{r_l}H^{(l)T}$. The multi-objective optimization problem (3.2) can therefore be converted into a low-rank approximation of $\Theta$:
\[
\mathop{\arg\max}_{\substack{\mathrm{Tr}(Z^T Z) = N \\ \alpha_1 + \cdots + \alpha_L = 1 \\ \alpha_1 > 0, \cdots, \alpha_L > 0}} \alpha_1 Q^{(1)} + \cdots + \alpha_L Q^{(L)} = \mathop{\arg\min}_{\substack{\hat{\Theta} \in \mathbb{R}^{NL \times NL} \\ r(\hat{\Theta}) \le \tau}} \big\| \Theta - \hat{\Theta} \big\|_F = \mathop{\arg\min}_{\Phi \in \mathbb{R}^{NL \times \tau}} \big\| \Theta - \Phi\Phi^T \big\|_F.
\]
The proof of Theorem 1 is then completed.

Proof of Theorem 2. From Theorem 1, we know that the matrix $\Theta$ can be approximated by $\hat{\Theta} = \Phi\Phi^T$. Let
\[
M = \alpha_1 B^{(1)} + \alpha_2 B^{(2)} + \cdots + \alpha_L B^{(L)} = \begin{bmatrix} \sqrt{\alpha_1} I_N & \cdots & \sqrt{\alpha_L} I_N \end{bmatrix} \Theta \begin{bmatrix} \sqrt{\alpha_1} I_N \\ \vdots \\ \sqrt{\alpha_L} I_N \end{bmatrix} \in \mathbb{R}^{N \times N}.
\]
The $\tau$-order rank approximation of $M$ is:
\[
\hat{M} = \begin{bmatrix} \sqrt{\alpha_1} I_N & \cdots & \sqrt{\alpha_L} I_N \end{bmatrix} \hat{\Theta} \begin{bmatrix} \sqrt{\alpha_1} I_N \\ \vdots \\ \sqrt{\alpha_L} I_N \end{bmatrix} = \begin{bmatrix} \sqrt{\alpha_1} H^{(1)} & \cdots & \sqrt{\alpha_L} H^{(L)} \end{bmatrix} \Lambda_\tau \begin{bmatrix} \sqrt{\alpha_1} H^{(1)T} \\ \vdots \\ \sqrt{\alpha_L} H^{(L)T} \end{bmatrix}.
\]
Let $\tilde{H}^{(l)} = \sqrt{\alpha_l} H^{(l)}$; then
\[
\hat{M} = \begin{bmatrix} \tilde{H}^{(1)} & \cdots & \tilde{H}^{(L)} \end{bmatrix} \Lambda_\tau \begin{bmatrix} \tilde{H}^{(1)T} \\ \vdots \\ \tilde{H}^{(L)T} \end{bmatrix},
\]
where $\tau = r_1 + \cdots + r_L$ and $r_l = \mathrm{rank}\big(\Lambda^{(l)}_{r_l}\big)$. Since the eigenvalue diagonal matrix $\Lambda_\tau$ is composed of positive eigenvalues, it can be represented as $\Lambda_\tau = \Sigma_\tau \Sigma_\tau$. Let $\Gamma = \begin{bmatrix} \tilde{H}^{(1)} & \cdots & \tilde{H}^{(L)} \end{bmatrix} \Sigma_\tau \in \mathbb{R}^{N \times \tau}$; thus $\hat{M} = \Gamma\Gamma^T$, and the representation matrix $\Gamma$ can be formulated from the $\Phi_l$ as $\Gamma = \left[\sqrt{\alpha_1}\Phi_1\ \sqrt{\alpha_2}\Phi_2\ \cdots\ \sqrt{\alpha_L}\Phi_L\right]$. The multi-objective optimization problem (3.2) can be converted into a low-rank approximation of $\Theta$ or $M$:
\[
\mathop{\arg\max}_{\substack{\mathrm{Tr}(Z^T Z) = N \\ \alpha_1 + \cdots + \alpha_L = 1 \\ \alpha_1 > 0, \cdots, \alpha_L > 0}} \alpha_1 Q^{(1)} + \cdots + \alpha_L Q^{(L)} = \mathop{\arg\min}_{\substack{\hat{M} \in \mathbb{R}^{N \times N} \\ r(\hat{M}) \le \tau}} \big\| M - \hat{M} \big\|_F = \mathop{\arg\min}_{\Gamma \in \mathbb{R}^{N \times \tau}} \big\| M - \Gamma\Gamma^T \big\|_F.
\]
The proof of Theorem 2 is then completed.

4 Proposed Methods

4.1 Independent Graph Embedding Algorithm for Multi-layer Network Community Detection

A graph auto-encoder (GAE) is applied to reconstruct the modularity matrix. In this part, we develop a novel GAE-based multi-layer network community detection algorithm. The graph representation vectors containing community information obtained from the encoder are subsequently applied to reconstruct the modularity tensor.
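Before detailing the encoder, the blockwise reconstruction behind Theorem 1 can be sanity-checked numerically. The sketch below uses positive semidefinite stand-ins for the $B^{(l)}$ so that the retained eigenvalues are positive, as the proof assumes; it is illustrative only, not the paper's implementation.

```python
import numpy as np

def block_diag(mats):
    """Assemble a block-diagonal matrix from a list of 2-D arrays."""
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols))
    r0 = c0 = 0
    for m in mats:
        out[r0:r0 + m.shape[0], c0:c0 + m.shape[1]] = m
        r0 += m.shape[0]
        c0 += m.shape[1]
    return out

rng = np.random.default_rng(1)
N, L, r = 6, 2, 2

# Rank-r positive semidefinite stand-ins for the per-layer matrices B^(l).
Bs = []
for _ in range(L):
    X = rng.standard_normal((N, r))
    Bs.append(X @ X.T)

# Per-layer features Phi_l = H^(l) Sigma^(l)_{r_l} from the top-r eigenpairs.
Phis = []
for B in Bs:
    lam, H = np.linalg.eigh(B)
    idx = np.argsort(-lam)[:r]               # r largest (positive) eigenvalues
    Phis.append(H[:, idx] * np.sqrt(lam[idx]))

Theta = block_diag(Bs)
Phi = block_diag(Phis)                       # block-diagonal feature matrix
Theta_hat = Phi @ Phi.T

# Each diagonal block of Theta_hat is the layer-wise B_hat^(l) = Phi_l Phi_l^T;
# since each B^(l) has rank r here, the rank-(Lr) reconstruction is exact.
for l, P in enumerate(Phis):
    blk = Theta_hat[l * N:(l + 1) * N, l * N:(l + 1) * N]
    assert np.allclose(blk, P @ P.T)
assert np.allclose(Theta, Theta_hat)
```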
Based on the modularity tensor optimization problem (3.1), we first develop the independent graph embedding-based modularity tensor reconstruction algorithm (IGE-MTR). The modularity tensor $B = \left[B^{(1)}, \cdots, B^{(L)}\right]$, composed of $L$ symmetric modularity matrices, and the multi-layer adjacency tensor $A = \left[A_1, \cdots, A_L\right]$ are input into an $L$-branch graph convolutional encoder. The $m$-th graph convolution of the first module is formulated as $\mathrm{GCN}^1_m\big(A_m, B^{(m)}\big) = \sigma\big(A_m B^{(m)} W^m_1\big)$, $1 \le m \le L$, where $\sigma(\cdot)$ is the activation function. Since modularity matrices are always sparse, we choose $\tanh(\cdot)$ as the activation function.

The encoder consists of two sets of graph convolutional layers, each comprising $L$ graph convolution branches that perform feature aggregation and transformation on the input matrices $B^{(m)}$ and $A_m$ and output $L$ graph representation matrices. A feature fusion operation is performed before the second graph convolutional module: the outputs of the first module, $h^1_0 = \tanh\big(A_1 B^{(1)} W^1_0\big), \cdots, h^L_0 = \tanh\big(A_L B^{(L)} W^L_0\big)$, are concatenated as $h = \mathrm{concat}\big(h^1_0, \cdots, h^L_0\big)$. The aggregated feature $h$ is then input into the second set of graph convolutions, $\mathrm{GCN}^2_m(A_m, h) = \sigma\big(A_m h W^m_2\big)$. Figure 4 exhibits our proposed IGE-MTR algorithm.

Figure 4: Multi-layer network community detection (IGE-MTR)

The modularity reconstruction effect is evaluated by the sum of Frobenius norms. From Theorem 1,
\[
\big\| \Theta - \hat{\Theta} \big\|_F^2 = \mathrm{Tr}\left( \sum_{l=1}^{L} \big(B^{(l)} - \hat{B}^{(l)}\big)\big(B^{(l)} - \hat{B}^{(l)}\big)^T \right) = \sum_{l=1}^{L} \mathrm{Tr}\left( \big(B^{(l)} - \hat{B}^{(l)}\big)\big(B^{(l)} - \hat{B}^{(l)}\big)^T \right) = \sum_{l=1}^{L} \big\| B^{(l)} - \hat{B}^{(l)} \big\|_F^2,
\]
so we set the loss function
\[
\mathcal{L}\big(B, \hat{B}\big) = \sum_{l=1}^{L} \big\| \hat{B}^{(l)} - B^{(l)} \big\|_F^2 = \sum_{l=1}^{L}\sum_{i=1}^{N}\sum_{j=1}^{N} \big(\hat{b}^{(l)}_{ij} - b^{(l)}_{ij}\big)^2.
\]

4.2 Mixed Graph Embedding Algorithm for Multi-layer Network Community Detection

We propose the mixed graph embedding-based modularity reconstruction algorithm (MGE-MTR), which incorporates a feature fusion operation into the encoder output, in contrast with the independent feature approach of the IGE-MTR algorithm described in Subsection 4.1. To be precise, we concatenate the outputs of the second graph convolutional module, $h^{(1)} = \mathrm{GCN}^2_1(A_1, h), \cdots, h^{(L)} = \mathrm{GCN}^2_L(A_L, h)$, into an $N \times (rL)$-dimensional graph representation matrix: $H = \mathrm{concat}\big(h^{(1)}, h^{(2)}, \cdots, h^{(L)}\big)$.

Theorem 2 provides the theoretical guarantee for converting problem (3.2) into finding the optimal reconstruction of the matrix $M = \alpha_1 B^{(1)} + \alpha_2 B^{(2)} + \cdots + \alpha_L B^{(L)}$. We reconstruct each layer of the modularity tensor through the single graph representation matrix $H$. Similarly, we evaluate the effectiveness of the modularity reconstruction through the Frobenius-norm distance between the matrix $M$ and the reconstruction matrix $\hat{M}$: $\big\| M - \sigma\big(H H^T\big) \big\|_F^2$. By inequality (3.3), minimizing the reconstruction distance $\big\| M - \sigma\big(H H^T\big) \big\|_F^2$ can be relaxed into minimizing the sum of the per-layer Frobenius distances $\big\| B^{(m)} - \sigma\big(H H^T\big) \big\|_F^2$, for all $m$, $1 \le m \le L$.

Figure 5: Multi-layer network community detection (MGE-MTR)

Figure 5 depicts the framework of mixed graph embedding-based multi-layer network community detection. Accordingly, the loss function is set to
\[
\mathcal{L}\big(B, \hat{B}\big) = \sum_{l=1}^{L} \big\| \hat{B}^{(l)} - B^{(l)} \big\|_F^2 = \sum_{l=1}^{L}\sum_{i=1}^{N}\sum_{j=1}^{N} \big(\hat{b}^{(l)}_{ij} - b^{(l)}_{ij}\big)^2.
\]

4.3 Intervention Strategy Based on User Influence

According to the Matthew effect in social networks [38], highly influential users play a dominant role in shaping the polarization of group opinions. If their positions are effectively leveraged, they can influence the attitudes of others, thereby achieving the desired intervention outcomes. In this part, we propose a novel intervention strategy that can be programmatically executed to target communities trapped in information cocoons.

Figure 6: Intervention strategy

Figure 6 demonstrates the process of the intervention strategy.
First, several highly influential vertices are picked out of a relatively closed sub-network via the influence factor computation method in Appendix C. We then try to reduce the exposure of high-impact viewpoints and simultaneously replace those leaders' attitudes with opposite or neutral viewpoints in subsequent recommendations.

The intervention strategy can be simulated programmatically. Suppose that certain users maintain firm attitudes that are resistant to external influence. Consequently, we set a susceptibility parameter θ representing the proportion of users who find it difficult to accept heterogeneous viewpoints; this group of users accepts other opinions only with a small probability. In the simulation process, what matters is users' acceptance of new viewpoints, rather than the indiscriminate propagation of viewpoints in the network. User correlations can influence sensitivity, as it is widely acknowledged that strongly correlated users are more likely to exhibit similar sensitivity levels.

4.4 Simulation System for Double-layer Networks

This section introduces a simulation system for viewpoint propagation tailored explicitly to a double-layer network scenario. The network comprises two distinct layers: one dedicated to propagating viewpoints and another governing susceptibility status.

The bottom layer of Figure 8 is constructed from real interactions and responses, and we assume that viewpoints propagate in this layer. The nodes in this layer can be classified into three statuses: positive, neutral, and negative. Under normal circumstances,
two users with similar characteristics are more likely to exhibit comparable acceptance levels. Therefore, we categorize the nodes in the upper layer of Figure 8 into susceptible and insusceptible statuses. These two susceptibility states are dynamically adjusted through the feature similarity network. We assume that users in susceptible states are more likely to be influenced by the viewpoints of their neighboring nodes and to shift their opinions; in contrast, users with neutral views are more likely to experience changes in susceptibility. Motivated by the microscopic Markov chain approach used to study green behavior propagation among different groups [57], we establish a Markov chain-based state transition simulation system.

Figure 7: State transition model

Figure 7 exhibits how the states of nodes transition in the two different layers. The left sub-figure depicts the transition probabilities among the different attitude states of nodes. Parameter α represents the propagation rate, the probability that users with different viewpoints spread their views to others. Parameter β represents the acceptance rate, the probability that users alter their views under the influence of other users. Parameter $R_1$ represents the probability that users in the neutral status change their attitudes; we recommend setting the value of $R_1$ to be less than 0.5 to ensure the validity of the probability constraints. We define λ as the propagation adjustment parameter, whose value is affected by the propagation rate α, the inter-layer influence parameter $\gamma_1$, and the number of heterogeneous perspectives among adjacent nodes:
\[
\lambda^i_P(\alpha) = 1 - (1 - \gamma_1\alpha)^{N_{\mathrm{positive}}} = 1 - \prod_j \big(1 - \gamma_1\alpha\, a_{ij}\, \mathbf{1}\{\mathrm{attitude}[j] = \mathrm{positive}\}\big),
\]
\[
\lambda^i_N(\alpha) = 1 - (1 - \gamma_1\alpha)^{N_{\mathrm{negative}}} = 1 - \prod_j \big(1 - \gamma_1\alpha\, a_{ij}\, \mathbf{1}\{\mathrm{attitude}[j] = \mathrm{negative}\}\big).
\]
Nodes with susceptible status in the similarity network are more susceptible to external opinions; for them, the inter-layer interaction influence coefficient satisfies $\gamma_1 > 1$.
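The neighbor-dependent infection-style probabilities $\lambda^i_P$ and $\lambda^i_N$ above can be sketched as follows (an illustrative implementation; the adjacency matrix and attitude labels are invented for the example):

```python
import numpy as np

def transition_prob(i, A, attitudes, alpha, gamma1, target):
    """lambda^i = 1 - prod_j (1 - gamma1 * alpha * a_ij * 1{attitude[j] == target})."""
    prob_keep = 1.0
    for j in range(A.shape[0]):
        if A[i, j] == 1 and attitudes[j] == target:
            prob_keep *= 1.0 - gamma1 * alpha
    return 1.0 - prob_keep

# Tiny example: node 0 has neighbors 1, 2, 3.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
attitudes = ["neutral", "positive", "positive", "negative"]
lam_P = transition_prob(0, A, attitudes, alpha=0.2, gamma1=1.5, target="positive")
# Two positive neighbors, gamma1 * alpha = 0.3: 1 - (1 - 0.3)^2 = 0.51
assert abs(lam_P - 0.51) < 1e-12
```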
When the status of a node in the similarity network is insusceptible, we choose the inter-layer interaction influence coefficient $\gamma_1 = 1$. In general, the range of the inter-layer interaction influence coefficient is $\gamma_1 \in \left[1, \frac{1}{\alpha}\right]$.

The right sub-figure is the state transition diagram in the user similarity network. Parameter $R_2$ represents the recovery rate, the probability of states transferring from susceptible to insusceptible. Parameter sRate represents the probability of sensitivity changing. Parameter diffRate is the transmission rate, the probability that susceptible nodes spread their status. $\lambda_S(\mathrm{diffRate})$ represents the probability of insusceptible nodes changing their states, whose value is influenced by the transmission rate, the inter-layer interaction coefficient $\gamma_2$, and the number of susceptible nodes in the neighborhood:
\[
\lambda_S(\mathrm{diffRate}) = 1 - (1 - \gamma_2\,\mathrm{diffRate})^{N_{\mathrm{susceptible}}} = 1 - \prod_j \big(1 - \gamma_2 \cdot \mathrm{diffRate} \cdot a_{ij}\, \mathbf{1}\{\mathrm{state}[j] = \mathrm{susceptible}\}\big).
\]
When a node's state in the correlation layer is neutral, its attitude tends to polarize more, so its sensitivity is more likely to change; in this case the inter-layer interaction influence coefficient satisfies $\gamma_2 > 1$. If the state in the correlation layer is not neutral, we take $\gamma_2 = 1$. In general, the range of the inter-layer interaction coefficient is $\gamma_2 \in \left[1, \frac{1}{\mathrm{diffRate}}\right]$.

Algorithm 1 Framework of Intervention Strategy
1: Set the network status parameters: intervention parameter η, susceptibility parameter θ. The first-layer parameters: propagation rate α, acceptance rate β, transition rate $R_1$, inter-layer interaction coefficient $\gamma_1$. The second
layer parameters: transmission rate diffRate, sensitivity alteration rate sRate, recovery rate $R_2$, inter-layer interaction coefficient $\gamma_2$.
2: Input: two-layer adjacency tensor A, network scale N, attitude labels.
3: Initialize the network states: toprank_index = EC[1 : ηN], l1states(0) = initstate(attitude[toprank_index]), l2states(0) = initstate(θ).
4: Iteration: while t < epoch:
l1states(t+1) = diffusion1(A1, l1states(t), l2states(t), α, β, $\gamma_1$),
l2states(t+1) = diffusion2(A2, l2states(t), l1states(t), diffRate, sRate, $\gamma_2$),
t = t + 1.
Return l1states(t+1), l2states(t+1).

5 Numerical Experiments

5.1 Evaluation Index

•Normalized mutual information (NMI)

To quantify the similarity between actual community affiliations and those identified by algorithms, NMI was introduced for graph community evaluation in 2005 [11]:
\[
NMI(Y, C) = \frac{2 I(Y; C)}{H(Y) + H(C)},
\]
where $Y$ represents the prior class labels of the nodes and $C$ represents the labels produced by the algorithm. $H(\cdot)$ denotes the entropy $H(X) = -\sum_{i=1}^{|X|} P(i)\log P(i)$, and $I(Y; C)$ is the mutual information $I(Y; C) = H(Y) - H(Y \mid C)$. In discrete form, for two different community partitions, NMI can be expressed as follows:
\[
NMI = \frac{-2\sum_{u=1}^{N_A}\sum_{v=1}^{N_B} M_{uv}\log\left(\frac{n M_{uv}}{M_{u\cdot} M_{\cdot v}}\right)}{\sum_{u=1}^{N_A} M_{u\cdot}\log\frac{M_{u\cdot}}{n} + \sum_{v=1}^{N_B} M_{\cdot v}\log\frac{M_{\cdot v}}{n}}, \tag{5.1}
\]
where $n$ is the number of graph nodes, $M_{uv}$ is an element of the confusion matrix $M$, $N_A$ is the number of communities in partition $A$, and $N_B$ is the number of communities in partition $B$. Besides, $M_{u\cdot}$ is the sum of the $u$-th row of the confusion matrix, and $M_{\cdot v}$ is the sum of the $v$-th column. The larger the NMI value, the greater the similarity between the two community structures; if NMI reaches 1, the community partitions are identical.

•Modularity degree $Q$ for multi-layer networks

Due to the absence of prior community labels in real-world network data analysis, we cannot directly calculate classification accuracy, and thus use the modularity degree index $Q$ to evaluate the quality of network partitioning.
The value of the modularity degree index $Q$ always lies in $\left[-\frac{1}{2}, 1\right]$; if the partition is effective, the modularity value approaches 1. The definition for multi-layer networks with inter-layer coupling was proposed in 2010 [27]:
\[
Q = \frac{1}{2M}\sum_{ijsr}\left[\left(A^{(s)}_{ij} - \gamma_s\frac{k^{(s)}_i k^{(s)}_j}{2L^{(s)}}\right)\delta_{sr} + \delta_{ij}\zeta_{jsr}\right]\delta(z_{is}, z_{jr}), \tag{5.2}
\]
where $A^{(s)}$ represents the adjacency matrix of the $s$-th layer and $\zeta_{isr}$ is the indicator of inter-layer links: $\zeta_{isr} = 1$ means the $i$-th node has an edge between the $s$-th layer and the $r$-th layer, and its value is zero if the inter-layer coupling edge does not exist. $k^{(s)}_i = \sum_j A^{(s)}_{ij}$ represents the total number of edges connected to the $i$-th node in the $s$-th layer. $M$ denotes the total number of layers. $\zeta_{is} = \sum_r \zeta_{isr}$ represents the total number of links between the $i$-th vertex in the $s$-th layer and the $i$-th vertex in the other layers. $\gamma_s$ is the tuning parameter controlling the expected modularity degree. $\delta(z_{is}, z_{jr})$ is the delta indicator that takes the value 1 if $z_{is} = z_{jr}$, and 0 otherwise; if $\delta(z_{is}, z_{jr}) = 1$, vertices $i$ and $j$ belong to the same community, and otherwise they belong to different communities.

Without considering inter-layer coupling connections, [37] extends the Newman-Girvan (NG) modularity degree through layer-wise normalization:
\[
Q_{NM} = \frac{1}{M}\sum_s\sum_{i,j}\frac{1}{2L^{(s)}}\left[A^{(s)}_{ij} - \frac{k^{(s)}_i k^{(s)}_j}{2L^{(s)}}\right]\delta(z_i, z_j). \tag{5.3}
\]
It is named the multi-normalized average (MNavrg) modularity.
Meanwhile, another modularity degree, the shared degree modularity, is proposed in [37], using the average frequency to estimate the links between entities $i$ and $j$ in the $s$-th layer:
\[
Q_{SD} = \frac{1}{M}\sum_s\sum_{i,j}\frac{1}{2L^{(s)}}\left[A^{(s)}_{ij} - \frac{L^{(s)}\sum_s k^{(s)}_i \sum_s k^{(s)}_j}{2L^2}\right]\delta(z_i, z_j). \tag{5.4}
\]

•Similarity index based on KL divergence

All the above evaluation indices are constructed from the adjacency matrix, which considers only the network topology. In real-world scenarios, the similarity between node features is often more crucial than the density of topological connections when evaluating the effectiveness of community partitioning. Therefore, we introduce a new evaluation metric based on KL divergence.

KL divergence is a metric that evaluates the similarity between two probability distributions. The discrete form of KL divergence is
\[
KL[P(X)\,\|\,Q(X)] = \mathbb{E}_{X \sim P(x)}\left[\log\frac{P(x)}{Q(x)}\right] = \sum_{i=1}^{N} P_i \log\frac{P_i}{Q_i}.
\]
In the community detection task, we aim to ensure that users within the same community are more similar while users in different communities exhibit lower similarity. The similarity between the $i$-th vertex and the $j$-th vertex based on KL divergence can be expressed as follows:
\[
sim_{ij} = KL(H_i\,\|\,H_j) = \sum_{k=1}^{F} h^i_k \log\left(\frac{h^i_k + 1}{h^j_k + 1}\right),
\]
where $H_i = \left(h^i_1, \cdots, h^i_k, \cdots, h^i_F\right)^T$ and $H_j = \left(h^j_1, \cdots, h^j_k, \cdots, h^j_F\right)^T$ denote the feature vectors of the $i$-th and $j$-th user vertices, respectively, and $F$ is the length of the feature vector. Traversing all nodes within the same community $C_k$, the average pairwise similarity among users can be regarded as the similarity index of community $C_k$:
\[
similarity(C_k) = \frac{1}{N_k}\sum_{i=1}^{N_k}\frac{\sum_{j=1}^{N_k} sim_{ij}}{N_k} = \frac{1}{N_k^2}\sum_{i=1}^{N_k}\sum_{j=1}^{N_k} KL(H_i\,\|\,H_j),
\]
where $N_k$ is the number of nodes in community $C_k$.
The similarity of the entire network under a partition into $K$ communities $\{C_1, C_2, \cdots, C_K\}$ is the average of the community similarity indices $similarity(C_k)$:
\[
KL_{similarity\ index}(C_1, \cdots, C_K) = \frac{1}{K}\sum_{k=1}^{K} similarity(C_k) = \frac{1}{K}\sum_{k=1}^{K}\frac{1}{N_k^2}\sum_{i=1}^{N_k}\sum_{j=1}^{N_k} KL(H_i\,\|\,H_j). \tag{5.5}
\]
This evaluation index measures the discrepancy between the distributions of users' features. If the features of different users have a significant gap, the distance between two samples may be large, and the value of the KL divergence may be large. Thus, the more similar the users' features within one community, the smaller the value of $similarity(C_k)$ and of the corresponding whole-partition evaluation, which means the partition is effective.

•Similarity index based on JS divergence

Similar to KL divergence, JS divergence measures the similarity of two probability distributions; unlike KL divergence, however, JS divergence is symmetric:
\[
JS[P(X)\,\|\,Q(X)] = \frac{1}{2}KL[P(X)\,\|\,M(X)] + \frac{1}{2}KL[Q(X)\,\|\,M(X)],
\]
where $M(X) = \frac{P(X) + Q(X)}{2}$. The discrete version of $M(X)$ is $M_{ij} = \frac{H_i + H_j}{2} = \frac{1}{2}\left(h^i_1 + h^j_1, \cdots, h^i_k + h^j_k, \cdots, h^i_F + h^j_F\right)^T$. The similarity index based on JS divergence is:
\[
JS_{similarity\ index}(C_1, \cdots, C_K) = \frac{1}{2K}\sum_{k=1}^{K}\frac{1}{N_k^2}\sum_{i=1}^{N_k}\sum_{j=1}^{N_k}\big(KL(H_i\,\|\,M_{ij}) + KL(H_j\,\|\,M_{ij})\big) = \frac{1}{2K}\sum_{k=1}^{K}\frac{1}{N_k^2}\sum_{i=1}^{N_k}\sum_{j=1}^{N_k}\left(KL\Big(H_i\,\Big\|\,\frac{H_i + H_j}{2}\Big) + KL\Big(H_j\,\Big\|\,\frac{H_i + H_j}{2}\Big)\right), \tag{5.6}
\]
where $KL(H_i\,\|\,M_{ij}) = \sum_{k=1}^{F} h^i_k \log\left(\frac{2(h^i_k + 1)}{h^i_k + h^j_k + 2}\right)$. Likewise, if the community partition is effective, the features of users within the same community tend to be similar, resulting in a lower KL divergence $KL(H_i\,\|\,M_{ij})$; consequently, the JS similarity index
https://arxiv.org/abs/2504.21357v3
index (C1,···, Ck) also exhibits a lower value. 5.2 Dataset In real-world scenarios, most of the datasets lack priori labels. We evaluate our algorithm re- garding prediction accuracy and topological structure in this part. To assess performance, we compare our algorithms with the tucker decomposition with integrated SVD transformation (TWIST) algorithm [19] using citation datasets (Cora and CiteSeer) and simulated datasets with prior labels. The details of the TWIST algorithm can be found in Appendix A. Citation datasets consist of authors and their citation relationships. The Cora dataset con- sists of 2,708 machine learning-related papers categorized into 7 classes. In numerical analysis, we separate this dataset into two parts: gathering reinforcement learning, rule learning, and theory as dataset Cora1 with 724 papers in it and collecting probabilistic method, theory, case-based, and genetic algorithms with 1493 papers in it as the Cora2 dataset. For the Citeer dataset, we select artificial intelligence, machine learning, and agents, three categories with a total number of 1203 as dataset Citeseer1. 21 We construct user relationship-similarity two layer networks through citation network dataset. The first layer is constructed via the citation relationship two authors have, and then there exists one edge between the two authors. We consider the word vector in the content file as a feature vector for constructing a second-layer network-similarity network. The cosine value between two feature vectors reflects the node’s similarity. The probability of the existence of an edge between two nodes is P(aij= 1|zi,zj) =cos⟨zi,zj⟩=zi·zj |zi|·|zj|. Since the comparison algorithm TWIST has a significant impact on large-sample classi- fication, we add small-size simulation network samples for testing. The simulation datasets are generated by multilayer mixture stochastic block model [19]. 
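Sampling the similarity layer from cosine values can be sketched as follows. This is a minimal sketch assuming features are rows of a NumPy array; clipping negative cosines to probability 0 and the fixed seed are our own assumptions, since the text does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative fixed seed

def similarity_layer(Z):
    # Z: (n, F) feature matrix; edge probability = cosine similarity of feature rows
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    U = Z / norms                          # unit-norm rows
    P = np.clip(U @ U.T, 0.0, 1.0)         # cosine values, negatives clipped to 0
    np.fill_diagonal(P, 0.0)               # no self-loops
    A = rng.random(P.shape) < P            # Bernoulli edge sampling
    A = np.triu(A, 1)                      # keep one triangle, then symmetrize
    return (A | A.T).astype(int)
```

With identical feature rows the cosine is 1, so every off-diagonal edge is present; orthogonal rows yield an empty graph.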
The parameters are set as follows: the number of layers is 3, the number of communities is 2, the average degree of each single layer is approximately 10, and the number of nodes is 300 and 400.
5.3 Experimental Analysis
We apply three algorithms to six datasets for testing. Through NMI and the multi-layer network modularity indices ($Q_{NM}$, $Q_{SD}$), we evaluate the three algorithms from two aspects: prediction accuracy and network topological structure. The code for this section is available.1
1 https://github.com/ysw-git123/Multi-layer network community detection algorithm.git
Table 1: Numerical Experiment Results
Dataset                      | Index | Algorithm 1 (TWIST) | Algorithm 2 (IGE-MTR) | Algorithm 3 (MGE-MTR)
Cora1                        | QNM   | 0.1508 | 0.2603 | 0.3104
                             | QSD   | 0.1706 | 0.2344 | 0.3062
                             | NMI   | 0.0294 | 0.3237 | 0.3846
Cora2                        | QNM   | 0.2603 | 0.2938 | 0.4011
                             | QSD   | 0.2630 | 0.2905 | 0.3987
                             | NMI   | 0.2876 | 0.3012 | 0.5676
Citeseer1                    | QNM   | 0.2022 | 0.1697 | 0.2663
                             | QSD   | 0.2027 | 0.1693 | 0.2581
                             | NMI   | 0.0294 | 0.0141 | 0.1953
Simulation (300 vertices)    | QNM   | 0.0991 | 0.1883 | 0.1919
                             | QSD   | 0.4087 | 0.4715 | 0.4741
                             | NMI   | 0.3130 | 0.9288 | 1.0000
Simulation (400 vertices)    | QNM   | 0.1043 | 0.1089 | 0.1952
                             | QSD   | 0.4186 | 0.3868 | 0.4732
                             | NMI   | 0.2632 | 0.0058 | 1.0000
Simulation (500 vertices)    | QNM   | 0.1962 | 0.1941 | 0.1962
                             | QSD   | 0.4757 | 0.4739 | 0.4757
                             | NMI   | 1.0000 | 0.9530 | 1.0000
Table 1 illustrates that our novel mixture graph embedding-based multi-layer community detection algorithm (MGE-MTR) significantly outperforms the other algorithms in community detection tasks. Both the label prediction accuracy and the network topological partition effect improve over previous methods. Meanwhile, our novel MGE-MTR multi-layer community detection algorithm overcomes the drawback of low classification accuracy
in small-scale networks.
6 Real Data Analysis
In the process of topic dissemination on social media, users tend to seek out viewpoints that support their own opinions. Under the influence of recommendation systems, new users can swiftly find ideas they agree with and join the corresponding groups. A topic debate group gradually evolves toward one of two conditions. In the first, viewpoints gradually split into two opposite versions, and the users holding these opinions steadily separate into different camps, forming multiple information cocoons. In the second, multiple perspectives exist initially, but as the topic becomes hot, one viewpoint becomes dominant and submerges the others, and users holding distinctive opinions gradually reach a consensus. In some cases such a consensus can be positive, or even extremely one-sided, ultimately resulting in the formation of an entire information cocoon.
In topic comment networks, users' emotional tendencies have specific associations with community structures. However, not all users in one relatively closed social network community necessarily hold the same attitude, which means not all sub-networks experience the phenomenon of information cocoons. Sometimes different opinions coexist in one sub-network, which then cannot be said to be trapped in an information cocoon, and there is no need to exert intervention measures on such sub-networks. Our task is to explore the sub-networks in which information cocoons might occur. Users with frequent interactions and connections tend to be divided into the same community, and individuals within the same community are prone to being trapped in information cocoons. In this section, we apply our novel community detection algorithms to explore potential information cocoons in social networks.
The code for the real data analysis and Algorithm 1 is available.2
2 https://github.com/ysw-git123/Multi-layer network community detection algorithm.git
6.1 Data Pre-Processing
We select a fiercely debated topic on social media platforms to investigate information cocoons. In our research, we selected comments about the movie "Moon Man" and collected some of the topics discussed about this movie from the social platform Weibo. The fields collected from the website include username, reply recipient, comment location, user level, and comment content.
In order to maintain a single channel of information acquisition and minimize the influence of divergent perspectives from other platforms, we focus on a short time window of user comments on the Weibo platform. It is assumed that, within this period, users obtain information exclusively from Weibo. The selected time frame is from 16:00 to 21:00 on April 10th, 2022. During dataset preprocessing, we filtered out comments containing fewer than eight words and those consisting solely of emojis or images, and removed samples from low-activity users who participated in the discussion. After preprocessing, a total of 775 comments were retained for analysis.
We use the pre-trained model in the NLTK package to classify sentiment and attitude tendencies in the comments, followed by manual adjustment of the outputs for improved accuracy. In the final filtered dataset, 417 comments hold negative attitudes toward the movie, 232 comments maintain supportive attitudes, and 128 comments have neutral attitudes about the movie's quality,
which we classify as neutral.
Figure 8: Multi-layer network
In Figure 8, the bottom layer is constructed from reply connections, and the top layer is generated from user similarity. The nodes represent users, the inter-layer dashed lines denote node alignment, and solid lines within a layer denote real connections. The network constructed from comment-and-response relationships shows an obvious cluster distribution: most interactions occur within clusters, with less interaction between clusters. Meanwhile, there is a small cluster of vertices with some connections to different clusters, and a small number of nodes disconnected from the main network. The second layer is generated from similarity probabilities, reflecting the impact of the similarity-based recommendation system and other users' expressions. The similarity is computed using the cosine similarity of the weighted word-frequency feature vectors; if the similarity value is high, an edge between two vertices is more likely to exist. The two-layer network reflects the connections of users in different dimensions, which is suitable for exploring information that is sparse in a single layer.
6.2 Partition Communities
The exploration of information cocoons aims to partition relatively independent and closed sub-networks without dense connections between them. Real-world problems usually lack prior labels, so we need to determine the optimal number of communities. Here, we leverage our proposed MGE-MTR algorithm and find the number of communities that optimizes the modularity Q value. Traversing the numbers 2 to 16, Figure 9 exhibits how the modularity Q value varies with the number of communities.
Figure 9: Q value variation with community count.
Figure 10: t-SNE dimensionality reduction.
t-SNE is a widely used method that avoids the tendency of feature vectors to collapse together while maintaining the distances between high-dimensional features.
Here, we use the t-SNE method to project the high-dimensional output vectors into two-dimensional space; the visualization results are exhibited in Figure 10, with the number of communities ranging from 2 to 10.
From Figure 9, the optimal partition of the topological structure is achieved when the number of communities is set to three. Additionally, the cluster feature map in Figure 10 vividly depicts the projected features with class labels. When the community number is set to three, the features in the partition remain a certain distance apart without apparent overlap.
Figure 11: Community partition
Figure 11 demonstrates the distribution of the three communities, from which we know that users belonging to the same community are more associated, in terms of similarity or interaction, than those from different communities. The upper-layer network reveals latent associations between low-activity nodes and less interactive user groups within the bottom layer. To verify the effectiveness of our double-layer network modeling approach, we compare the similarity-based double-layer network with a knn-based double-layer network and a single-layer network in terms of topological structure and partition similarity. Table 2 illustrates that our similarity-based network modeling approach is the optimal choice.
Table 2: Comparison of Network Modeling Methods
Network                                | Algorithm | QNM    | QSD    | KLsimilarity | JSsimilarity
Similarity-based double-layer network  | IGE-MTR   | 0.1623 | 0.1728 | 0.0775       | 0.0758
                                       | MGE-MTR   | 0.2586 | 0.2383 | 0.0955       | 0.0955
                                       | TWIST     | 0.1572 | 0.1834 | 0.1184       | 0.1064
knn-based double-layer network (k=6)   | IGE-MTR   | 0.1107 | 0.1234 | 0.1181       | 0.1058
                                       | MGE-MTR   | 0.1902 | 0.2217 | 0.1192       | 0.1071
                                       | TWIST     | 0.0789 | 0.0551 | 0.1037       | 0.0949
Single-layer network                   | MMR       | 0.1650 | 0.1625 | 0.1121       | 0.1054
Next, we need to determine which community is trapped in an information cocoon, analyzing the three divided communities individually. We compute the percentage of attitude labels in each of the three communities. In the first community, the negative attitude occupies a dominant position, taking up 77.63 percent of all users. In the second community, each of the three attitudes accounts for about one-third of the total. In the final community, positive comments comprise approximately two-thirds of all comments. Consequently, the first and last communities are likely to be trapped in an information cocoon.
To confirm this, take the first community as an example: we select the top 15 most influential users. The influence factors are calculated by the eigenvector-centrality-based method [46], which assumes that the importance of a vertex in a complex network relies on both itself and its neighbor vertices. If a vertex is connected to a highly influential node, this vertex may obtain high importance. Assume the transition matrix is $M=D^{-1}A$, where $A=(a_{ij})_{N\times N}$ is the adjacency matrix of the graph, $D=\mathrm{diag}(d_1,d_2,\dots,d_N)$ is the degree matrix, and $d_i=\sum_{j=1}^{N}a_{ij}$ is the degree of the $i$-th vertex. The eigenvector centrality of a vertex is composed of the eigenvector centrality of its neighbors and its influence on those neighbors, which can be formulated as
$$EC'(i)=\lambda\sum_{j=1}^{N}m_{ij}\,EC(j)+(1-\lambda)\,EC(i),$$
whose vector iteration version is
$$EC^{(\alpha+1)}=\lambda M\cdot EC^{(\alpha)}+(1-\lambda)\,EC^{(\alpha)}.$$
The explicit influence factor calculation process is demonstrated in Appendix C. Figure 12 depicts the top 15 most influential users in the first community. As shown in Figure 12, the main attitude among these highly influential users is negative, accounting for over 93 percent.
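The iteration above can be sketched in NumPy. One assumption to flag: with $M=D^{-1}A$ row-stochastic, the iteration as literally written converges toward a uniform vector, so this sketch applies $M$ on the transposed (PageRank-style) side; $\lambda$ and the iteration count are also illustrative choices, not values from the paper.

```python
import numpy as np

def influence_scores(A, lam=0.85, iters=200):
    # Iterates EC <- lam * M^T @ EC + (1 - lam) * EC with M = D^{-1} A.
    # Using M^T (PageRank-style) is an assumption; lam and iters are illustrative.
    d = A.sum(axis=1)
    M = A / np.where(d > 0, d, 1.0)[:, None]   # row-normalized transition matrix
    ec = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        ec = lam * (M.T @ ec) + (1.0 - lam) * ec
        ec /= ec.sum()                          # renormalize each step
    return ec

def top_influencers(A, k=15):
    # indices of the k highest-scoring vertices (the top 15 are shown in Figure 12)
    return np.argsort(influence_scores(A))[::-1][:k]
```

On a star graph the center accumulates the highest score, matching the intuition that a vertex connected to many others obtains high importance.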
Figure 12: Influence scores of highly influential vertices
6.3 Simulation of Viewpoint Dissemination
Based on the two-layer network established in Subsection 6.1, this part tests the viewpoint propagation simulation system proposed in Subsection 4.4. The multi-layer network comprises a viewpoint propagation layer and a susceptibility effect layer.
For the simulation, we set the susceptible parameter θ = 0.1. First, we test the stability of our simulation system. From 5 percent to 25 percent, we select five groups of intervention ratios at 5 percent intervals. The aim is to examine whether the ratio of each state in the two layers achieves stability after some time of status propagation. The parameters of the simulation system are set as follows:
Table 3: Numerical Experiment Parameter Settings
User Relationship Layer                 | User Similarity Layer
α (propagation rate) = 0.3              | diffRate (propagation rate) = 0.3
β (acceptance rate) = 0.2               | sRate (susceptible rate) = 0.2
R1 (transition rate) = 0.3              | R2 (recover rate) = 0.2
γ1 (inter-layer coefficient 1) = 1.5    | γ2 (inter-layer coefficient 2) = 1.5
In the numerical study, we iterate the diffusion process 50 times and examine whether the ratio of nodes with insusceptible states in the similarity network comes to stability.
Figure 13: Simulation iteration process
The horizontal axis of Figure 13 represents the iteration steps of
attitude propagation and random status change, while the vertical axis indicates the share of insusceptible-state nodes in the similarity network. Over the iterations, it is evident that the proportion of insusceptible nodes remains substantially stable at 50 percent with slight fluctuation. We can confidently say that the propagation reaches global relative stability after the maximum number of iterations; at that time, the nodes' statuses can be reckoned as the final distribution of information dissemination.
Figure 14: Intervention effect in the first community
Figure 15: Intervention effect in the second community
We exert intervention on the first community mentioned in Subsection 6.2 and simulate the distribution of viewpoints after 50 iterations under different intervention proportions. The result is exhibited in Figure 14. When the propagation stabilizes, the distribution of attitudes in the first community experiences a significant change. As the intervention level increases, the rate of positive attitudes shows an upward trend, while the share of negative attitudes decreases significantly. This means that our proposed intervention method can effectively relieve the polarization of viewpoints.
We also check the effect of the intervention on communities without intervention measures. Figure 15 exhibits the fluctuation of the attitude label distribution in a non-intervention community. We can conclude that there are no apparent changes in the second community when we only exert intervention measures on the first community. It also illustrates that there is little influence between communities, with standpoints propagating only within each community, which means the community partition can effectively locate information cocoons. Our proposed approach is effective in alleviating the emergence of information cocoons.
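The stability check above, tracking the insusceptible share over 50 iterations, can be sketched as follows. This is purely illustrative bookkeeping: the actual transition rules of the simulation system are defined in Subsection 4.4 and are not reproduced here, so the dynamics, the seeded initial fraction, and the parameter roles in this sketch are all assumptions (the names `alpha`, `beta`, `r2` merely mirror Table 3).

```python
import numpy as np

rng = np.random.default_rng(1)  # illustrative fixed seed

def insusceptible_share(A_sim, steps=50, alpha=0.3, beta=0.2, r2=0.2, seed_frac=0.1):
    # Hypothetical two-state dynamics on the similarity layer: susceptible nodes
    # exposed to insusceptible neighbors may switch with prob alpha * beta,
    # and insusceptible nodes recover with prob r2.
    n = A_sim.shape[0]
    insusceptible = rng.random(n) < seed_frac          # seed a few insusceptible nodes
    history = []
    for _ in range(steps):
        exposed = (A_sim @ insusceptible.astype(int)) > 0
        catch = ~insusceptible & exposed & (rng.random(n) < alpha * beta)
        recover = insusceptible & (rng.random(n) < r2)
        insusceptible = (insusceptible | catch) & ~recover
        history.append(float(insusceptible.mean()))    # share tracked in Figure 13
    return history
```

Plotting `history` against the step index reproduces the kind of stability curve shown in Figure 13: the share settles around a fixed level once switching and recovery balance out.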
7 Discussion
7.1 Sensitivity Analysis for the Simulation System
In the last section, we proposed a novel propagation simulation system for two-layer networks, with two initialization parameters and eight simulation system parameters. In this section, we perform a sensitivity analysis of some of these parameters, comparing the final distributions as the parameters vary.
We first analyze the sensitivity of the two initialization parameters, the susceptible parameter θ and the intervention ratio η. Setting five groups of susceptible parameters ranging from 0.1 to 0.5, we traverse six groups of intervention ratios from 0 to 25 percent and compare the changes in the ratio of users holding negative attitudes.
Figure 16: Sensitivity analysis in the first community
Figure 17: Sensitivity analysis in the second community
As we can see from Figure 16, under different susceptible ratios θ, the intervention effects vary to some extent in the community where the intervention measures are mainly implemented, indicating that the proposed simulation system has a certain sensitivity to the parameter θ. As mentioned above, the lower the susceptible parameter θ, the fewer people insist on their standpoints. The remaining people are more receptive to others' viewpoints and thus more likely to change their standpoints. Therefore, the intervention effects are sensitive to the intervention intensity, especially as the susceptibility parameter decreases. As shown in Figure 17, the ratio of negative attitudes in the
second community differs only slightly as the susceptible parameter θ changes when intervention measures are laid on the first community. As the intervention ratio increases, the proportion of negative attitudes shows only a moderate decline and remains largely unchanged overall.
Next, we analyze the sensitivity of the simulation system parameters to examine their influence on the stability of the propagation system.
Figure 18: Sensitivity analysis for γ1, γ2 and β
Figure 19: Sensitivity analysis for α and diffRate
Figures 18 and 19 exhibit the ratio of insusceptible attitudes under different inter-layer coefficients γ1, γ2 and acceptance rates β when the propagation reaches stability. As the acceptance rate β rises, the ratio of insusceptible states shows a noticeable upward trend. When the acceptance rate β changes within a low range, the variation of the simulation's final distribution is significant; in contrast, when the acceptance rate β moves into a high range around 0.4, the influence on the final distribution is slight. For the inter-layer coupling effects between the double-layer networks, on the other hand, the impact of the inter-layer coefficients γ1 and γ2 can be ignored, meaning these parameters are robust for the entire system. Likewise, in the user relationship layer, the sensitivity of the propagation rate α is low.
Figure 20: Sensitivity analysis for R1, R2 and β
Figure 21: Sensitivity analysis for R1, R2 and α
Figures 20 and 21 exhibit the sensitivity varying with the propagation rate α, acceptance rate β, recover rate R1, and transition rate R2. The transition rate R2 in the similarity network has a direct impact on the simulation system: namely, on how the nodes in the similarity network transfer their states from susceptible to insusceptible.
7.2 Parameter Analysis
As we can see from Figures 16 and 17, when the attitudes of highly influential nodes in the first community change, the users' attitude distribution in the second community changes little while the attitudes in the first community change significantly, meaning that our novel community detection algorithm has an excellent sub-network segmentation effect. The sub-networks formed by the communities are relatively closed: the influence of viewpoints between communities is low and information exchange is relatively little. Viewpoint exchange is mainly centralized within each sub-network, which also illustrates that the standpoints of individuals are easily influenced by users highly relevant to them.
For the initialization parameters, the intervention effects are sensitive to the susceptible parameter θ and the intervention ratio η, illustrating that the distribution of users' attitudes is related to the intensity with which intervention measures are implemented. Meanwhile, the level of acceptance plays an indispensable role in the emergence and extinction of information cocoons. This also confirms conclusions about information cocoons drawn by scholars: improving the population's literacy in terms of compatibility with heterogeneous information and different opinions has profound significance for eliminating information cocoons [43].
For the simulation system parameters, the propagation rate α has little impact on our model, because this parameter has a coupling effect with the level of users' acceptance of heterogeneous information, users' susceptible status, and the number of neighbors holding heterogeneous attitudes. Thus, the level of propagation of each individual has little impact on the ultimate attitude distribution. The acceptance rate β for individuals and the recover rate R1 have an influence on sensitivity, meaning that
the state transition parameters influence users' sensitivity. To be precise, the tendency of users to change their states by adopting heterogeneous perspectives has a significant influence on the dissemination of viewpoints and the formation of information cocoons.
8 Conclusion
Although recommendation systems make people's lives more convenient, their excessive use is widely reckoned to be a cause of information cocoons and may contribute to group polarization. Therefore, it is necessary to regularize recommendation rules and design an automatic community detection monitoring scheme. Such a scheme can precisely locate the key nodes that influence a small group of sub-networks at the initial stage of group polarization and adjust recommendation regulations in time, providing viewers with a more comprehensive understanding of whole events and avoiding the occurrence of group polarization. In short, the problems brought by the algorithm can be solved by the algorithm itself through reasonable utilization.
Aiming at the quantitative analysis of information cocoons, this paper proposes a novel multi-layer network community detection algorithm, which is effective for monitoring information cocoons. Simultaneously, this paper also proposes an intervention strategy in which algorithms can operate. Dissemination-principle-based double-layer network Markov transition models are used to simulate intervention measures, verifying that the intervention strategy in this paper specifically affects de-homogenization for individuals within relatively closed sub-networks. Consequently, the intervention measures proposed in our paper can relieve the information cocoon phenomenon while ensuring that members of other communities are less affected.
Some weaknesses exist in our models. Firstly, feature vectors are extracted only from word frequency, without consideration of semantic perspectives.
Due to the limitations of the social network dataset, we cannot obtain more user attribute data; thus, the second-layer network is constructed merely from word-weighted vectors. Secondly, apart from the three attitudes of positive, neutral, and negative, the node labels could be divided into different themes based on content and semantics. Finally, a reasonable range of threshold values for the information cocoon monitoring scheme has not been proposed in the experimental analysis; determining whether we need to exert intervention measures on sub-networks is still based on experience.
Further research would consider more user attributes, such as user characteristics, users' history information, and semantic features. This could bring the similarity layer closer to recommendation regulations, thus making the segmentation of sub-networks more precise. Combining semantic information and user history attributes, we can narrow down the scope of the intervention and achieve precise determination indexes.
References
[1] Mahmoud Al-Ayyoub, Mohammed Al-Andoli, Yaser Jararweh, Mohammad Smadi, and Brij Gupta. Improving fuzzy c-mean-based community detection in social networks using dynamic parallelism. Computers & Electrical Engineering, 74:533–546, 2019.
[2] Alex Arenas, Jordi Duch, Alberto Fernández, and Sergio Gómez. Size reduction of complex networks preserving modularity. New Journal of Physics, 9(6):176, 2007.
[3] Sondos Bahadori, Parham Moradi, and Hadi Zare. An improved limited random walk approach for identification of overlapping communities in complex networks. Applied Intelligence, 51(6):3561–3580, 2021.
[4] Aritra Bhowmick, Mert Kosan, Zexi Huang, Ambuj Singh, and Sourav Medya. DGCluster: a
neural framework for attributed graph clustering via modularity maximization. In Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence, 2024.
[5] Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008, 2008.
[6] Fred Brauer. The Kermack–McKendrick epidemic model revisited. Mathematical Biosciences, 198(2):119–131, 2005.
[7] Sandro Cavallari, Vincent W Zheng, Hongyun Cai, Kevin Chen-Chuan Chang, and Erik Cambria. Learning community embedding with community detection and node embedding on graphs. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 377–386, 2017.
[8] Hongxu Chen, Hongzhi Yin, Tong Chen, Quoc Viet Hung Nguyen, Wen-Chih Peng, and Xue Li. Exploiting centrality information with graph convolutions for network representation learning. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pages 590–601, 2019.
[9] Jun Jin Choong, Xin Liu, and Tsuyoshi Murata. Learning community structure with variational autoencoder. In 2018 IEEE International Conference on Data Mining (ICDM), pages 69–78, 2018.
[10] Daryl J Daley and David G Kendall. Epidemics and rumours. Nature, 204(4963):1118, 1964.
[11] Leon Danon, Albert Díaz-Guilera, Jordi Duch, and Alex Arenas. Comparing community structure identification. Journal of Statistical Mechanics: Theory and Experiment, 2005(09):P09008, 2005.
[12] Carl Eckart and Gale Young. The approximation of one matrix by another of lower rank. Psychometrika, 1(3):211–218, 1936.
[13] Andreas Flache and Michael W Macy. Small worlds and cultural polarization. In Micro-Macro Links and Microfoundations in Sociology, pages 146–176.
Routledge, 2014.
[14] Ming Gu, Tian-Fang Zhao, Liang Yang, Xiao-Kun Wu, and Wei-Neng Chen. Modeling information cocoons in networked populations: Insights from backgrounds and preferences. IEEE Transactions on Computational Social Systems, 11(3):4497–4510, 2024.
[15] Andrew M Guess, Neil Malhotra, Jennifer Pan, Pablo Barberá, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Matthew Gentzkow, et al. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science, 381(6656):398–404, 2023.
[16] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[17] Lei Hou, Xue Pan, Kecheng Liu, Zimo Yang, Jianguo Liu, and Tao Zhou. Information cocoons in online navigation. iScience, 26(1), 2023.
[18] Vassilis N Ioannidis, Antonio G Marques, and Georgios B Giannakis. A recurrent graph neural network for multi-relational data. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8157–8161, 2019.
[19] Bing-Yi Jing, Ting Li, Zhongyuan Lyu, and Dong Xia. Community detection on mixture multilayer networks via regularized tensor decomposition. The Annals of Statistics, 49(6):3181–3205, 2021.
[20] Thomas N Kipf and Max Welling. Variational graph auto-encoders. In Neural Information Processing Systems Workshop on Bayesian Deep Learning, pages 1–3, 2016.
[21] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning
Representations, 2017.
[22] Clement Lee and Darren J Wilkinson. A review of stochastic block models and extensions for graph clustering. Applied Network Science, 4(1):1–50, 2019.
[23] Nian Li, Chen Gao, Jinghua Piao, Xin Huang, Aizhen Yue, Liang Zhou, Qingmin Liao, and Yong Li. An exploratory study of information cocoon on short-form video platform. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 4178–4182, 2022.
[24] Jiamin Liu, Tao Wang, Feng Yao, Jingbo Huang, Renjie He, and Jiting Li. Career cocoons: Analyzing occupational mobility with graph embedding model. In 2023 6th International Conference on Data Science and Information Technology (DSIT), pages 115–123, 2023.
[25] Matteo Magnani and Luca Rossi. The ML-model for multi-layer social networks. In 2011 International Conference on Advances in Social Networks Analysis and Mining, pages 5–12, 2011.
[26] Mansoureh Naderipour, Mohammad Hossein Fazel Zarandi, and Susan Bastani. A multilayer general type-2 fuzzy community detection model in large-scale social networks. IEEE Transactions on Fuzzy Systems, 30(10):4494–4503, 2022.
[27] Peter J Mucha, Thomas Richardson, Kevin Macon, Mason A Porter, and Jukka-Pekka Onnela. Community structure in time-dependent, multiscale, and multiplex networks. Science, 328(5980):876–878, 2010.
[28] Tsuyoshi Murata and Naveed Afzal. Modularity optimization as a training criterion for graph neural networks. In Complex Networks IX: Proceedings of the 9th Conference on Complex Networks CompleNet 2018, pages 123–135. Springer, 2018.
[29] Mansoureh Naderipour, Mohammad Hossein Fazel Zarandi, and Susan Bastani. A type-2 fuzzy community detection model in large-scale social networks considering two-layer graphs. Engineering Applications of Artificial Intelligence, 90:103206, 2020.
[30] Mark EJ Newman. Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23):8577–8582, 2006.
[31] Mark EJ Newman. Spectral methods for community detection and graph partitioning. Physical Review E, 88:042822, 2013.
[32] Mark EJ Newman and Michelle Girvan. Finding and evaluating community structure in networks. Physical Review E, 69:026113, 2004.
[33] Andrew Ng, Michael Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In T. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14. MIT Press, 2001.
[34] Eli Pariser. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin Publishing Group, 2011.
[35] Luca Pasa, Nicolò Navarin, Alessandro Sperduti, et al. Deep recurrent graph neural networks. In ESANN 2020 - Proceedings, 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, pages 157–162, 2020.
[36] Romualdo Pastor-Satorras and Alessandro Vespignani. Epidemic spreading in scale-free networks. Physical Review Letters, 86:3200–3203, 2001.
[37] Subhadeep Paul and Yuguo Chen. Null models and community detection in multi-layer networks. Sankhya A, pages 1–55, 2021.
[38] Matjaž Perc. The Matthew effect in empirical data. Journal of The Royal Society Interface, 11(98):20140378, 2014.
[39] Jinghua Piao, Jiazhen Liu, Fang Zhang, Jun Su, and Yong Li. Human–AI adaptive dynamics drives the emergence of information cocoons. Nature Machine Intelligence, 5(11):1214–1224, 2023.
[40] Shanjiao Ren, Lili Liu, Suting Yang, and Jiujiu
https://arxiv.org/abs/2504.21357v3
Jiang. Investigating information co- coon attitudes in short-form video applications. In International Conference on Human- Computer Interaction , pages 89–96. Springer, 2022. [41] Mehrdad Rostami and Mourad Oussalah. A novel attributed community detection by integration of feature weighting and node centrality. Online Social Networks and Media , 30:100219, 2022. [42] Guillaume Salha-Galvan, Johannes F Lutzeyer, George Dasoulas, Romain Hennequin, and Michalis Vazirgiannis. Modularity-aware graph autoencoders for joint community detection and link prediction. Neural Networks , 153:474–495, 2022. [43] Fernando P Santos. How to break information cocoons. Nature Machine Intelligence , 5(12):1338–1339, 2023. [44] Oleksandr Shchur and Stephan G¨ unnemann. Overlapping community detection with graph neural networks. In International Conference on Learning Representations , 2019. [45] Huawei Shen, Xueqi Cheng, Kai Cai, and Mao-Bin Hu. Detect overlapping and hi- erarchical community structure in networks. Physica A: Statistical Mechanics and its Applications , 388(8):1706–1712, 2009. [46] Luis Sol´ a, Miguel Romance, Regino Criado, Julio Flores, Alejandro Garc´ ıa del Amo, and Stefano Boccaletti. Eigenvector centrality of nodes in multiplex networks. Chaos: An Interdisciplinary Journal of Nonlinear Science , 23(3), 2013. [47] Hao Song and Danxiang Ai. Modeling and simulation of user behavior under the influence of network information cocoon. In 2023 15th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC) , pages 63–66, 2023. [48] Huan Song and Jayaraman J Thiagarajan. Improved deep embeddings for inferencing with multi-layered graphs. In 2019 IEEE International Conference on Big Data (Big Data) , pages 5394–5400, 2019. [49] Cass R Sunstein. Infotopia: How Many Minds Produce Knowledge . Oxford University Press, 2006. [50] Cass R Sunstein. Republic: Divided Democracy in the Age of Social Media . Princeton University Press, 2018. 
[51] Cass R. Sunstein and Lucia A. Reisch. Trusting Nudges: Toward A Bill of Rights for Nudging. Taylor & Francis, 2019.

[52] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.

[53] Atul Kumar Verma and Mahipal Jadeja. An efficient centrality-based GNN for community detection in dynamic networks. In International Conference on Smart Systems: Innovations in Computing, pages 671–682. Springer, 2024.

[54] Xi Xiong, Yuanyuan Li, Shaojie Qiao, Nan Han, Yue Wu, Jing Peng, and Binyong Li. An emotional contagion model for heterogeneous social media with multiple behaviors. Physica A: Statistical Mechanics and its Applications, 490:185–202, 2018.

[55] Osman Yagan, Dajun Qian, Junshan Zhang, and Douglas Cochran. Conjoining speeds up information diffusion in overlaying social-physical networks. IEEE Journal on Selected Areas in Communications, 31(6):1038–1048, 2013.

[56] Yini Zhang, Fan Chen, and Karl Rohe. Social media public opinion as flocks in a murmuration: Conceptualizing and measuring opinion expression on social media. Journal of Computer-Mediated Communication, 27(1):zmab021, 2021.

[57] Magdalena Zioło, Piotr Bródka, Anna Spoz, and Jarosław Jankowski. Modeling the impact of external influence on green behaviour spreading in multilayer financial networks. In 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), pages 1–10, 2022.

A Comparison Algorithm: TWIST

The comparison algorithm, Tucker decomposition with integrated SVD transformation (TWIST), is based on the stochastic block model.

A.1 Single-Layer Graph Generative Model

The stochastic block model (SBM) was proposed in 1983 [16] and has since become the most common graph generative model, usually applied to community partitioning. In essence, it is a probabilistic generative model that treats the adjacency matrix as a sample for estimating the probability of a link between two nodes. In general, a link is more likely to exist between two nodes that belong to the same community, while nodes in different communities are less likely to be connected. We can therefore estimate a probabilistic graph representation matrix and deduce community affiliations.

For a graph $G = (V, E)$, the adjacency matrix is $A = (a_{ij})_{n \times n}$, and, assuming $K$ communities in total, the matrix $P = (p_{ij})_{K \times K}$ represents the probability of a link between a node in community $i$ and a node in community $j$. One-hot vectors represent community labels: the community representation vector of node $i$ is $Z_i = (z_{i1}, \dots, z_{im}, \dots, z_{iK})^\top \in \mathbb{R}^K$, where $z_{im}$, the $m$-th entry of $Z_i$, takes the value 0 or 1. The graph representation matrix $Z = (Z_1, \dots, Z_n)^\top \in \mathbb{R}^{n \times K}$ carrying the community information is composed of the representation vectors of all nodes. Let $\pi_m$ denote the probability that a node belongs to the $m$-th community, i.e., $P(z_{im} = 1) = \pi_m$ with $\sum_{m=1}^{K} \pi_m = 1$. The stochastic block model assumes that the existence of an edge between two nodes obeys
$$a_{ij} \sim \mathrm{Bernoulli}\big(Z_i^\top P Z_j\big).$$

A.2 Multi-Layer Graph Generative Model

In multilayer networks, large differences may exist between layers, so community detection must find a partition that accounts for the features of every layer. Unlike the stochastic community setting in single-layer networks, the multi-layer stochastic block model assumes that each layer is randomly assigned to one of several partitions determined by the stochastic model. Assume the network has $L$ layers and $M$ different community partitions.
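The generative process of A.1 and A.2 can be made concrete with a short numpy sketch. This is not code from the TWIST paper; function names and the assortative example matrices are invented for illustration.

```python
import numpy as np

def sample_sbm(labels, P, rng):
    """One single-layer SBM draw: a_ij ~ Bernoulli(Z_i^T P Z_j), which for
    one-hot Z_i reduces to P[labels[i], labels[j]]."""
    n = len(labels)
    probs = P[np.ix_(labels, labels)]           # pairwise edge probabilities
    A = np.triu(rng.random((n, n)) < probs, 1)  # independent upper-triangle draws
    return (A + A.T).astype(int)                # symmetric, no self-loops

def sample_multilayer_sbm(n, L, layer_pi, partitions, Ps, rng=None):
    """Multi-layer version: each layer first draws which partition it follows
    (P(psi_l = m) = layer_pi[m]), then samples edges from that partition's SBM."""
    rng = np.random.default_rng(rng)
    psi = rng.choice(len(partitions), size=L, p=layer_pi)
    A = np.stack([sample_sbm(partitions[m], Ps[m], rng) for m in psi])
    return psi, A                               # layer labels and L x n x n tensor

# Two candidate partitions of 6 nodes, both with assortative block probabilities
partitions = [np.array([0, 0, 0, 1, 1, 1]), np.array([0, 1, 0, 1, 0, 1])]
Ps = [np.array([[0.9, 0.1], [0.1, 0.9]])] * 2
psi, A = sample_multilayer_sbm(6, 4, [0.5, 0.5], partitions, Ps, rng=1)
```

Edges within a community are drawn with probability 0.9 and across communities with 0.1, so each sampled layer concentrates its edges inside the blocks of whichever partition it was assigned.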
Denote by $\psi_l$ the partition that the $l$-th layer follows. The probability that the $l$-th layer follows partition $m$ is $\pi_m$, i.e., $P(\psi_l = m) = \pi_m$ for $l = 1, 2, \dots, L$, with $\sum_{m=1}^{M} \pi_m = 1$. The $m$-th partition is generated by a single-layer stochastic block model: the probability matrix $P_m = (p_{ij}) \in \mathbb{R}^{K_m \times K_m}$, with entries between 0 and 1, gives the connection probabilities between communities, and the community affiliation indicator matrix $Z_m \in \mathbb{R}^{n \times K_m}$, whose rows are one-hot vectors, records which community each node belongs to. As in the single-layer case, the existence of a link between nodes $i$ and $j$ follows a Bernoulli distribution. For the $l$-th layer ($1 \le l \le L$),
$$a^l_{ij} \sim \mathrm{Bernoulli}\big(Z_{\psi_l}(i,:)\, P_{\psi_l}\, Z_{\psi_l}(j,:)^\top\big).$$
Thus, the $l$-th slice of the adjacency tensor $\mathcal{A}$ satisfies $E\big[a^l_{ij} \mid \psi_l\big] = Z_{\psi_l}(i,:)\, P_{\psi_l}\, Z_{\psi_l}(j,:)^\top$.

A.3 Tensor Decomposition

For each single-layer network $G_l$ in an $L$-layer network $\mathcal{G} = \{G_1, \dots, G_L\}$, there is an $n \times K_l$-dimensional graph representation matrix. The global graph representation matrix $Z = (Z_1, Z_2, \dots, Z_M) \in \{0,1\}^{n \times \sum_{j=1}^{M} K_j}$ is obtained by concatenating the representation matrices of the partitions. From the tensor representation theorem [19],
$$E(\mathcal{A} \mid \mathcal{L}) = \mathcal{P} \times_1 C \times_2 C \times_3 R,$$
where $R$ is the layer partition matrix, each row of which is a one-hot vector: $R = (\gamma_{\psi_1}, \gamma_{\psi_2}, \dots, \gamma_{\psi_L})^\top \in \{0,1\}^{L \times M}$ indicates which partition each layer belongs to.

A.4 Adjacency Tensor Decomposition and Graph Representation Tensor Estimation

We apply an SVD to the community representation matrix: $C = Z \Sigma V^\top$,
where $Z \in \mathbb{R}^{n \times r}$ and $V \in \mathbb{R}^{\sum_{j=1}^{M} K_j \times r}$. The matrix $\Sigma$ contains the singular values of the community representation matrix in descending order:
$$\Sigma = \begin{pmatrix} \sigma_1(\Sigma) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma_r(\Sigma) \end{pmatrix}, \tag{A.1}$$
where $\sigma_1(\Sigma) \ge \sigma_2(\Sigma) \ge \cdots \ge \sigma_r(\Sigma)$. Then
$$E(\mathcal{A} \mid \mathcal{L}) = \mathcal{P} \times_1 C \times_2 C \times_3 R = \mathcal{S} \times_1 Z \times_2 Z \times_3 R',$$
where the tensor $\mathcal{S} = \mathcal{P} \times_1 (\Sigma V^\top) \times_2 (\Sigma V^\top) \times_3 D^{1/2} \in \mathbb{R}^{r \times r \times M}$ and $D = R'^{-1}(RR)R'^{-1}$. We can determine whether nodes $i$ and $j$ belong to the same community from the distance between their representation vectors: if nodes $i$ and $j$ are not in the same community, then
$$\|Z(i,:) - Z(j,:)\| \ge \frac{1}{\max(\sigma(\Sigma))}.$$

Figure 22: TWIST Algorithm

B Lemmas

Lemma 1 (Eckart–Young theorem [12]). Let $A \in \mathbb{R}^{m \times n}$ ($n \ge m$) be a real matrix with singular value decomposition $A = U\Sigma V^\top$. The optimal solution of the minimization problem
$$\arg\min_{\hat{A} \in \mathbb{R}^{m \times n},\; \mathrm{rank}(\hat{A}) \le r} \big\|A - \hat{A}\big\|_F \tag{B.1}$$
is called the best rank-$r$ approximation of $A$ and is given by $\hat{A}_r = \sum_{i=1}^{r} \sigma_i u_i v_i^\top$. Moreover, $\big\|A - \hat{A}_r\big\|_F = \sqrt{\sigma_{r+1}^2 + \cdots + \sigma_m^2}$.

Lemma 2 (Low-rank approximation). The modularity maximization problem in a single-layer network,
$$\max_{\mathrm{Tr}(Z^\top Z) = N} \mathrm{Tr}\big(Z^\top B Z\big), \tag{B.2}$$
is equivalent to an $r$-th order low-rank reconstruction of the modularity matrix $B$. Moreover, the $r$-th order low-rank reconstruction can be formulated as $\hat{B}_r = \Psi\Psi^\top$, where $\Psi = H\Lambda$, $\Sigma_r = \Lambda\Lambda^\top$, and $B = [H, P]\,\Sigma\,[H, P]^\top$.

Proof of Lemma 1. With the singular value decomposition $A = U\Sigma V^\top$,
$$\|A\|_F = \big\|U\Sigma V^\top\big\|_F = \sqrt{\mathrm{Tr}\big(V\Sigma^\top U^\top U\Sigma V^\top\big)} = \sqrt{\mathrm{Tr}\big(V\Sigma^\top\Sigma V^\top\big)} = \sqrt{\mathrm{Tr}\big(\Sigma^\top\Sigma V^\top V\big)} = \sqrt{\mathrm{Tr}\big(\Sigma^\top\Sigma\big)} = \sqrt{\sigma_1^2 + \cdots + \sigma_m^2}.$$
Thus $\big\|A - \hat{A}_r\big\|_F = \sqrt{\sigma_{r+1}^2 + \cdots + \sigma_m^2}$, which completes the proof.

Proof of Lemma 2. We use the eigenvalue decomposition
$$B = [H, P] \begin{pmatrix} \Sigma_r & O \\ O & \Sigma_{n-r} \end{pmatrix} [H, P]^\top,$$
where $\Sigma_r$ is the diagonal matrix of the top $r$ eigenvalues. Consequently, the optimal rank-$r$ approximation of the modularity matrix $B$ is $\hat{B}_r = H\Sigma_r H^\top$. By Lemma 1, $\Sigma_r$ collects the $r$ largest eigenvalues of $B$, so this block is positive definite. Letting $\Sigma_r = \Lambda\Lambda^\top$, we obtain $\hat{B}_r = H\Lambda\Lambda^\top H^\top = (H\Lambda)(H\Lambda)^\top$, which completes the proof.
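As a quick numerical illustration of Lemma 1 (my sketch, not part of the appendix), truncating the SVD attains the Eckart–Young error exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))            # m x n with n >= m
U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 2
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]   # best rank-r approximation
err = np.linalg.norm(A - A_r, 'fro')

# Eckart-Young: the error equals sqrt(sigma_{r+1}^2 + ... + sigma_m^2)
expected = np.sqrt((s[r:] ** 2).sum())
```

Here `err` and `expected` coincide up to floating-point rounding, since $A - \hat{A}_r$ keeps exactly the trailing singular values of $A$.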
C Computation of Influential Factors and Analysis

In this section, we give the details of the iterative process for computing the influential factors.

Algorithm 2 Computation of Influential Factors
1: Initialize the parameters by degree centrality: $DC = [DC(1), DC(2), \dots, DC(N)]^\top$, $EC^{(0)} = DC$, $\lambda = 0.85$.
2: Calculate the normalized adjacency matrix in each layer: $\tilde{A}^{(l)} = D + A$, where $D = \mathrm{diag}\big(\sum_j a_{1j}, \dots, \sum_j a_{Nj}\big)$; then $\tilde{A} = \tilde{A}^{(1)} \cup \tilde{A}^{(2)} \cup \cdots \cup \tilde{A}^{(L)}$ and $M = \tilde{D}^{-1}\tilde{A}$.
3: Iterate: while $\big\|EC^{(\alpha+1)} - EC^{(\alpha)}\big\| > \epsilon$ and $\alpha < $ epoch:
$$EC^{(\alpha+1)} = \lambda \cdot M \cdot EC^{(\alpha)} + (1-\lambda)\, EC^{(\alpha)}, \qquad \alpha = \alpha + 1.$$

To verify this algorithm, we take the top 100 most influential users and visualize their maximum associated users. The number of maximum associated users is the cardinality of the maximal neighbour set:
$$|N_{\max}(i)| = \Big|\bigcup_{l=1}^{L} N(l, i)\Big|.$$

Figure 23: Neighbor count varying with influence rank.

In Figure 23, as user influence decreases, the number of adjacent users shows a corresponding downward trend, apart from a slight fluctuation concentrated among the most influential users.
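A minimal numpy sketch of Algorithm 2 follows. It is my interpretation, not the authors' code: the layer "union" is taken as the element-wise maximum of the adjacency matrices, and the update is implemented exactly as written (a PageRank-style variant would instead propagate along the transpose of $M$).

```python
import numpy as np

def influence_scores(layers, lam=0.85, eps=1e-8, max_iter=10_000):
    """Damped iteration of Algorithm 2: aggregate the layers by edge union,
    build the row-normalized matrix M = D_tilde^{-1} (D + A), and iterate
    EC <- lam * M @ EC + (1 - lam) * EC from a degree-centrality start."""
    A = (np.maximum.reduce(layers) > 0).astype(float)   # union of the layers
    A_tilde = np.diag(A.sum(axis=1)) + A                # D + A, as in the paper
    M = A_tilde / A_tilde.sum(axis=1, keepdims=True)    # row-stochastic
    ec = A.sum(axis=1)                                  # degree-centrality init
    for _ in range(max_iter):
        new = lam * M @ ec + (1 - lam) * ec
        if np.linalg.norm(new - ec) < eps:
            return new
        ec = new
    return ec

# Toy 4-node, 2-layer example (symmetric adjacency matrices)
layer1 = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
layer2 = np.array([[0, 1, 0, 0], [1, 0, 0, 1], [0, 0, 0, 1], [0, 1, 1, 0]], float)
scores = influence_scores([layer1, layer2])
```

The example assumes every node has at least one edge in some layer, so each row of $D + A$ has a positive sum and the normalization is well defined.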
arXiv:2504.21363v1 [math.ST] 30 Apr 2025

The differential structure shared by probability and moment matching priors on non-regular statistical models via the Lie derivative*

Masaki Yoshioka and Fuyuhiko Tanaka
The University of Osaka, Osaka, Japan.
*Corresponding author(s). E-mail(s): yoshioka@sigmath.es.osaka-u.ac.jp;
Contributing authors: ftanaka.celas@osaka-u.ac.jp;

Abstract

In Bayesian statistics, the selection of noninformative priors is a crucial issue. There have been various discussions on theoretical justification, problems with the Jeffreys prior, and alternative objective priors. Among them, we focus on two types of matching priors consistent with frequentist theory: the probability matching priors and the moment matching priors. In particular, no clear relationship has been established between these two types of priors on non-regular statistical models, even though they share similar objectives. Considering information geometry on a one-sided truncated exponential family, a typical example of non-regular statistical models, we find that the Lie derivative along a particular vector field provides the conditions for both the probability and moment matching priors. Notably, this Lie derivative does not appear in regular models. These conditions require the invariance of a generalized volume element with respect to differentiation along the non-regular parameter. This invariance leads to a suitable decomposition of the one-sided truncated exponential family into one-dimensional submodels. This result promotes a unified understanding of probability and moment matching priors on non-regular models.

Keywords: Information geometry, Bayesian statistics, truncated exponential family, probability matching prior, moment matching prior

MSC Classification: Primary 62F15; Secondary 62B11

*This is a preprint submitted to Sankhya A.

1 Introduction

The choice of noninformative priors is one of the challenges in Bayesian statistics.
Since the effectiveness of Bayesian methods depends on priors, it is necessary to set up an appropriate prior for statistical analysis according to the statistical model and task. We can use a subjective prior if we know something about the parameters. However, "objectivity" in the prior often makes Bayesian methods effective when we do not know the parameters. Such a prior is called a noninformative prior or an objective prior.

Theoretical studies on noninformative priors have a long history and have produced many objectivity criteria and resulting priors. Jeffreys (1961) derives a prior invariant under parameter transformations, the so-called Jeffreys prior. The theory of reference priors justifies the Jeffreys prior (Bernardo, 1979). Furthermore, the probability matching prior and the moment matching prior are also well-known noninformative priors that combine frequentist theory and Bayesian statistical theory.

The probability matching prior matches the posterior and frequentist probabilities of the confidence interval. Welch and Peers (1963) introduce this idea, which has been developed since then (see also Peers (1965), Tibshirani (1989), J.K. Ghosh and Mukerjee (1992), Datta and Mukerjee (2004), and Sweeting (2008)). Probability matching priors also have invariance under parameter transformations (Datta & Ghosh, 1996). On the other hand, the moment matching prior by M. Ghosh and Liu (2011) matches the Bayesian posterior mean and the maximum likelihood estimator. Using moment matching priors, the posterior mean inherits the asymptotic optimality of the maximum likelihood estimator, and bias correction can also be performed. On the other hand,
there are also many studies of noninformative priors in non-regular statistical models, where the support of the distributions depends on the parameters (Ghosal (1997); Hashimoto (2021); Ortega and Basulto (2016); Shemyakin (2023)). Bayesian statistics for non-regular models is also essential since these models have many applications (Brown & Walker, 1995; Lancaster, 1997). In particular, the model by Ghosal and Samanta (1995) has been investigated well. Ghosal (1999) gives the probability matching prior, and Hashimoto (2019) provides the moment matching prior in this model.

We construct a theory that treats probability and moment matching priors in a unified manner in these kinds of non-regular models. The two matching priors have a similar purpose: choosing a prior distribution that matches frequentist theory. However, their relationship has remained unclear, as they have been discussed separately. Therefore, by considering the information geometry of non-regular models, we clarify the structure of the two matching priors.

Information geometry is helpful in statistical theory (Amari & Nagaoka, 2000), including for figuring out noninformative priors. For regular models, Takeuchi and Amari (2005) give a family of noninformative priors called the α-parallel priors from a geometric point of view. Tanaka (2023) discovers geometric properties of some noninformative priors and, in particular, clarifies that the conditions of the moment matching prior depend on these geometric properties. However, the use of information geometry in non-regular models is limited. Recently, Yoshioka and Tanaka (2023a) discuss the information geometry of a one-sided truncated exponential family (oTEF), a typical non-regular model.

The present paper provides sufficient conditions and the characterization of the probability and moment matching priors for multivariate non-regular models, especially for an oTEF.
In this model, we derive the asymptotic expansion of the posterior distribution and the partial differential equations for the two matching priors with nuisance parameters. Furthermore, by restricting the model to an oTEF, we represent those partial differential equations by the Riemannian metric and the α-connection coefficients (Yoshioka & Tanaka, 2023a). Then, the Lie derivative along a common vector field appears in the conditions of the two types of matching priors. These conditions require the invariance of a generalization of the volume element with respect to differentiation along the non-regular parameter under a parameter transformation. This geometric property induces a natural decomposition into one-dimensional submodels.

This paper is organized as follows. Section 2 defines the one-sided truncated family and the notation. In Section 3, we derive the conditions for the probability matching priors on the oTF. In Section 4, we derive the conditions for the moment matching priors on the oTF. Then, we discuss the relationship between the two types of matching priors in Section 5. Finally, Section 6 summarizes these results.

2 One-sided truncated family

A one-sided truncated family (Akahira, 2021), oTF for short, is a typical non-regular statistical model with parameter-dependent support. Let $\Theta$ be an open subset of $\mathbb{R}^d$ and $I = (I_1, I_2)$ be an open interval, where $-\infty \le I_1 < I_2 \le \infty$. Consider a parametrized family $\mathcal{P} = \{P_{\theta,\gamma} : \theta \in \Theta, \gamma \in I\}$ of probability distributions $P_{\theta,\gamma}$ having density
$$p(x;\theta,\gamma) = q(x;\theta)\, e^{-\psi(\theta,\gamma)} \cdot \mathbb{1}_{[\gamma, I_2)}(x) \qquad (x \in I)$$
with respect to the Lebesgue measure, where $q(x;\theta)$ is positive. This family $\mathcal{P}$ is called a one-sided truncated family (oTF), or more precisely, a lower-truncated family (lTF). We call the parameter $\gamma$ the truncation parameter and the parameter $\theta = (\theta^1, \dots, \theta^d)$ the regular parameter. Suppose that $\mathcal{P}$ is identifiable in the sense that for any $\theta_1, \theta_2 \in \Theta$ and $\gamma \in I$, $P_{\theta_1,\gamma} = P_{\theta_2,\gamma}$ implies $\theta_1 = \theta_2$. We also assume
that $p(x;\theta,\gamma)$ is infinitely differentiable in $\theta$ and $\gamma$ on the interval $(\gamma, I_2)$. An oTF is a non-regular statistical model because the support $[\gamma, I_2]$ of the distribution depends on the truncation parameter $\gamma$. Note that the submodel $\{P_{\theta,\gamma} : \theta \in \Theta\}$ is regular for any $\gamma \in I$.

We also consider a one-sided truncated exponential family, a submodel of an oTF. When the function $q(x;\theta)$ has the form
$$q(x;\theta) = \exp\left\{\sum_{i=1}^{d} \theta^i F_i(x) + M(x)\right\} \qquad (x \in I), \tag{1}$$
where $M \in C(I)$ and $F_i \in C^\infty(I)$ $(i = 1, \dots, d)$, we call the family $\mathcal{P}_e = \{P_{\theta,\gamma} : \theta \in \Theta, \gamma \in I\}$ a one-sided truncated exponential family (oTEF). In this case, we call the regular parameters $\theta$ natural parameters. Bar-Lev (1984), Akahira (2016) and Akahira (2017) investigate the statistical properties of the oTEF in detail. For any fixed parameter $\gamma \in I$, the submodel $\{P_{\theta,\gamma} : \theta \in \Theta\}$ is an exponential family.

Let $X_1, \dots, X_n$ be i.i.d. random samples from a distribution $p(x;\theta,\gamma)$ in an oTF $\mathcal{P}$. The MLE of $\gamma$ is given by $\hat{\gamma}_{ML} = \min_{1 \le i \le n} X_i$. Assume that the likelihood equation $(1/n)\sum_{j=1}^{n} \partial_i \log p(x_j;\theta,x_{(1)}) = 0$ has a unique solution $\hat{\theta}^i_{ML}$ $(1 \le i \le d)$ for any $x = (x_1, \dots, x_n) \in (\gamma, I_2)^n$. Then, there exists an MLE $\hat{\theta}^i_{ML}$ that satisfies the likelihood equation (Akahira, 2021)
$$\sum_{j=1}^{n} \partial_i \log p(X_j;\theta,\hat{\gamma}_{ML}) = 0.$$
Note that the two MLEs $\hat{\theta}^i_{ML}$ and $\hat{\gamma}_{ML}$ have different orders of convergence; the subsequent sections will provide further details on this difference. Writing the expectations of the derivatives of the log-likelihood functions in vector notation simplifies the presentation of results in the following sections. Define $\partial_i := \partial/\partial\theta^i$ for $i = 1, \dots, d$ and $\partial_\gamma := \partial/\partial\gamma$. Let $D_\theta := (\partial_1, \dots, \partial_d)^\top$. We write the Kronecker product of two matrices $A$ and $B$ as $A \otimes B$; the Kronecker product of $r$ copies of a matrix $A$ is denoted by $A^{\otimes r}$.
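A concrete instance (my example, not the paper's) is the shifted exponential distribution $p(x;\theta,\gamma) = \theta e^{-\theta(x-\gamma)}\,\mathbb{1}_{[\gamma,\infty)}(x)$, an oTEF with $d = 1$. The sketch below computes both MLEs numerically; the sample minimum estimates $\gamma$ at rate $1/n$, while the regular parameter is only $\sqrt{n}$-consistent, illustrating the different orders of convergence mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, gamma_true = 2.0, 1.0

def mle_shifted_exponential(x):
    """MLEs for p(x; theta, gamma) = theta * exp(-theta*(x - gamma)), x >= gamma.

    gamma_hat is the sample minimum; plugging it into the likelihood
    equation for theta gives theta_hat = 1 / (mean - min)."""
    gamma_hat = x.min()
    theta_hat = 1.0 / (x.mean() - gamma_hat)
    return theta_hat, gamma_hat

x = gamma_true + rng.exponential(1 / theta_true, size=100_000)
theta_hat, gamma_hat = mle_shifted_exponential(x)
```

With $n = 10^5$ samples, the error of `gamma_hat` is of order $1/(n\theta)$, several orders of magnitude smaller than the $O(1/\sqrt{n})$ error of `theta_hat`.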
We define
$$A^{(r,s)}(\theta,\gamma) := E\big[D_\theta^{\otimes r} (\partial_\gamma)^s \log p(X_1;\theta,\gamma)\big] \qquad (r,s = 0,1,2,\dots),$$
$$c(\theta,\gamma) := A^{(0,1)}(\theta,\gamma) = E[\partial_\gamma \log p(X_1;\theta,\gamma)] = -\partial_\gamma \psi(\theta,\gamma),$$
$$\hat{A}^{(r,s)} := \frac{1}{n}\sum_{i=1}^{n} D_\theta^{\otimes r} (\partial_\gamma)^s \log p(X_i;\hat{\theta}_{ML},\hat{\gamma}_{ML}) \qquad (r,s = 0,1,2,\dots),$$
$$\hat{c} := \frac{1}{n}\sum_{i=1}^{n} \partial_\gamma \log p(X_i;\hat{\theta}_{ML},\hat{\gamma}_{ML}).$$
$A^{(r,s)}(\theta,\gamma)$ and $\hat{A}^{(r,s)}$ are $d^r$-dimensional vectors. Each component of $A^{(r,s)}(\theta,\gamma)$ is written as
$$A^{(1,s)}_i(\theta,\gamma) := E[\partial_i (\partial_\gamma)^s \log p(X_1;\theta,\gamma)],$$
$$A^{(2,s)}_{ij}(\theta,\gamma) := E[\partial_i \partial_j (\partial_\gamma)^s \log p(X_1;\theta,\gamma)],$$
$$A^{(3,s)}_{ijk}(\theta,\gamma) := E[\partial_i \partial_j \partial_k (\partial_\gamma)^s \log p(X_1;\theta,\gamma)]$$
for $i,j,k = 1,\dots,d$ and $s = 0,1,2,\dots$. We use similar notation for the components of $\hat{A}^{(r,s)}$. We sometimes omit the arguments, writing $A^{(r,s)}$ and $c$ for $A^{(r,s)}(\theta,\gamma)$ and $c(\theta,\gamma)$ when they are clear from context. Furthermore, we abbreviate the transposed vector $\big(A^{(r,s)}\big)^\top$ as $A^{(r,s)\top}$.

We introduce the notation of information geometry to examine the geometric aspects of the two matching priors. We use the Riemannian metric and the α-connections defined by Yoshioka and Tanaka (2023a) for the oTF $\mathcal{P}$. The Riemannian metric $g$ on $\mathcal{P}$ is defined by
$$g_{ij}(\theta,\gamma) := -E[\partial_i \partial_j \log p(X_1;\theta,\gamma)] \quad (i,j = 1,\dots,d),$$
$$g_{i\gamma}(\theta,\gamma) := 0 \quad (i = 1,\dots,d),$$
$$g_{\gamma\gamma}(\theta,\gamma) := \{\partial_\gamma \psi(\theta,\gamma)\}^2.$$
The submatrix $g_\theta = (g_{ij}(\theta,\gamma))_{1\le i,j\le d}$, consisting of the regular part of $g$, is the Fisher information matrix of the submodel $\{P_{\theta,\gamma} : \theta \in \Theta\}$ for $\gamma \in I$. Let $\hat{g}_\theta$ denote the matrix built from $\hat{A}^{(2,0)}_{ij}$ for $i,j = 1,\dots,d$ according to the relation $g_\theta = \big(-A^{(2,0)}_{ij}\big)_{1\le i,j\le d}$. Note that the full matrix $(g_{ab})$ $(a,b = 1,\dots,d,\gamma)$ corresponds to the asymptotic covariance of the vector $\big(\hat{\theta}_{ML}, \hat{\gamma}_{ML}\big)$ (Yoshioka & Tanaka, 2023b). Let $\Gamma^g$ denote the Levi-Civita connection coefficients with respect to the metric $g$, given by
$$\Gamma^g_{ab,c}(\theta,\gamma) = \frac{1}{2}\big(\partial_a g_{bc}(\theta,\gamma) + \partial_b g_{ca}(\theta,\gamma) - \partial_c g_{ab}(\theta,\gamma)\big)$$
for $a,b,c = 1,\dots,d,\gamma$. We also define the α-connections on $\mathcal{P}$ with the connection coefficients
$$\Gamma^{(\alpha)}_{ab,c}(\theta,\gamma) := \alpha E\big[(\partial_a \partial_b l_{X_1,\theta,\gamma})(\partial_c l_{X_1,\theta,\gamma})\big] + (1-\alpha)\Gamma^g_{ab,c}$$
for $\alpha \in \mathbb{R}$ and $a,b,c = 1,\dots,d,\gamma$, where $l_{X_1,\theta,\gamma} = \log p(X_1;\theta,\gamma)$.
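For concreteness (again the shifted exponential, my example rather than the paper's), the quantities $\psi$, $c$, and $g_{\gamma\gamma}$ can be written in closed form:

```latex
% Shifted exponential as an oTEF: d = 1, F_1(x) = -x, M(x) = 0, so that
% p(x;\theta,\gamma) = \theta e^{-\theta(x-\gamma)} \mathbb{1}_{[\gamma,\infty)}(x)
%                    = e^{-\theta x}\, e^{-\psi(\theta,\gamma)}\, \mathbb{1}_{[\gamma,\infty)}(x),
\psi(\theta,\gamma) = -\log\theta - \theta\gamma .
% Hence the definitions above give
c(\theta,\gamma) = -\partial_\gamma \psi(\theta,\gamma) = \theta,
\qquad
g_{\gamma\gamma}(\theta,\gamma) = \{\partial_\gamma\psi(\theta,\gamma)\}^2 = \theta^2 .
```

This is consistent with $c = E[\partial_\gamma \log p]$, since $\partial_\gamma \log p(x;\theta,\gamma) = \theta$ identically on the support.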
In particular, the connection with $\alpha = 1$, $\Gamma^{(1)}_{ab,c}$, also called the e-connection, is denoted by $\Gamma^{(e)}_{ab,c}$. Here, we ignore the null set $\{x = \gamma\}$, where $\log p(x;\theta,\gamma)$ is not differentiable, in the above expectations. Only the regular parts $\Gamma^{(\alpha)}_{ij,k}$ satisfy
$$\Gamma^{(\alpha)}_{ij,k}(\theta,\gamma) = \Gamma^g_{ij,k}(\theta,\gamma) - \frac{\alpha}{2} E\big[(\partial_i l_{X_1,\theta,\gamma})(\partial_j l_{X_1,\theta,\gamma})(\partial_k l_{X_1,\theta,\gamma})\big] \tag{2}$$
for $i,j,k = 1,\dots,d$. We will use