be represented using factor graphs, which are bipartite graphs connecting variable nodes to factor nodes. Each factor f_j maps a subset X_j of variables to a non-negative value. The joint distribution is then P(x) = (1/Z) · Π_{j=1}^{m} f_j(x_j). While both representations are equivalent, factor graphs make inference structure explicit and are often used in algorithmic implementations. Exact inference in MRFs is known to be intractable in general, with complexity scaling exponentially in the treewidth of G [WJ08, KF09]. While the definition of AFEG does not necessitate the use of MRFs in general (any joint distribution P that is Markov with respect to G works), we find that it is a reasonable model in our disease testing application (Section 1.1), where real-world sexual interaction graphs are often of low treewidth (see Section 4).

Reinforcement learning (RL). Sequential decision-making is classically modeled using Markov decision processes (MDPs) [Put14] and solved using reinforcement learning (RL) techniques. Prominent algorithms include Q-learning [WD92], policy gradient methods [SMSM99], and deep RL approaches like deep Q-networks (DQN) [MKS+15]. In principle, AFEG can be cast as an MDP, but doing so leads to an exponentially large state space: the agent must track both which nodes have been selected and their revealed labels. This complexity makes direct application of off-the-shelf RL methods impractical in our setting without customization and heavy finetuning.

Multi-armed bandits (MAB), Gittins index, and branching bandits. AFEG bears some resemblance to Bayesian multi-armed bandits (MABs), where Gittins index policies are optimal under assumptions like arm independence and infinite-horizon discounted rewards [Git79, GGW11].
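The factor-graph joint P(x) = (1/Z) · Π_j f_j(x_j) can be evaluated by brute-force enumeration on tiny instances. The following Python sketch illustrates the definition on a two-node toy chain; all names and the example factor table are illustrative, not from the paper's codebase:

```python
from itertools import product

def joint_probability(x, factors):
    """Unnormalized product of factor values for assignment x (a dict node -> label)."""
    p = 1.0
    for scope, f in factors:  # scope: tuple of node names, f: dict assignment -> value
        p *= f[tuple(x[v] for v in scope)]
    return p

def normalizer(nodes, labels, factors):
    """Partition function Z by brute-force enumeration (exponential; tiny graphs only)."""
    return sum(joint_probability(dict(zip(nodes, assign)), factors)
               for assign in product(labels, repeat=len(nodes)))

# Toy chain X1 - X2 with binary labels: one pairwise factor favouring agreement.
nodes, labels = ["X1", "X2"], [0, 1]
factors = [(("X1", "X2"), {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0})]
Z = normalizer(nodes, labels, factors)
p00 = joint_probability({"X1": 0, "X2": 0}, factors) / Z
```

The exponential cost of the enumeration is exactly why low treewidth matters: structured inference replaces this sum with dynamic programming over a tree decomposition.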
However, key differences prevent a reduction of AFEG to a standard MAB: (i) each arm (node) can be selected at most once; (ii) the set of available actions changes dynamically due to the frontier constraint; and (iii) action outcomes are correlated through the graph structure and joint distribution P. As such, a closer abstraction is the branching bandit model [Wei88, Tsi94, KO03], where actions dynamically activate new options, closely mirroring frontier expansion in AFEG. While Gittins index policies are known to be optimal for branching bandits [KO03], no efficient method has been proposed to compute them in general. Indeed, Gittins indices are underused in practice due to perceived computational intractability in all but simple settings [Sco10, MKLL12, Edw19]. Our work addresses this gap by presenting the first efficient implementation of Gittins-based policies in discrete branching bandits with history-dependent rewards, enabling their use in structured settings like network-based disease testing. Related index-based techniques for classic MABs are reviewed in [CM14], but do not extend to branching structures.

Active search on graphs. As discussed in Section 1, AFEG is related to active search on graphs [GKX+12, WGS13, JMC+17, JMA+18], which aims to identify as many target nodes as possible under a budget. However, these works do not impose a frontier constraint while searching over binary labels, and typically assume a relaxed Gaussian random field model [ZLG03] for tractable inference. In contrast, our formulation models the joint distribution explicitly as an MRF. While exact inference is generally intractable, many real-world graphs — especially sexual contact networks — have low treewidth, making structured modeling
https://arxiv.org/abs/2505.21671v1
with MRFs feasible in practice (see Section 4).

Influence maximization. Another well-studied sequential decision problem on graphs is influence maximization, where the goal is to select seed nodes to maximize influence spread under stochastic propagation models such as the independent cascade or linear threshold [KKT03]. This framework has been applied to health interventions, such as selecting peer leaders to disseminate information in HIV prevention efforts among homeless youth [YWR+17, WOVH+18]. While both influence maximization and AFEG involve decisions on graphs, their objectives differ: the former focuses on maximizing long-term diffusion, whereas AFEG emphasizes label-driven, reward-maximizing sequential actions under uncertainty and exploration constraints.

Network-based HIV testing and transmission modeling. HIV transmission dynamics have been extensively studied through network-based models, where individuals are represented as nodes and edges denote reported sexual or social contacts [Rot09, MGDGO25]. Such networks can be constructed from contact tracing, respondent-driven sampling (RDS), or molecular surveillance data [Hec97, GS09, ADGR+19]. While existing research has used methods such as generalized estimating equations, mixed effects regression, and graph attention networks to fit transmission probabilities [BMAK+21, WLK+23, XFL+21], it does not model the sequentiality of assigning tests to individuals.

3 A Gittins index-based policy for the AFEG problem

We propose a policy, Gittins, for the AFEG problem that is based on Gittins indices. In Section 3.1, we show that when the input graph G is a forest, AFEG reduces exactly to the branching bandit framework [Wei88, Tsi94, KO03], under which Gittins is provably optimal. While [KO03] established the existence of an optimal Gittins index policy, they did not characterize the index explicitly nor provide an efficient method for computing it.
In Section 3.2, we prove that the key recursive functions involved in computing the index are piecewise linear, which enables practical and efficient computation. The implementation of Gittins and its runtime analysis are given in Section 3.3.

3.1 Reduction to branching bandits for tree-structured instances

In the branching bandit model [Wei88, Tsi94, KO03], a project is represented as a rooted tree, where each node corresponds to an action. Selecting a node yields a stochastic immediate reward and activates its children for future selection. A node is available if it is the root or a child of a previously selected node. Under the frontier exploration constraint of AFEG, a forest G naturally induces a collection of rooted trees; see Fig. 2. The problem then reduces to a branching bandit instance by treating each leaf as if it can be selected infinitely many times with zero reward, or equivalently, by appending an infinite chain of zero-reward nodes to each leaf. At each timestep, the available actions correspond to the current frontier: nodes adjacent to those already selected. Crucially, due to the Markov property of P, selecting a node X only affects posterior beliefs over its descendants, preserving the conditional independence structure required by the branching bandit formulation. When G is a forest, the problem decomposes across connected components, allowing Gittins indices to be computed independently for each rooted tree.
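The reduction can be illustrated with a small simulator: acting on a node removes it from the frontier, reveals its (pre-drawn) label, and activates its children. The 8-node tree below is a hypothetical one consistent with Fig. 2's frontier example, and the function names are ours, not the paper's:

```python
def run_frontier(children, root, pick, labels):
    """Simulate frontier exploration on a rooted tree: `pick` chooses a frontier
    node, whose label is revealed and whose children become available."""
    frontier, order = {root}, []
    while frontier:
        x = pick(frontier)
        frontier.remove(x)
        order.append((x, labels[x]))
        frontier.update(children.get(x, []))  # acting on x activates Ch(x)
    return order

# Hypothetical 8-node tree matching Fig. 2's frontier example:
# after acting on {X1, X3}, the frontier is {X2, X4, X6}.
children = {"X1": ["X2", "X3", "X4"], "X2": ["X5"], "X3": ["X6"], "X6": ["X7", "X8"]}
labels = {f"X{i}": 1 for i in range(1, 9)}
order = run_frontier(children, "X1", min, labels)  # `min` = lexicographic tie-break
```

Swapping `min` for an index-based rule (act on the frontier node with the largest Gittins index) turns this simulator into the Gittins policy's rollout loop.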
Figure 2: Reduction to a branching bandit on 8 nodes with root X1. After acting on {X1, X3}, the frontier is {X2, X4, X6}. Note that we have P(x2 | x1, x3) = P(x2 | x1) by the Markov property.

To define the Gittins index, we assume the reward r(X, v) for revealing label v ∈ Σ at node X is bounded by some finite r̄, i.e., −∞ < −r̄ ≤ r(X, v) ≤ r̄ < ∞, and introduce a retirement option with one-off reward m.³ This enables a recursive characterization of the index: at any step, the policy can choose to continue exploring, or stop acting and receive a one-off fixed reward of m. Since rewards are upper bounded by r̄, the maximum attainable reward is at most r̄ + β·r̄ + β²·r̄ + ... = r̄/(1−β). So, when m > r̄/(1−β), any optimal policy should choose the retirement action and quit.

³There are several equivalent ways of proving the optimality of Gittins indices in the classic non-branching setting, e.g., the original stopping problem formulation [Git74], the retirement option process formulation [Whi80], the restart-in-state formulation [KVJ87], the prevailing charge formulation [Web92], state space reduction [Tsi94], etc. The branching bandit optimality proof of [KO03] builds on [Whi80]'s retirement option formulation.

To define the Gittins index, let us first define two recursive functions ϕ and Φ, as per [KO03]. For any non-root node X ∈ X, label b ∈ Σ, and value 0 ≤ m ≤ r̄/(1−β),

    ϕ_{X,b}(m) = max{ m, Σ_{v∈Σ} P(X = v | Pa(X) = b) · [ r(X, v) + β · Φ_{Ch(X),v}(m) ] }    (1)

If X is the root, we define ϕ_{X,∅}(m) = max{ m, Σ_{v∈Σ} P(X = v) · [ r(X, v) + β · Φ_{Ch(X),v}(m) ] }. For any subset of nodes S ⊆ X, label b ∈ Σ, and value 0 ≤ m ≤ r̄/(1−β),

    Φ_{S,b}(m) = r̄/(1−β) − ∫_{m}^{r̄/(1−β)} Π_{Y∈S} (∂ϕ_{Y,b}(k)/∂k) dk    if S ≠ ∅
    Φ_{S,b}(m) = m                                                        if S = ∅    (2)

We will only invoke Eq. (2) with S = Ch(X) for some node X. The interpretation here is that ϕ_{X,b} represents the total expected value of the subtree rooted at X when Pa(X) = b, while accounting for the option of taking the retirement reward m at each step, and Φ_{Ch(X),b} represents the value of the collection of subtrees rooted at the children of X. For example, for X1 in Fig. 2, this refers to the subtrees rooted at X2, X3, and X4.
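As a concrete special case of Eqs. (1) and (2): for a leaf X we have Ch(X) = ∅, so Φ_{∅,v}(m) = m and Eq. (1) collapses to ϕ_{X,b}(m) = max{m, c + βm}, where c = Σ_{v∈Σ} P(X = v | Pa(X) = b) · r(X, v) is the expected immediate reward. The smallest m at which the two branches of the max tie, i.e., ϕ_{X,b}(m) = m, is c/(1−β), clipped at 0. A minimal sketch of this closed form (the helper name is ours, not the paper's code):

```python
def leaf_gittins_index(probs, rewards, beta):
    """Closed-form index of a leaf node: phi(m) = max(m, c + beta*m), where c is
    the expected immediate reward, ties with m exactly when m >= c / (1 - beta);
    the index is that threshold, clipped at the domain boundary 0."""
    c = sum(p * r for p, r in zip(probs, rewards))
    return max(0.0, c / (1.0 - beta))

# Binary labels with reward 1 for a positive label: index = P(positive) / (1 - beta).
g_leaf = leaf_gittins_index(probs=[0.7, 0.3], rewards=[0.0, 1.0], beta=0.9)
```

For the disease testing reward (1 iff positive), this recovers the intuitive ranking of leaves by their posterior probability of being positive, scaled by the horizon factor 1/(1−β).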
Using this notation, the Gittins index g(X, b) for node X given Pa(X) = b is then defined as

    g(X, b) = min{ m ∈ [0, r̄/(1−β)] : ϕ_{X,b}(m) = m }    (3)

That is, g(X, b) represents the "fair value" of the subtree rooted at X, given that its parent has label b ∈ Σ. The parent's label matters because the posterior distribution over X's label is updated conditionally based on the value of b. Theorem 3, which follows from Theorem 1 of [KO03] with appropriate changes in notation, establishes that the Gittins policy is optimal when G is a forest.

Definition 2 (The Gittins policy; Algorithm 1). The Gittins policy pre-computes all g(X, b) values given G and P, then repeatedly acts on the node in the frontier with the largest index value.

Theorem 3. Gittins is optimal for the AFEG problem when G is a forest.

3.2 Properties of discrete branching bandits

We prove that the recursive functions ϕ_{X,b}(m) and Φ_{Ch(X),b}(m) are piecewise linear in m. This facilitates an efficient implementation of Gittins, which we give in Algorithm 1.

Lemma 4. For any node X ∈ X and label b ∈ Σ, ϕ_{X,b}(m) is a non-decreasing piecewise linear function over m ∈ [0, r̄/(1−β)].

The proof intuition behind Lemma 4 is to perform induction from the leaves
to the root, while recalling that piecewise linear functions on a fixed domain are closed under addition, multiplication, differentiation, and integration. This will later allow us to bound the running time for computing our Gittins policy in Theorem 6, as the number of pieces changes additively when we combine piecewise linear functions, e.g., #pieces(f1 + f2) ≤ #pieces(f1) + #pieces(f2). We also prove additional properties of the Φ function which may be of independent interest to researchers of the Gittins index.

Proposition 5. For any non-leaf node X and label b:
• Φ_{Ch(X),b}(m) = Φ_{Ch(X),b}(0) + h(m) for some piecewise linear h.
• Φ_{Ch(X),b}(m) = m if and only if m ≥ max_{Y∈Ch(X)} g(Y, b).

The first term in the expression Φ_{Ch(X),b}(m) = Φ_{Ch(X),b}(0) + h_{Ch(X),b}(m) can be interpreted as the original maximized reward for the descendants of X, while the second term is the additional reward afforded by the retirement option m. Meanwhile, we observe that the minimum m such that Φ_{Ch(X),b}(m) = m is essentially the largest Gittins index among the nodes in Ch(X). Full proofs of Lemma 4 and Proposition 5 are deferred to Appendix B.1.

3.3 Extension to general graphs

On general graphs where G is not a forest, dependencies across frontier nodes may violate the branching bandit assumptions. Nevertheless, we heuristically apply Gittins by treating each connected component as a tree (e.g., a minimum spanning tree) and ignoring edges that violate the acyclicity requirement. This restricts the frontier to a subtree at each step. Algorithm 1 provides pseudocode describing this, where Line 5 is the heuristic that drops edges when G is not a forest.
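The tree restriction of Line 5 can, for instance, be a BFS tree, which keeps exactly the edges first used to reach each node and drops the cycle-closing ones. A minimal sketch (the adjacency-list format and helper name are ours):

```python
from collections import deque

def bfs_tree(adj, root):
    """BFS tree of the connected component containing `root`:
    returns child lists, silently dropping non-tree (cycle-closing) edges."""
    children, seen, queue = {}, {root}, deque([root])
    while queue:
        u = queue.popleft()
        children[u] = []
        for v in adj.get(u, []):
            if v not in seen:        # first time v is reached: keep edge (u, v)
                seen.add(v)
                children[u].append(v)
                queue.append(v)
    return children

# Triangle {1, 2, 3} plus a pendant node 4: the edge (2, 3) closes a cycle and is dropped.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
tree = bfs_tree(adj, 1)
```

Any other spanning tree of the component (e.g., a minimum spanning tree) could be substituted here without affecting the index computation that follows.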
As any tree restriction works here, one natural option is to compute the breadth-first search (BFS) tree. The runtime complexity of Algorithm 1 is given in Theorem 6.

Algorithm 1 Setting up the Gittins policy
1:  Input: Graph G = (X, E), joint distribution P over X that is Markov to G
2:  Output: Gittins index value g(X, b) for all X ∈ X and b ∈ Σ
3:  for each connected component H of G do
4:      Compute root node X_root from priority rule        ▷ e.g., X_root = argmax_{Xi ∈ V(H)} P(Xi = 1)
5:      Compute any tree of H rooted at X_root             ▷ Heuristic for non-trees
6:      for node Xi ∈ V(H) from leaf towards root, and label b ∈ Σ do        ▷ For X_root, set b = ∅
7:          if Xi is a leaf then
8:              Compute ϕ_{Xi,b}(m) = max{ m, βm + Σ_{v∈Σ} P(Xi = v | Pa(Xi) = b) · r(Xi, v) }
9:          else
10:             Compute Φ_{Ch(Xi),b}(m) via Eq. (2), then compute ϕ_{Xi,b}(m) via Eq. (1)
11:         end if
12:         Store ϕ_{Xi,b} for future computation
13:     end for
14: end for
15: Compute {g(X, b)}_{X∈X, b∈Σ} according to Eq. (3), using previously stored ϕ values
16: return {g(X, b)}_{X∈X, b∈Σ}        ▷ Reminder: for root nodes, we set b = ∅

At first glance, one may think that the running time of Algorithm 1 would depend exponentially on the maximum depth d of the induced rooted trees, even if all operations involving piecewise linear functions can be done in O(1) time. This is because the definitions of Eq. (1) and Eq. (2) tell us that
the function ϕ_{X,b} depends on all ϕ_{Z,v} functions, for all descendants Z of X and labels v ∈ Σ. However, our next result shows that we can in fact obtain a polynomial runtime that is independent of the maximum depth d of the induced rooted trees; this is why any tree restriction works on Line 5.

Theorem 6. Given graph G = (X, E) and oracle access to joint distribution P, the Gittins indices can be computed in O(n² · |Σ|²) time while using O(n · |Σ|²) oracle calls to P via Algorithm 1. The space complexity is O(n² · |Σ|) for storing O(n · |Σ|) intermediate piecewise linear functions.

The proof outline of Theorem 6 is as follows: we first use induction (from the leaves towards the root) to argue that, for any node X, the set of functions {ϕ_{X,b}}_{b∈Σ} can be computed using O(|Σ|²) oracle calls to P and O(|Σ| · max{1, |Ch(X)|}) operations on piecewise linear functions, as long as we store the intermediate functions {ϕ_{Y,b}}_{Y∈Ch(X), b∈Σ} along the way. Then, we argue that the maximum time to perform any piecewise linear function operation in Algorithm 1 is upper bounded by O(n · |Σ|). Our claim follows by summing over all nodes X and using this upper bound on the cost of operating on piecewise linear functions. See Appendix B.1 for the full proof.

4 Experiments

We benchmark our proposed Gittins policy against several natural baselines — Random, Greedy, DQN, and Optimal — on both synthetic and real-world graphs to evaluate performance on AFEG. To reflect the network-based disease testing application discussed in Section 1.1, we consider binary node labels and define the immediate reward to be 1 if and only if the revealed label is positive. As such, it is natural to define the first node in every connected component as the node with the highest marginal probability of being positive amongst all nodes in that connected component.

Benchmarked policies. Given a problem instance (G, P, β), a state in AFEG consists of the current set of frontier nodes and the revealed labels of previously tested nodes.
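Such a state (frontier plus revealed labels) can be sketched as an immutable record; the names below are ours, not from the paper's code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AFEGState:
    """AFEG state: the current frontier and all labels revealed so far."""
    frontier: frozenset
    revealed: tuple  # sorted (node, label) pairs, hashable for tabular lookups

def step(state, node, label, children):
    """Act on a frontier node: record its label and activate its children."""
    assert node in state.frontier
    new_frontier = (state.frontier - {node}) | set(children.get(node, []))
    return AFEGState(frozenset(new_frontier),
                     tuple(sorted(state.revealed + ((node, label),))))

s0 = AFEGState(frozenset({"X1"}), ())
s1 = step(s0, "X1", 1, {"X1": ["X2", "X3"]})
```

Because a state records both the frontier and every revealed label, the number of distinct states grows exponentially in n, which is why brute-force dynamic programming (the Optimal baseline below) is tractable only on small graphs.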
• Random: Selects a random node from the frontier without using any state information.
• Greedy: Selects the frontier node with the highest posterior probability of being positive, conditioned on the currently observed labels.
• DQN: Implements a deep Q-network baseline [MKS+15], using the NNConv architecture from PyTorch Geometric [FL19]. This model applies a message-passing GNN with edge-conditioned weights [GSR+17] to capture graph structure and node covariates.
• Optimal: Computes the action that maximizes the expected total discounted reward for each possible state via brute-force dynamic programming. This method is tractable only on small graphs due to the combinatorial explosion of the state space.
• Gittins: Our proposed method, described in Algorithm 1, which is provably optimal when the underlying graph G is a forest. We use breadth-first search (BFS) trees in Line 5.

Since AFEG and these policies are agnostic to how the joint distribution P is defined, we defer the details of how P is defined to Appendix C. That is, one may read the experimental section assuming access to some P oracle. For reproducibility, all code is provided at https://github.com/cxjdavin/adaptive-frontier-exploration-on-graphs.

4.1 Experiment 1: On tree inputs, Gittins works well even in finite
horizon settings

We evaluated the policies against a family of randomly generated synthetic trees on n ∈ {10, 50, 100} nodes across various discount factors β ∈ {0.5, 0.7, 0.9}. We only run Optimal for the small n = 10 instances, where the plots for Optimal and Gittins exactly overlap as expected. For n = 10, we exactly compute the expectation by weighting the accumulated discounted reward of each of the 2^10 realizations by its probability. For n ∈ {50, 100}, we compute Monte Carlo estimates by sampling 200 random realizations from P. For each setting of (n, β), we generated 10 random trees and plot the mean (± std. err.) of the expected accumulated discounted rewards over time for each policy. Fig. 3 shows a subset of our results for β = 0.9; see Appendix C for the full experimental results.

Interestingly, while Gittins is only proven to be optimal with respect to the expected accumulated discounted reward, i.e., the rightmost slice of each plot, we see that it consistently outperforms all other baselines at every fixed timestep. For example, if we only have a fixed budget to act on half the nodes (visualized by drawing a vertical line at the midpoint of the experiment), the Gittins plot lies above the others.

4.2 Experiment 2: Gittins performance degrades gracefully for non-trees

Here, we investigate the degradation of Gittins as a heuristic as the input graph deviates from a tree in a controlled manner. Using the same setup as Section 4.1, we see that the attained expected accumulated discounted reward of Gittins degrades relative to Greedy as we add more edges to a tree, as expected. Note that in the rightmost graph of Fig. 4, 10 additional edges is more than 20% additional edges compared to the original n − 1 = 49 tree edges.

Figure 3: Subset of synthetic tree input results. Gittins consistently beats other baselines at every fixed budget, e.g., the vertical line indicates performance when only half the nodes can be acted upon.
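The exact evaluation used for n = 10 (weighting each realization's accumulated discounted reward by its probability) can be sketched as follows; the two-node instance, the independence assumption, and the fixed test order are hypothetical simplifications, not the paper's setup:

```python
from itertools import product

def discounted_reward(labels_in_order, beta):
    """Accumulated discounted reward: 1 per positive label, discounted by action time."""
    return sum((beta ** t) * int(lab == 1) for t, lab in enumerate(labels_in_order))

def exact_expectation(policy, prob_model, n, beta):
    """Weight every one of the 2^n label realizations by its probability (tiny n only)."""
    total = 0.0
    for realization in product([0, 1], repeat=n):
        order = policy(realization)  # labels in the order the policy reveals them
        total += prob_model(realization) * discounted_reward(order, beta)
    return total

# Hypothetical 2-node example: independent labels, each positive w.p. 0.5,
# and a fixed policy that reveals node 0 then node 1.
prob_model = lambda r: 0.25
policy = lambda r: [r[0], r[1]]
exact = exact_expectation(policy, prob_model, 2, beta=0.9)  # 0.5 + 0.9 * 0.5 = 0.95
```

For larger n, the same `discounted_reward` accumulator is averaged over sampled realizations instead of enumerated ones, which is the Monte Carlo estimate used for n ∈ {50, 100}.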
Figure 4: Synthetic experiment: the initial performance gains of Gittins over Greedy and DQN diminish as we progressively add edges to 10 random 50-node trees with discount factor β = 0.9.

4.3 Experiment 3: Real-world sexual interaction graphs

Beyond experiments on synthetic graphs, we also evaluated the policies on real-world sexual interaction networks derived from a de-identified, public-use dataset released by ICPSR [MR11]⁴. This dataset was originally collected to examine how partnership networks influence the transmission of sexually transmitted and blood-borne infections. It includes reported sexual edges, covariates of each individual (e.g., whether the individual is unemployed or homeless, etc.), and reported statuses for 5 sexually transmitted diseases: Gonorrhea, Chlamydia, Syphilis, HIV, and Hepatitis. For each disease, we fit parameters as described in Appendix A and use these parameters to define P for each AFEG instance on these real-world sexual interaction graphs. As the graphs are large, we do not run Optimal, and we use 200 Monte Carlo simulations to estimate the expected performance of each policy. Since the role of the discount factor β ∈ (0, 1) in AFEG is to encourage early identification of infected individuals, any value in that range is technically valid.
To better reflect practical scenarios where timely detection is important across the entire population, we use β = 0.99 in our experiments and report results under both discounted and undiscounted objectives. In Appendix C, we explain in further detail how we pre-process the dataset and produce the joint distributions P for each graph, and provide our full experimental results.

In Fig. 5, we present representative results from our experiments. For each of three diseases, we selected random connected components from the disease dataset until the combined subgraph contained at least 300 nodes. Across all cases, Gittins consistently outperforms other baselines, even when the input is not a forest. For example, when limited to testing only half the population in the HIV experiment, Gittins identifies nearly all infected individuals whereas other baselines detect only about 80%, in expectation. In terms of running time⁵, DQN incurs significant training overhead due to fitting a graph neural network for each (G, P, β) instance, while Greedy is computationally expensive during rollout, requiring Σ_{i=1}^{n} (n − i + 1) ∈ O(n²) calls to the P oracle per Monte Carlo sample. In contrast, Gittins is efficient in both policy training (index computation) and rollout (selecting the frontier node with the highest index), making it highly practical for real-world instances.

⁴This de-identified, public-use dataset is publicly available for download at https://www.icpsr.umich.edu/web/ICPSR/studies/22140 upon agreeing to ICPSR's terms of use. IRB is not required; see Appendix C.
⁵All experiments were performed on a personal laptop (Apple Macbook 2024, M4 chip, 16GB memory).

Time (mean ± std. err.) to the nearest second:

Hepatitis:
Policy  | training | one MC rollout
Random  | 0        | 0 ± 0
Greedy  | 0        | 80 ± 0
DQN     | 2590     | 3 ± 0
Gittins | 30       | 0 ± 0

HIV:
Policy  | training | one MC rollout
Random  | 0        | 0 ± 0
Greedy  | 0        | 272 ± 0
DQN     | 3035     | 3 ± 0
Gittins | 29       | 0 ± 0

Chlamydia:
Policy  | training | one MC rollout
Random  | 0        | 0 ± 0
Greedy  | 0        | 598 ± 1
DQN     | 2117     | 2 ± 0
Gittins | 9        | 0 ± 0

Figure 5: Experimental results for Hepatitis, HIV, and Chlamydia. See Table 1 for subgraph statistics.

5 Conclusion and discussion

We introduced and studied the adaptive frontier exploration on graphs problem (Definition 1), a framework for sequential decision-making with label-dependent rewards under a frontier exploration constraint. Our Gittins index-based policy (Algorithm 1) is provably optimal on trees, runs in polynomial time, and demonstrates strong empirical performance on general graphs.

Broader impact. A central motivation of this work is the urgent need for resource-efficient strategies in global public health. In the face of constrained resources and diminishing funding for disease control programs [UNA25], optimizing the allocation of testing efforts is increasingly critical. Our framework enables targeted, adaptive exploration of interaction networks, and is particularly well-suited to settings where prior data can inform transmission structure through a learned distribution P, possibly handcrafted by domain experts. This balance between data-driven structure and real-time adaptivity makes the AFEG framework a compelling tool for improving public health decision-making.

Limitations. While our framework makes several modeling assumptions to enable tractable inference and principled decision making, these also define the
scope within which our results apply. Each of the extensions below presents a well-motivated and technically rich research challenge building on the foundation we establish.

Table 1: Summary statistics of subsampled real-world sexual interaction graphs from [MR11]. CC stands for connected component, Max. depth refers to the maximum BFS tree depth for Gittins, and approximate treewidth is obtained using networkx's treewidth_min_fill_in.

Disease   | # Nodes | # Edges | Forest? | Diameter | # CC | Max. depth | Apx. treewidth
Hepatitis | 304     | 404     | ✗       | 24       | 1    | 20         | 7
HIV       | 302     | 351     | ✗       | 13       | 65   | 10         | 8
Chlamydia | 300     | 191     | ✓       | 8        | 109  | 5          | 1

First, we assume that the interaction graph G is known and fixed. This is a reasonable assumption in many structured public health applications, such as contact tracing or intervention planning, where the network is elicited or constructed from prior data. However, generalizing our methods to handle uncertain or dynamically evolving graphs is a promising direction for future work.

Second, our paper is focused on P oracles. One way to model this is via MRFs: in Appendix A.1, we discuss how to model network-based disease testing using pairwise Markov random fields with shared parameters, which provide a flexible yet interpretable model class for encoding local dependencies in infection status. A more complicated model could be designed and used based on further inputs from domain experts. One can also consider relaxing to Gaussian random fields [ZLG03], as per the literature on active search on graphs.

Third, our theoretical guarantees for the Gittins policy are currently limited to tree-structured graphs; nonetheless, our empirical results demonstrate that it performs competitively even on graphs with low to moderate treewidth.
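The approximate treewidth column of Table 1 comes from networkx's min-fill-in heuristic; a minimal usage sketch (the 5-node path is our toy input, and any tree has treewidth 1):

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in

# Min-fill-in heuristic: returns an upper bound on the treewidth
# together with a corresponding tree decomposition.
G = nx.path_graph(5)  # a 5-node path, i.e., a tree
width, decomposition = treewidth_min_fill_in(G)  # trees have treewidth 1
```

Since exact treewidth computation is NP-hard, such heuristic upper bounds are the standard way to verify that a real-world graph is sparse enough for exact MRF inference.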
Finally, while exact inference is tractable on sparse graphs (as motivated by real-world sexual networks), scaling to larger or denser networks may require approximate inference or amortized learning approaches.

Fairness considerations. A notable advantage of our AFEG framework is its flexibility to accommodate fairness constraints or objectives in sequential decision making. Since our model maintains posterior beliefs over individual infection risks via a probabilistic graphical model, fairness-aware modifications can be naturally incorporated at the policy level. For example, one can enforce demographic parity by requiring individuals from different subpopulations to have equal testing probabilities over time, or impose group-specific constraints on exposure or false negative rates. More generally, fairness can be encoded through soft constraints or regularization terms in the policy objective, or via hard constraints within the action-selection mechanism. Notably, fairness interventions can also be incorporated directly through the reward function. To prioritize historically underserved groups, one could upweight successful identifications among protected populations — for instance, by defining r(X, b) = b · α · I_protected for some α > 1. Since Gittins policies depend only on the reward structure and not on group identity per se, such node-dependent reward shaping modifications preserve optimality guarantees on trees and maintain empirical performance on general graphs.
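The group-weighted reward just described can be written directly as a function; in this sketch, the protected-group flag and the value α = 2 are modeller inputs we assume for illustration, not values from the paper:

```python
def shaped_reward(label, protected, alpha=2.0):
    """Fairness-aware reward shaping: a positive identification (label == 1)
    in the protected group is up-weighted by alpha > 1; other positive
    identifications keep the base reward of 1, and negatives earn 0."""
    base = float(label == 1)  # base reward: 1 iff the revealed label is positive
    return base * (alpha if protected else 1.0)

rewards = [shaped_reward(1, True), shaped_reward(1, False), shaped_reward(0, True)]
```

Because this changes only r(X, v) and not the graph or the distribution P, Algorithm 1 can consume the shaped reward unchanged.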
Furthermore, our Bayesian formulation allows dynamic reweighting or calibration as more data is revealed, enabling adaptive policies that balance efficiency and equity. Exploring how to systematically integrate such fairness interventions into frontier-constrained graph exploration is a promising direction for future work, particularly in public health settings where equitable resource allocation is essential.

Acknowledgements

The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the WHO. Davin would like to thank Bryan Wilder, Amulya Yadav, and Chun Kai Ling for their thought-provoking technical discussions, and Eric Rice, Geoff Garnett, Samuel R. Friedman, and Ashley Buchanan for generously sharing their domain expertise on HIV testing and transmission.

References

[ADGR+19] Lucie Abeler-Dörner, Mary K. Grabowski, Andrew Rambaut, Deenan Pillay, and Christophe Fraser. PANGEA-HIV 2: Phylogenetics And Networks for Generalised Epidemics in Africa. Current Opinion in HIV and AIDS, 14(3):173–180, 2019.

[AKN06] Pieter Abbeel, Daphne Koller, and Andrew Y. Ng. Learning Factor Graphs in Polynomial Time & Sample Complexity. Journal of Machine Learning Research (JMLR), 7:1743–1788, 2006.

[Bes75] Julian Besag. Statistical Analysis of Non-Lattice Data. Journal of the Royal Statistical Society Series D: The Statistician, 24(3):179–195, 1975.

[BMAK+21] Basmattee Boodram, Mary Ellen Mackesy-Amiti, Aditya Khanna, Bryan Brickman, Harel Dahari, and Jonathan Ozik. A meta-analysis of 20 years of data on people who inject drugs in metropolitan Chicago to inform computational modeling. medRxiv preprint, 2021.

[BPP+18] Benjamin R Bavinton, Angie N Pinto, Nittaya Phanuphak, Beatriz Grinsztejn, Garrett P Prestage, Iryna B Zablotska-Manos, Fengyi Jin, Christopher K Fairley, Richard Moore, Norman Roth, et al.
Viral suppression and HIV transmission in serodiscordant male couples: an international, prospective, observational, cohort study. The Lancet HIV, 5(8):e438–e447, 2018.

[CCM+11] Myron S. Cohen, Ying Q. Chen, Marybeth McCauley, Theresa Gamble, Mina C. Hosseinipour, Nagalingeswaran Kumarasamy, James G. Hakim, Johnstone Kumwenda, Beatriz Grinsztejn, Jose H. S. Pilotto, et al. Prevention of HIV-1 Infection with Early Antiretroviral Therapy. New England Journal of Medicine, 365(6):493–505, 2011.

[Cli90] Peter Clifford. Markov Random Fields in Statistics. Disorder in Physical Systems: A Volume in Honour of John M. Hammersley, pages 19–32, 1990.

[CLJ+24] Annabelle Choong, Yi Ming Lyu, Cheryl C. Johnson, Rachel Baggaley, Magdalena Barr-DiChiara, Muhammad S. Jamil, Nandi L. Siegfried, Christopher K. Fairley, Eric P. F. Chow, Virginia Macdonald, and Jason Ong. Social network-based approaches to HIV testing: a systematic review and meta-analysis. Journal of the International AIDS Society (JIAS), 27(9):e26353, 2024.

[CM14] Jhelum Chakravorty and Aditya Mahajan. Multi-Armed Bandits, Gittins Index, and Its Calculation. Methods and Applications of Statistics in Clinical Trials: Planning, Analysis, and Inferential Methods, 2:416–435, 2014.

[Edw19] James Edwards. Practical Calculation of Gittins Indices for Multi-armed Bandits. arXiv preprint arXiv:1909.05075, 2019.

[FHA+98] Homayoon Farzadegan, Donald R Hoover, Jacqueline Astemborski, Cynthia M Lyles, Joseph B Margolick, Richard B Markham, Thomas C Quinn, and David Vlahov. Sex differences in HIV-1 viral load and progression to AIDS. The Lancet, 352(9139):1510–1514, 1998.

[FL19] Matthias Fey and Jan E. Lenssen. Fast Graph Representation Learning with PyTorch Geometric. In ICLR
Workshop on Representation Learning on Graphs and Manifolds, 2019.

[GGW11] John Gittins, Kevin Glazebrook, and Richard Weber. Multi-armed Bandit Allocation Indices. John Wiley & Sons, 2011.

[Git74] John Gittins. A dynamic allocation index for the sequential design of experiments. Progress in Statistics, pages 241–266, 1974.

[Git79] John C. Gittins. Bandit Processes and Dynamic Allocation Indices. Journal of the Royal Statistical Society Series B: Statistical Methodology, 41(2):148–164, 1979.

[GK11] Daniel Golovin and Andreas Krause. Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization. Journal of Artificial Intelligence Research (JAIR), 42:427–486, 2011.

[GKX+12] Roman Garnett, Yamuna Krishnamurthy, Xuehan Xiong, Jeff Schneider, and Richard Mann. Bayesian Optimal Active Search and Surveying. In International Conference on Machine Learning (ICML), pages 843–850, 2012.

[GS09] Sharad Goel and Matthew J. Salganik. Respondent-driven sampling as Markov chain Monte Carlo. Statistics in Medicine, 28(17):2202–2229, 2009.

[GSR+17] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural Message Passing for Quantum Chemistry. In International Conference on Machine Learning (ICML), pages 1263–1272, 2017.

[HC71] John M. Hammersley and Peter Clifford. Markov fields on finite graphs and lattices. Unpublished manuscript, 46, 1971.

[Hec97] Douglas D. Heckathorn. Respondent-Driven Sampling: A New Approach to the Study of Hidden Populations. Social Problems, 44(2):174–199, 1997.

[Jay57] Edwin T. Jaynes. Information Theory and Statistical Mechanics. Physical Review, 106(4):620, 1957.

[JMA+18] Shali Jiang, Gustavo Malkomes, Matthew Abbott, Benjamin Moseley, and Roman Garnett. Efficient nonmyopic batch active search. In Advances in Neural Information Processing Systems (NeurIPS), pages 1099–1109, 2018.
[JMC+17] Shali Jiang, Gustavo Malkomes, Geoff Converse, Alyssa Shofner, Benjamin Moseley, and Roman Garnett. Efficient Nonmyopic Active Search. In International Conference on Machine Learning (ICML), pages 1714–1723, 2017.
[JPC+19] Makhahliso Jubilee, Faith Jiyeong Park, Knowledge Chipango, Kenoakae Pule, Albert Machinda, and Noah Taruberekera. HIV index testing to improve HIV positivity rate and linkage to care and treatment of sexual partners, adolescents and children of PLHIV in Lesotho. PLoS One, 14(3):e0212762, 2019.
[JSK+17] David Juher, Joan Saldaña, Robert Kohn, Kyle Bernstein, and Caterina Scoglio. Network-Centric Interventions to Contain the Syphilis Epidemic in San Francisco. Scientific Reports, 7(1):6464, 2017.
[KF09] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[KFH06] Don Klinkenberg, Christophe Fraser, and Hans Heesterbeek. The Effectiveness of Contact Tracing in Emerging Epidemics. PLoS One, 1(1):e12, 2006.
[KK14] Matan Keidar and Gal A. Kaminka. Efficient frontier detection for robot exploration. The International Journal of Robotics Research (IJRR), 33(2):215–236, 2014.
[KKT03] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the Spread of Influence through a Social Network. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146, 2003.
[KO03] Godfrey Keller and Alison Oldale. Branching bandits: a sequential search process with correlated pay-offs. Journal of Economic Theory, 113(2):302–315, 2003.
[KVJ87] Michael N. Katehakis and Arthur F. Veinott Jr. The Multi-Armed Bandit Problem: Decomposition and Computation. Mathematics of Operations Research, 12(2):262–268, 1987.
[LCH+25] Chun Kai Ling, Jakub Cerny,
Chin Hui Han, Christian Kroer, and Garud Iyengar. How Deep Is Your Defense-in-Depth? Hardening Cybersecurity Network Control Against Adaptive Attackers. AAAI 2025 Workshop on Artificial Intelligence for Cyber Security (AICS), 2025.
[MGDGO25] Heather Mattie, Ravi Goyal, Victor De Gruttola, and Jukka-Pekka Onnela. A Review of Network Models for HIV Spread. Journal of Acquired Immune Deficiency Syndromes (JAIDS), pages 309–320, 2025.
[MKLL12] Benedict C. May, Nathan Korda, Anthony Lee, and David S. Leslie. Optimistic Bayesian Sampling in Contextual-Bandit Problems. Journal of Machine Learning Research (JMLR), 13(1):2069–2106, 2012.
[MKS+15] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[MR11] Martina Morris and Richard Rothenberg. HIV Transmission Network Metastudy Project: An Archive of Data From Eight Network Studies, 1988–2001 (ICPSR 22140). ICPSR Data Holdings, 2011. Available at https://www.icpsr.umich.edu/web/ICPSR/studies/22140/summary.
[MWBDM+25] Aliza Monroe-Wise, Magdalena Barr-DiChiara, Antons Mozalevskis, Busisiwe Msimanga, Maeve Brito de Mello, Kafui Senya, Niklas Luhmann, Cheryl Case Johnson, and Rachel Baggaley. Can network-based testing services have an impact beyond testing for HIV? Sexual Health, 22(2), 2025.
[Nat] United Nations. Goal 3: Good Health and Well-being – Targets and Indicators. Available at https://sdgs.un.org/goals/goal3#targets_and_indicators.
[oAD19] National Institute of Allergy and Infectious Diseases. HIV Undetectable=Untransmittable (U=U), or Treatment as Prevention, 2019. Available at https://www.niaid.nih.gov/diseases-conditions/treatment-prevention.
[OG20] Rose McKeon Olson and Robert Goldstein. U=U: Ending stigma and empowering people living with HIV, 2020.
Available at https://www.health.harvard.edu/blog/uu-ending-stigma-and-empowering-people-living-with-hiv-2020042219583.
[Org24a] World Health Organization. Consolidated guidelines on differentiated HIV testing services. World Health Organization, 2024. Available at https://www.who.int/publications/i/item/9789240096394.
[Org24b] World Health Organization. HIV and AIDS: Fact Sheet, 2024. Available at https://www.who.int/news-room/fact-sheets/detail/hiv-aids.
[Put14] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.
[RCB+16] Alison J. Rodger, Valentina Cambiano, Tina Bruun, Pietro Vernazza, Simon Collins, Jan van Lunzen, Giulio Maria Corbelli, Vicente Estrada, et al. Sexual Activity Without Condoms and Risk of HIV Transmission in Serodifferent Couples When the HIV-Positive Partner Is Using Suppressive Antiretroviral Therapy. JAMA, 316(2):171–181, 2016.
[RN21] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Pearson Education, 4th edition, 2021.
[Rot09] Richard Rothenberg. HIV transmission networks. Current Opinion in HIV and AIDS, 4(4):260–265, 2009.
[Sco10] Steven L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639–658, 2010.
[Scu18] Eileen P. Scully. Sex Differences in HIV Infection. Current HIV/AIDS Reports, 15:136–146, 2018.
[SMSM99] Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy Gradient Methods for Reinforcement Learning with Function Approximation. In Advances in Neural Information Processing Systems (NeurIPS), pages 1057–1063, 1999.
[Tsi94] John N. Tsitsiklis. A Short Proof of the Gittins Index Theorem. The Annals of Applied Probability, pages 194–199, 1994.
[UNA22] UNAIDS. Political Declaration on HIV and AIDS: Summary of 10 Targets, 2022. Available at https://www.unaids.org/en/resources/documents/2022/political-declaration_summary-10-targets.
[UNA24] UNAIDS. 2024 global AIDS report — The Urgency of
Now: AIDS at a Crossroads, 2024. Available at https://www.unaids.org/en/resources/documents/2024/global-aids-update-2024.
[UNA25] UNAIDS. Impact of US funding cuts on the global AIDS response – 28 March 2025 update, 2025. Accessed: 28 April 2025.
[WD92] Christopher J.C.H. Watkins and Peter Dayan. Q-Learning. Machine Learning, 8:279–292, 1992.
[Web92] Richard Weber. On the Gittins Index for Multiarmed Bandits. The Annals of Applied Probability, pages 1024–1033, 1992.
[Wei88] Gideon Weiss. Branching Bandit Processes. Probability in the Engineering and Informational Sciences, 2(3):269–278, 1988.
[WGS13] Xuezhi Wang, Roman Garnett, and Jeff Schneider. Active Search on Graphs. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 731–738, 2013.
[Whi80] Peter Whittle. Multi-Armed Bandits and the Gittins Index. Journal of the Royal Statistical Society: Series B (Methodological), 42(2):143–149, 1980.
[WJ08] Martin J. Wainwright and Michael I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends® in Machine Learning, 1(1–2):1–305, 2008.
[WLK+23] Leslie D. Williams, Eunhye Lee, Kathleen Kristensen, Mary Ellen Mackesy-Amiti, and Basmattee Boodram. Community-, Network-, and Individual-level Predictors of Uptake of Medication for Opioid Use Disorder among Young People who Inject Drugs and Their Networks: A Multilevel Analysis. Drug and Alcohol Dependence, 244:109782, 2023.
[WOVH+18] Bryan Wilder, Laura Onasch-Vera, Juliana Hudson, Jose Luna, Nicole Wilson, Robin Petering, Darlene Woo, Milind Tambe, and Eric Rice. End-to-End Influence Maximization in the Field. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 1414–1422, 2018.
[Wu12] Nailong Wu. The Maximum Entropy Method, volume 32. Springer Science & Business Media, 2012.
[XFL+21] Yang Xiang, Kayo Fujimoto, Fang Li, Qing Wang, Natascha Del Vecchio, John Schneider, Degui Zhi, and Cui Tao.
Identifying influential neighbors in social networks and venue affiliations among young MSM: a data science approach to predict HIV infection. AIDS, 35:S65–S73, 2021.
[YWR+17] Amulya Yadav, Bryan Wilder, Eric Rice, Robin Petering, Jaih Craddock, Amanda Yoshioka-Maxwell, Mary Hemler, Laura Onasch-Vera, Milind Tambe, and Darlene Woo. Influence Maximization in the Field: The Arduous Journey from Emerging to Deployed Application. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 150–158, 2017.
[ZLG03] Xiaojin Zhu, John Lafferty, and Zoubin Ghahramani. Combining Active Learning and Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. ICML 2003 Workshop on The Continuum from Labeled to Unlabeled Data, 2003.

A Application to network-based disease testing

In this section, we explore the application of network-based disease testing motivated in Section 1, where the goal is to identify infected individuals given knowledge of their interaction network; see Fig. 1 for an illustration. Here, each node represents an individual with a binary infection status (infected or not), and edges represent sexual interactions. Frontier testing is a natural operational constraint: test outcomes significantly influence beliefs about neighboring individuals, making it efficient to expand testing along the observed frontier. Building upon our notation in Section 2, a joint distribution $P_\theta$ parameterized by $\theta$ is written as $P_\theta(x) = P_\theta(X_1 = x_1, \ldots, X_n = x_n)$ for $x \in \Sigma^n$.

A.1 An MRF-based joint model of infection status

We model the joint distribution over
$n$ individuals' infection statuses using a pairwise MRF defined over the interaction graph $G = (X, E)$, where each node $X_i$ represents an individual with a binary latent variable $X_i \in \{0,1\}$ indicating their HIV status, and each edge $\{X_i, X_j\} \in E$ indicates a reported sexual interaction. Each individual also has associated covariates $c^{(i)} \in \mathbb{R}^d$, and the joint distribution over all statuses $X = (X_1, \ldots, X_n)$ is defined in terms of unary and pairwise potential functions $\phi_i(x_i)$ for each individual $i$, and $\phi_{i,j}(x_i, x_j)$ for each edge $\{i,j\} \in E$, with $1 \le i < j \le n$:

$\phi_i(x_i) = \exp\big( \theta_1^\top f_1(x_i, c^{(i)}) \big)$ and $\phi_{i,j}(x_i, x_j) = \exp\big( \theta_2^\top f_2(x_i, x_j, c^{(i)}, c^{(j)}) \big)$

for some feature mapping functions $f_1: \{0,1\} \times \mathbb{R}^d \to \mathbb{R}^{2+2d}$ and $f_2: \{0,1\}^2 \times \mathbb{R}^{2d} \to \mathbb{R}^{4+5d}$, and parameters $\theta_1 \in \mathbb{R}^{2+2d}$ and $\theta_2 \in \mathbb{R}^{4+5d}$. We adopt the maximum entropy principle [Jay57, WJ08, Wu12] to parameterize these factors, with feature maps $f_1$ and $f_2$ defined as monomials up to quadratic terms of the covariate variables while respecting symmetry:

$f_1(x_i, c^{(i)}) = \big( 1, x_i, c^{(i)}_1, x_i c^{(i)}_1, \ldots, c^{(i)}_d, x_i c^{(i)}_d \big)^\top \in \mathbb{R}^{2+2d}$   (4)

$f_2(x_i, x_j, c^{(i)}, c^{(j)}) = \big( 1, x_i x_j, (1-x_i)x_j + x_i(1-x_j), (1-x_i)(1-x_j), v(x_i, x_j, c^{(i)}_1, c^{(j)}_1), \ldots, v(x_i, x_j, c^{(i)}_d, c^{(j)}_d) \big)^\top \in \mathbb{R}^{4+5d}$   (5)

where $v: \{0,1\}^2 \times \mathbb{R}^2 \to \mathbb{R}^5$ is defined as

$v(a, b, c, d) = \big( c+d, \; ab(c+d), \; a(1-b)c + (1-a)bd, \; (1-a)bc + a(1-b)d, \; (1-a)(1-b)(c+d) \big)$

The joint probability is then:

$P_{\theta_1,\theta_2}(X = x) = \frac{1}{z(\theta_1, \theta_2)} \exp\Big( \sum_{i=1}^n \theta_1^\top f_1(x_i, c^{(i)}) + \sum_{\{i,j\} \in E} \theta_2^\top f_2(x_i, x_j, c^{(i)}, c^{(j)}) \Big)$   (6)

where $z(\theta_1, \theta_2)$ is the partition function. This formulation encodes both individual risk via covariates and dependency via pairwise interactions, capturing correlation in infection status across the network. To reduce model complexity and reflect data limitations, we use parameter sharing, i.e. all unary (resp. pairwise) factors share the same parameters $\theta_1$ (resp. $\theta_2$). Although exact inference in general MRFs is intractable, the sexual contact networks we consider are typically sparse, bipartite, or even tree-structured — as in contact tracing studies — where efficient inference algorithms apply [KF09].
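To make the feature maps of Eq. (4) and Eq. (5) concrete, here is a minimal sketch of $f_1$, $f_2$, and $v$ in Python (our own illustration; the function names are ours):

```python
import numpy as np

def v(a, b, c, d):
    """Pairwise covariate interaction map v: {0,1}^2 x R^2 -> R^5 from the text."""
    return np.array([
        c + d,
        a * b * (c + d),
        a * (1 - b) * c + (1 - a) * b * d,
        (1 - a) * b * c + a * (1 - b) * d,
        (1 - a) * (1 - b) * (c + d),
    ])

def f1(x, cov):
    """Unary feature map f1: {0,1} x R^d -> R^{2+2d}: (1, x, c_1, x*c_1, ..., c_d, x*c_d)."""
    feats = [1.0, float(x)]
    for ck in cov:
        feats.extend([ck, x * ck])
    return np.array(feats)

def f2(xi, xj, ci, cj):
    """Pairwise feature map f2: {0,1}^2 x R^{2d} -> R^{4+5d} from Eq. (5)."""
    feats = [1.0, xi * xj, (1 - xi) * xj + xi * (1 - xj), (1 - xi) * (1 - xj)]
    for k in range(len(ci)):
        feats.extend(v(xi, xj, ci[k], cj[k]))
    return np.array(feats)
```

Note that every component of $f_2$ is invariant under jointly swapping $(x_i, c^{(i)})$ with $(x_j, c^{(j)})$, which is the symmetry requirement mentioned above.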
Therefore, we assume access to an inference oracle, and focus on the primary challenge: adaptive sequential testing under frontier constraints. At each time step $t \in \{1, 2, \ldots, n\}$, we select an untested individual on the frontier to test and observe their HIV status. Recalling the definition of AFEG (Definition 1), we can model the interaction network as $G$, and each test as acting on a node in $G$. The reward function for testing individual $X$ and revealing status $b \in \{0,1\}$ is simply $r(X, b) = b$, and any discount factor $\beta \in (0,1)$ encourages identifying HIV-positive individuals as early as possible. (For notational simplicity, we write $f_2(x_j, x_i, c^{(j)}, c^{(i)})$ to mean $f_2(x_i, x_j, c^{(i)}, c^{(j)})$ for any $i < j$.) Importantly, discounting reflects both practical constraints, such as sudden funding cuts [UNA25], and the clinical importance of early diagnosis, which improves patient outcomes and limits transmission [CCM+11]. See also [RN21] for other natural justifications for using discount factors $\beta$ in modeling long-term policy rewards. Finally, to apply the infinite-horizon framework of AFEG in our finite testing setting, we simply zero out all subsequent rewards once every individual has been tested.

A.2 Learning the distributional parameters from data

To apply our model to a new population with unknown HIV statuses, we must first estimate the parameters of the joint distribution described
in Appendix A.1. We assume access to a historical dataset in which both the covariates and true HIV statuses are known. Classical approaches to MRF parameter learning, such as [AKN06], typically assume access to multiple independent samples drawn from a fixed graphical model. Unfortunately, in our case we only have access to a single observed realization of infection statuses in our past data. This means that the maximum likelihood estimate (MLE) distribution $P$ that describes the dataset is simply the degenerate point distribution that places full probability mass on the single realization. To learn meaningful but non-degenerate transmission probabilities, we consider an intuitive way to model the joint probabilities based on a factor graph induced by the input graph structure. More specifically, we define unary factor potentials for each individual node and pairwise factor potentials for each edge present in the graph, governed by global parameters $\theta_1$ and $\theta_2$ respectively. The hope is that this simple formulation serves as a regularization that allows us to recover meaningful disease-specific parameters, so that we can define joint distributions $P$ on new interaction graphs for the same disease. We adopt a maximum likelihood estimation (MLE) approach to learn the parameters $\theta_1$ and $\theta_2$ with respect to this single realization under the MRF model: $\theta^* = \arg\max_{\theta_1, \theta_2} \log P(x; \theta_1, \theta_2)$. However, exact MLE is intractable in general due to the partition function $z(\theta_1, \theta_2)$, whose computation requires summing over all $2^n$ configurations of node labels. To sidestep this difficulty, we instead optimize the pseudo-likelihood [Bes75], which approximates the joint likelihood by the product of conditional distributions for each node given its neighbors:

$\tilde{P}_{\theta_1,\theta_2}(x) = \prod_{i=1}^n P_{\theta_1,\theta_2}(X_i = x_i \mid x_{-i}) = \prod_{i=1}^n \frac{P_{\theta_1,\theta_2}(x)}{P_{\theta_1,\theta_2}(X_i = 0, x_{-i}) + P_{\theta_1,\theta_2}(X_i = 1, x_{-i})}$   (7)

This objective is tractable and differentiable with respect to $\theta_1$ and $\theta_2$, and can be efficiently optimized using gradient-based methods.
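The pseudo-likelihood objective of Eq. (7) can be evaluated with purely local computations, since every factor not involving node $i$ cancels inside the conditional. A minimal sketch (our own illustration with generic log-potentials, not the paper's released code):

```python
import math
import numpy as np

def log_pseudolikelihood(x, adj, log_phi1, log_phi2):
    """Log of Eq. (7): sum_i log P(X_i = x_i | x_{-i}).

    log_phi1(i, b) and log_phi2(i, j, a, b) are the log unary / pairwise
    potentials. Every factor not touching node i cancels in the conditional,
    so the partition function z(theta_1, theta_2) is never needed."""
    total = 0.0
    for i in range(len(x)):
        # Unnormalized conditional log-scores for X_i = 0 and X_i = 1.
        s = [log_phi1(i, b) + sum(log_phi2(i, j, b, x[j]) for j in adj[i])
             for b in (0, 1)]
        total += s[x[i]] - np.logaddexp(s[0], s[1])
    return total

def alpha(i, x, adj, log_phi1, log_phi2):
    """Coefficient alpha_i = x_i - P(X_i = 1 | x_{-i}) appearing in the
    log-pseudolikelihood gradient (Lemma 7 below), again computed locally:
    the conditional is a logistic function of the local score difference."""
    delta = (log_phi1(i, 1) - log_phi1(i, 0)
             + sum(log_phi2(i, j, 1, x[j]) - log_phi2(i, j, 0, x[j])
                   for j in adj[i]))
    return x[i] - 1.0 / (1.0 + math.exp(-delta))
```

On a small graph this agrees with brute-force enumeration of the conditionals, while never touching the $2^n$-term partition function.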
Once learned, these parameters can be used to define a new factor graph for any unseen population with known covariates and network structure, thereby guiding the adaptive testing policy. In general, a closed-form solution for the maximum pseudolikelihood estimator is unlikely to exist due to the nonlinear dependence of the local conditional distributions on the parameters. However, as the following lemma shows, we can derive closed-form gradients and rely on gradient-based optimization methods to compute parameter estimates for $\theta_1$ and $\theta_2$.

Lemma 7. Let $G = (X, E)$ be a graph over $n$ nodes, where each node $X_i \in X$ has binary label $x_i \in \{0,1\}$, covariates $c^{(i)} \in \mathbb{R}^d$, and neighborhood $N(X_i)$. Define feature maps $f_1(x_i, c^{(i)})$ and $f_2(x_i, x_j, c^{(i)}, c^{(j)})$ as per Eq. (4) and Eq. (5) respectively, shared parameters $\theta_1 \in \mathbb{R}^{2+2d}$ and $\theta_2 \in \mathbb{R}^{4+5d}$, and the joint probability as per Eq. (6). Then, the log-pseudolikelihood gradients are:

$\frac{\partial \log \tilde{P}_{\theta_1,\theta_2}(x)}{\partial \theta_1} = \sum_{i=1}^n \alpha_i \cdot \big( f_1(1, c^{(i)}) - f_1(0, c^{(i)}) \big)$

$\frac{\partial \log \tilde{P}_{\theta_1,\theta_2}(x)}{\partial \theta_2} = \sum_{i=1}^n \alpha_i \cdot \sum_{X_j \in N(X_i)} \big( f_2(1, x_j, c^{(i)}, c^{(j)}) - f_2(0, x_j, c^{(i)}, c^{(j)}) \big)$

where the common coefficient $\alpha_i = x_i - P_{\theta_1,\theta_2}(X_i = 1 \mid x_{-i})$ can be computed efficiently without computing $z(\theta_1, \theta_2)$, for all $i \in [n]$.

The full proof of Lemma 7 is given in Appendix B.2. Note that parameter fitting may not necessarily recover the exact dynamics of the underlying real-world problem. That is, $P_{\hat\theta_1, \hat\theta_2} \ne P$ in general, where $\hat\theta_1$ and $\hat\theta_2$ are the produced parameter estimates under the MRF model defined in Appendix A.1, and $P$ is the true unknown underlying joint distribution (which may even lie outside the model class defined in Appendix A.1). As such, we
also provide an error analysis in Appendix B.3 to bound the loss in attainable discounted accumulated reward of an optimal policy computed on $P_{\hat\theta_1, \hat\theta_2}$ while being executed on $P$.

B Deferred proof details

B.1 Gittins proofs

Lemma 4. For any node $X \in X$ and label $b \in \Sigma$, $\phi_{X,b}(m)$ is a non-decreasing piecewise linear function over $m \in [0, \bar r/(1-\beta)]$.

Proof. We induct on the nodes from the leaves towards the root, recalling the definitions of $\phi$ and $\Phi$ from Eq. (1) and Eq. (2) respectively.

Base case ($X$ is a leaf): For any $m \in [0, \bar r/(1-\beta)]$, we have $\Phi_{Ch(X),b}(m) = \Phi_{\emptyset,b}(m) = m$, and so

$\phi_{X,b}(m) = \max\Big( m, \sum_{v\in\Sigma} P(X=v \mid Pa(X)=b) \cdot \big( r(v) + \beta \cdot \Phi_{Ch(X),v}(m) \big) \Big) = \max\Big( m, \sum_{v\in\Sigma} P(X=v \mid Pa(X)=b) \cdot \big( r(v) + \beta m \big) \Big) = \max\Big( m, \beta m + \sum_{v\in\Sigma} P(X=v \mid Pa(X)=b) \cdot r(v) \Big)$

Since $\sum_{v\in\Sigma} P(X=v \mid Pa(X)=b) \cdot r(v)$ is a constant with respect to $m$, we see that $\phi_{X,b}(m)$ is non-decreasing with respect to $m$. Furthermore, since $m$ and $\beta m + \sum_{v\in\Sigma} P(X=v \mid Pa(X)=b) \cdot r(v)$ are both linear functions in $m$, combining them via the max operator yields a piecewise linear function of $m$ with at most 2 pieces.

Inductive case ($X$ is not a leaf): Set $S = Ch(X)$ in Eq. (2). Consider $\partial \phi_{Y,b}(k)/\partial k$ for an arbitrary $Y \in Ch(X)$ and label $b \in \Sigma$. By the induction hypothesis, $\phi_{Y,b}(k)$ is a piecewise linear function in $k$, so $\partial \phi_{Y,b}(k)/\partial k$ is a piecewise constant function in $k$. Thus, the product $\prod_{Y\in Ch(X)} \partial \phi_{Y,b}(k)/\partial k$ is a piecewise constant function, and the integral $\int_m^{\bar r/(1-\beta)} \prod_{Y\in Ch(X)} \frac{\partial \phi_{Y,b}(k)}{\partial k} \, dk$ is a piecewise linear function of $m$. Therefore, since $\bar r/(1-\beta)$ is a constant with respect to $m$, $\Phi_{Ch(X),b}(m)$ is a piecewise linear function of $m$. Finally, similar to the base case argument above, $\phi_{X,b}(m)$ is non-decreasing and piecewise linear in $m$ because the $P(X=v \mid Pa(X)=b)$ and $r(v)$ terms are constants with respect to $m$. As a remark, if $X$ is the root, we simply replace $P(X=v \mid Pa(X)=b)$ with $P(X=v)$ in the above argument. ∎

Proposition 5. For any non-leaf node $X$ and label $b$:

• $\Phi_{Ch(X),b}(m) = \Phi_{Ch(X),b}(0) + h(m)$ for some piecewise linear $h$.
• $\Phi_{Ch(X),b}(m) = m$ if and only if $m \ge \max_{Y \in Ch(X)} g(Y, b)$.

Proof. For the first item, recall from the proof of Lemma 4 that $\int_m^{\bar r/(1-\beta)} \prod_{Y\in Ch(X)} \frac{\partial \phi_{Y,b}(k)}{\partial k} \, dk$ is a piecewise linear function of $m$. Defining the negative of this integral as $h_{Ch(X),b}(m)$, Eq. (2) yields $\Phi_{S,b}(m) = \frac{\bar r}{1-\beta} + h_{Ch(X),b}(m)$. So, $\Phi_{S,b}(0) = \frac{\bar r}{1-\beta} + h_{Ch(X),b}(0)$, which is a constant with respect to $m$; the first item then follows since $\Phi_{S,b}(m) - \Phi_{S,b}(0) = h_{Ch(X),b}(m) - h_{Ch(X),b}(0)$ is piecewise linear in $m$.

For the second item, we recall the following fact from [Whi80]: $\frac{\partial \phi_{X,b}(k)}{\partial k} = \mathbb{E}[\beta^T]$, where the expectation is taken under the optimal policy when given a fallback option $m$, and $T$ is the optimal stopping time for node $X$. Since $\beta \in (0,1)$, we see that $\mathbb{E}[\beta^T] \le 1$, with equality if and only if $T = 0$, which happens only when $m \ge g(X, b)$. So, the product in Eq. (2) satisfies $\prod_{Y\in Ch(X)} \frac{\partial \phi_{Y,b}(k)}{\partial k} \le 1$, with equality if and only if $k \ge \max_{Y\in Ch(X)} g(Y, b)$. Meanwhile, when the product equals 1, we get $\Phi_{Ch(X),b}(m) = \frac{\bar r}{1-\beta} - \int_m^{\bar r/(1-\beta)} 1 \, dk = m$. Therefore, $\Phi_{Ch(X),b}(m) = m$ if and only if $m \ge \max_{Y\in Ch(X)} g(Y, b)$. ∎

For convenience of proving Theorem 6, let us recall Eq. (1) and Eq. (2) from the main text. To define the Gittins
index, let us first define two recursive functions $\phi$ and $\Phi$, as per [KO03]. For any non-root node $X \in X$, label $b \in \Sigma$, and value $0 \le m \le \frac{\bar r}{1-\beta}$,

$\phi_{X,b}(m) = \max\Big( m, \sum_{v\in\Sigma} P(X = v \mid Pa(X) = b) \cdot \big( r(X, v) + \beta \cdot \Phi_{Ch(X),v}(m) \big) \Big)$   (1)

If $X$ is the root, we define $\phi_{X,\emptyset}(m) = \max\big( m, \sum_{v\in\Sigma} P(X=v) \cdot ( r(X,v) + \beta \cdot \Phi_{Ch(X),v}(m) ) \big)$. For any subset of nodes $S \subseteq X$, label $b \in \Sigma$, and value $0 \le m \le \frac{\bar r}{1-\beta}$,

$\Phi_{S,b}(m) = \begin{cases} \frac{\bar r}{1-\beta} - \int_m^{\bar r/(1-\beta)} \prod_{Y\in S} \frac{\partial \phi_{Y,b}(k)}{\partial k} \, dk & \text{if } S \ne \emptyset \\ m & \text{if } S = \emptyset \end{cases}$   (2)

We will only invoke Eq. (2) with $S = Ch(X)$ for some node $X$.

Theorem 6. Given graph $G = (X, E)$ and oracle access to joint distribution $P$, the Gittins indices can be computed in $O(n^2 \cdot |\Sigma|^2)$ time while using $O(n \cdot |\Sigma|^2)$ oracle calls to $P$ via Algorithm 1. The space complexity is $O(n^2 \cdot |\Sigma|)$ for storing $O(n \cdot |\Sigma|)$ intermediate piecewise linear functions.

Proof. Without loss of generality, we may assume that there is only one rooted tree, since the computation for each connected component is independent. For instance, if there are $k$ components and the $i$-th component has $n_i$ nodes, then the overall complexity is $O(\sum_{i=1}^k n_i^2 \cdot |\Sigma|^2) \subseteq O(n^2 \cdot |\Sigma|^2)$. Throughout this proof, we assume that any conditional probability value can be obtained in constant time via oracle access to $P$. Now, recalling the definitions of Eq. (1) and Eq. (2), and Lemma 4, we know that the computation of $\phi$ and $\Phi$ involves manipulating piecewise linear functions. The proof outline is as follows: we first use induction to argue that for any node $X$, the set of functions $\{\phi_{X,b}\}_{b\in\Sigma}$ can be computed using $O(|\Sigma|^2)$ oracle calls to $P$ and $O(|\Sigma| \cdot \max\{1, |Ch(X)|\})$ operations on piecewise linear functions. Then, we argue that the maximum time to perform any piecewise linear function operation in Algorithm 1 is upper bounded by $O(n \cdot |\Sigma|)$. Our claim follows by summing over all nodes $X$ and using this upper bound on the cost of operating on piecewise linear functions.
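As a concrete reading of the base case of Eq. (1): for a leaf $X$ with expected immediate reward $c = \sum_{v\in\Sigma} P(X=v \mid Pa(X)=b) \cdot r(X, v)$, the recursion reduces to $\phi_{X,b}(m) = \max(m, \beta m + c)$, a two-piece non-decreasing function whose breakpoint $c/(1-\beta)$ is exactly the leaf's Gittins index. A minimal sketch (our own illustration, not the paper's Algorithm 1):

```python
def leaf_phi(m, beta, c):
    """Leaf case of Eq. (1): phi_{X,b}(m) = max(m, beta*m + c), where
    c is the expected immediate reward of testing the leaf.
    Piecewise linear with at most two pieces, non-decreasing in m."""
    return max(m, beta * m + c)

def leaf_gittins(beta, c):
    """Breakpoint of leaf_phi: the smallest fallback value m at which
    stopping (taking m) is already optimal, i.e. the leaf's Gittins index."""
    return c / (1.0 - beta)
```

Below the breakpoint, continuing is strictly better (the $\beta m + c$ piece dominates); above it, $\phi$ is the identity, matching the second item of Proposition 5.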
By inducting on the nodes from the leaves towards the root, we first show that the set of functions $\{\phi_{X,b}\}_{b\in\Sigma}$ can be computed using $O(|\Sigma|^2)$ oracle calls to $P$ and $O(|\Sigma| \cdot \max\{1, |Ch(X)|\})$ operations on piecewise linear functions, as long as we store the intermediate functions $\{\phi_{Y,b}\}_{Y\in Ch(X), b\in\Sigma}$ along the way. As a reminder, in this part of the proof we abstract away the computational cost of manipulating the pieces of piecewise linear functions and focus on counting the number of operations on them; we will later upper bound the computational cost of each such operation.

Base case ($X$ is a leaf): Recall from the proof of Lemma 4 that the function $\phi_{X,b}$ is defined as $\phi_{X,b}(m) = \max\big( m, \beta m + \sum_{v\in\Sigma} P(X=v \mid Pa(X)=b) \cdot r(X, v) \big)$ for any label value $b \in \Sigma$. Since $\sum_{v\in\Sigma} P(X=v \mid Pa(X)=b) \cdot r(X, v)$ can be computed in $O(|\Sigma|)$ oracle calls to $P$, the function $\phi_{X,b}$ can be computed with $O(1)$ further operations on piecewise linear functions. So, the set of functions $\{\phi_{X,b}\}_{b\in\Sigma}$ can be computed in $O(|\Sigma|^2)$ oracle calls to $P$ and $O(|\Sigma|)$ operations on piecewise linear functions.

Inductive case ($X$ is not a leaf, i.e. $Ch(X) \ne \emptyset$): Suppose all children $Y \in Ch(X)$ of $X$ have computed and stored their piecewise linear functions $\phi_{Y,v}$ for all possible values $v \in \Sigma$. Fix an arbitrary label $b \in \Sigma$. To compute $\Phi_{Ch(X),b}$ we need $O(|Ch(X)|)$ operations on piecewise linear functions to
differentiate each $\phi_{Y,b}$ function, and then multiply them together. Integrating the resulting function and subtracting it from the constant $\frac{\bar r}{1-\beta}$ function requires only an additional $O(1)$ operations on piecewise linear functions. Thus, computing all functions $\{\Phi_{Ch(X),b}\}_{b\in\Sigma}$ costs $O(|Ch(X)| \cdot |\Sigma|)$ operations on piecewise linear functions. Fix an arbitrary label $b \in \Sigma$. To compute $\phi_{X,b}$, we need to manipulate the set of functions $\{\Phi_{Ch(X),v}\}_{v\in\Sigma}$. More precisely, we use $O(1)$ operations to scale each $\Phi_{Ch(X),v}$ function by the constant $\beta$, add the constant $r(X, v)$ function to each of them, and multiply again by the constant value $P(X=v \mid Pa(X)=b)$. Note that each $P(X=v \mid Pa(X)=b)$ can be obtained in a single call to the $P$ oracle, i.e. a total of $O(|\Sigma|)$ calls. A further $O(|\Sigma|)$ operations on piecewise linear functions suffice to sum these manipulated functions up and take the maximum against the linear function $m$. So, the entire set of functions $\{\phi_{X,b}\}_{b\in\Sigma}$ can be computed in $O(|\Sigma|^2)$ oracle calls to $P$ and $O(|\Sigma| \cdot |Ch(X)|)$ operations on piecewise linear functions.

Intermediate conclusion: By the inductive argument above, the set of functions $\{\phi_{X,b}\}_{b\in\Sigma}$ can be computed using $O(|\Sigma|^2)$ oracle calls to $P$ and $O(\max\{1, |Ch(X)|\} \cdot |\Sigma|)$ operations on piecewise linear functions for any node $X$. Summing across all nodes, this incurs $O(n \cdot |\Sigma|^2)$ oracle calls to $P$ and $O(|\Sigma| \cdot \sum_{X\in X} \max\{1, |Ch(X)|\})$ operations on piecewise linear functions.

It remains to argue that any operation on piecewise linear functions in Algorithm 1 requires $O(n \cdot |\Sigma|)$ time. First, observe that the computation time for any piecewise function operation depends on the number of pieces. Let us denote the number of pieces in a piecewise function $f$ by $\#\mathrm{pieces}(f)$. Any addition or multiplication of two piecewise linear functions $f$ and $g$ creates a new piecewise linear function $h$ such that $\#\mathrm{pieces}(h) \le \#\mathrm{pieces}(f) + \#\mathrm{pieces}(g)$. Differentiating and integrating a piecewise linear function do not change the number of pieces.
Finally, taking the max of a piecewise linear function against a linear function can increase the number of pieces by at most 1. Since the computation for any node $X$ only involves piecewise linear functions of its children, we see that $\#\mathrm{pieces}(\phi_{X,b}) \le 1 + \sum_{Y\in Ch(X), v\in\Sigma} \#\mathrm{pieces}(\phi_{Y,v})$, where the $+1$ is due to taking the maximum against the linear function $m$. That is, the maximum number of pieces in any piecewise linear function involved in the computation of Algorithm 1 is at most $O(n \cdot |\Sigma|)$, since the base-case functions at the leaves have 2 pieces for each label $b \in \Sigma$. Putting everything together, we see that Algorithm 1 incurs $O(n \cdot |\Sigma|^2)$ oracle calls to $P$ and $O(|\Sigma| \cdot \sum_{X\in X} \max\{1, |Ch(X)|\})$ operations on piecewise linear functions. Since each operation on piecewise linear functions costs at most $O(n \cdot |\Sigma|)$, Algorithm 1 runs in $O(n^2 \cdot |\Sigma|^2)$ time while using $O(n \cdot |\Sigma|^2)$ oracle calls to $P$. ∎

B.2 Parameter fitting

For convenience of proving Lemma 7, let us recall Eq. (7) from Appendix A.2:

$\tilde P_{\theta_1,\theta_2}(x) = \prod_{i=1}^n P_{\theta_1,\theta_2}(X_i = x_i \mid x_{-i}) = \prod_{i=1}^n \frac{P_{\theta_1,\theta_2}(x)}{P_{\theta_1,\theta_2}(X_i=0, x_{-i}) + P_{\theta_1,\theta_2}(X_i=1, x_{-i})}$

where

$P_{\theta_1,\theta_2}(X = x) = \frac{1}{z(\theta_1,\theta_2)} \exp\Big( \sum_{i=1}^n \theta_1^\top f_1(x_i, c^{(i)}) + \sum_{\{i,j\}\in E} \theta_2^\top f_2(x_i, x_j, c^{(i)}, c^{(j)}) \Big)$

and

$\phi_i(x_i) = \exp\big( \theta_1^\top f_1(x_i, c^{(i)}) \big)$ and $\phi_{i,j}(x_i, x_j) = \exp\big( \theta_2^\top f_2(x_i, x_j, c^{(i)}, c^{(j)}) \big)$

Lemma 7. Let $G = (X, E)$ be a graph over $n$ nodes, where each node $X_i \in X$ has binary
label $x_i \in \{0,1\}$, covariates $c^{(i)} \in \mathbb{R}^d$, and neighborhood $N(X_i)$. Define feature maps $f_1(x_i, c^{(i)})$ and $f_2(x_i, x_j, c^{(i)}, c^{(j)})$ as per Eq. (4) and Eq. (5) respectively, shared parameters $\theta_1 \in \mathbb{R}^{2+2d}$ and $\theta_2 \in \mathbb{R}^{4+5d}$, and the joint probability as per Eq. (6). Then, the log-pseudolikelihood gradients are:

$\frac{\partial \log \tilde P_{\theta_1,\theta_2}(x)}{\partial \theta_1} = \sum_{i=1}^n \alpha_i \cdot \big( f_1(1, c^{(i)}) - f_1(0, c^{(i)}) \big)$

$\frac{\partial \log \tilde P_{\theta_1,\theta_2}(x)}{\partial \theta_2} = \sum_{i=1}^n \alpha_i \cdot \sum_{X_j \in N(X_i)} \big( f_2(1, x_j, c^{(i)}, c^{(j)}) - f_2(0, x_j, c^{(i)}, c^{(j)}) \big)$

where the common coefficient $\alpha_i = x_i - P_{\theta_1,\theta_2}(X_i = 1 \mid x_{-i})$ can be computed efficiently without computing $z(\theta_1, \theta_2)$, for all $i \in [n]$.

Proof. Let us define the terms $A$ and $B^b_y$ for $y \in [n]$ and $b \in \{0,1\}$:

$A = \log\big( z(\theta_1,\theta_2) \cdot P_{\theta_1,\theta_2}(x) \big) = \sum_{i=1}^n \theta_1^\top f_1(x_i, c^{(i)}) + \sum_{\{i,j\}\in E} \theta_2^\top f_2(x_i, x_j, c^{(i)}, c^{(j)})$

$B^b_y = \log\big( z(\theta_1,\theta_2) \cdot P_{\theta_1,\theta_2}(X_y = b, x_{-y}) \big) = \theta_1^\top f_1(b, c^{(y)}) + \sum_{\{i,j\}\in E,\, y=i} \theta_2^\top f_2(b, x_j, c^{(y)}, c^{(j)}) + \sum_{i=1,\, i\ne y}^n \theta_1^\top f_1(x_i, c^{(i)}) + \sum_{\{i,j\}\in E,\, y\notin\{i,j\}} \theta_2^\top f_2(x_i, x_j, c^{(i)}, c^{(j)})$

Observe that

$\frac{\exp(B^1_y)}{\exp(B^0_y) + \exp(B^1_y)} = P_{\theta_1,\theta_2}(X_y = 1 \mid x_{-y})$   (8)

Let us consider their partial derivatives with respect to $\theta_1$ and $\theta_2$; we will use these later.

$\frac{\partial A}{\partial \theta_1} = \sum_{i=1}^n \frac{\partial \theta_1^\top f_1(x_i, c^{(i)})}{\partial \theta_1} = \sum_{i=1}^n f_1(x_i, c^{(i)})$   (9)

$\frac{\partial A}{\partial \theta_2} = \sum_{\{i,j\}\in E} \frac{\partial \theta_2^\top f_2(x_i, x_j, c^{(i)}, c^{(j)})}{\partial \theta_2} = \sum_{\{i,j\}\in E} f_2(x_i, x_j, c^{(i)}, c^{(j)})$   (10)

$\frac{\partial B^b_y}{\partial \theta_1} = \frac{\partial \theta_1^\top f_1(b, c^{(y)})}{\partial \theta_1} + \sum_{i=1,\, i\ne y}^n \frac{\partial \theta_1^\top f_1(x_i, c^{(i)})}{\partial \theta_1} = f_1(b, c^{(y)}) + \sum_{i=1,\, i\ne y}^n f_1(x_i, c^{(i)})$   (11)

$\frac{\partial B^b_y}{\partial \theta_2} = \sum_{\{i,j\}\in E,\, y=i} f_2(b, x_j, c^{(y)}, c^{(j)}) + \sum_{\{i,j\}\in E,\, y\notin\{i,j\}} f_2(x_i, x_j, c^{(i)}, c^{(j)})$   (12)

Now, re-expressing $\tilde P_{\theta_1,\theta_2}(x)$ in terms of $A$ and $B^b_y$, we have

$\tilde P_{\theta_1,\theta_2}(x) = \prod_{i=1}^n \frac{P_{\theta_1,\theta_2}(x)}{P_{\theta_1,\theta_2}(X_i=0, x_{-i}) + P_{\theta_1,\theta_2}(X_i=1, x_{-i})} = \prod_{y=1}^n \frac{\exp(A)}{\exp(B^0_y) + \exp(B^1_y)}$

Fix an index $y \in [n]$ and define $W_y = \frac{\exp(A)}{\exp(B^0_y) + \exp(B^1_y)}$ so that $\tilde P_{\theta_1,\theta_2}(x) = \prod_{y=1}^n W_y$, i.e. $\log \tilde P_{\theta_1,\theta_2}(x) = \sum_{y=1}^n \log W_y$. Let us differentiate with respect to $\theta_1$. For any $y \in [n]$, we see that

$\frac{\partial \log W_y}{\partial \theta_1} = \frac{\partial A}{\partial \theta_1} - \frac{\partial \log(\exp(B^0_y) + \exp(B^1_y))}{\partial \theta_1} = \frac{\partial A}{\partial \theta_1} - \frac{1}{\exp(B^0_y) + \exp(B^1_y)} \Big( \frac{\partial \exp(B^0_y)}{\partial \theta_1} + \frac{\partial \exp(B^1_y)}{\partial \theta_1} \Big)$
$= \frac{\partial A}{\partial \theta_1} - \frac{\exp(B^0_y)}{\exp(B^0_y)+\exp(B^1_y)} \frac{\partial B^0_y}{\partial \theta_1} - \frac{\exp(B^1_y)}{\exp(B^0_y)+\exp(B^1_y)} \frac{\partial B^1_y}{\partial \theta_1}$

$= \frac{\partial A}{\partial \theta_1} - \big( 1 - P_{\theta_1,\theta_2}(X_y=1 \mid x_{-y}) \big) \frac{\partial B^0_y}{\partial \theta_1} - P_{\theta_1,\theta_2}(X_y=1 \mid x_{-y}) \frac{\partial B^1_y}{\partial \theta_1}$   (by Eq. (8))

$= \sum_{i=1}^n f_1(x_i, c^{(i)}) - \big( 1 - P_{\theta_1,\theta_2}(X_y=1 \mid x_{-y}) \big) \Big( f_1(0, c^{(y)}) + \sum_{i=1,\, i\ne y}^n f_1(x_i, c^{(i)}) \Big) - P_{\theta_1,\theta_2}(X_y=1 \mid x_{-y}) \Big( f_1(1, c^{(y)}) + \sum_{i=1,\, i\ne y}^n f_1(x_i, c^{(i)}) \Big)$   (by Eq. (9) and Eq. (11))

$= f_1(x_y, c^{(y)}) - \big( 1 - P_{\theta_1,\theta_2}(X_y=1 \mid x_{-y}) \big) \cdot f_1(0, c^{(y)}) - P_{\theta_1,\theta_2}(X_y=1 \mid x_{-y}) \cdot f_1(1, c^{(y)})$

$= \big( x_y - P_{\theta_1,\theta_2}(X_y=1 \mid x_{-y}) \big) \cdot \big( f_1(1, c^{(y)}) - f_1(0, c^{(y)}) \big)$   (since $x_y \in \{0,1\}$)

Summing over all $y \in [n]$, we get

$\frac{\partial \log \tilde P_{\theta_1,\theta_2}(x)}{\partial \theta_1} = \sum_{y=1}^n \frac{\partial \log W_y}{\partial \theta_1} = \sum_{y=1}^n \big( x_y - P_{\theta_1,\theta_2}(X_y=1 \mid x_{-y}) \big) \cdot \big( f_1(1, c^{(y)}) - f_1(0, c^{(y)}) \big)$

yielding the first statement as desired, with $\alpha_y = x_y - P_{\theta_1,\theta_2}(X_y=1 \mid x_{-y})$. For the second statement, we perform the same analysis but use Eq. (10) and Eq. (12) instead of Eq. (9) and Eq. (11). Note that the second summation $\sum_{\{i,j\}\in E,\, y\notin\{i,j\}}$ gets cancelled out as in the above analysis, while the first summation $\sum_{\{i,j\}\in E,\, y=i}$ corresponds exactly to $\sum_{X_j \in N(X_y)}$. ∎

B.3 Robustness to model misspecification

We analyze how errors in the learned graphical model impact the quality of the resulting policy in an AFEG instance. Specifically, we derive a bound on the discrepancy between the optimal state-action value functions of the learned and the true Markov decision processes (MDPs), capturing how inaccuracies in the estimated joint distribution $\hat P$ affect decision quality. Recall that an AFEG instance is defined by a triple $(G, P, \beta)$, where $G = (X, E)$ is a graph, $P$ is a joint distribution over binary node labels Markov with respect to $G$, and $\beta \in (0,1)$ is a discount factor. At each time step $t$, the state $S_t$ records the current frontier and the set of revealed labels. A policy $\pi$ selects a node from
the frontier and receives a label-dependent reward. We consider two MDPs defined over the same state space $\mathcal{S}$, action mapping $\mathcal{A}: \mathcal{S} \to 2^X$ that restricts valid actions to untested frontier nodes, and discount factor $\beta$, but differing in the distribution used to infer infection status:

• The learned MDP $M = (\mathcal{S}, \mathcal{A}, \hat P, \hat R, \beta)$ is defined using an estimated model $\hat P$ obtained via pseudo-likelihood. The expected reward for testing node $a \in \mathcal{A}(s)$ is $\hat R(s, a) = \hat P(X_a = 1 \mid s)$, and the transition kernel $\hat P$ uses this posterior to sample a binary outcome.

• The true MDP $M' = (\mathcal{S}, \mathcal{A}, P, R, \beta)$ is induced by the ground-truth distribution $P$, with $R(s, a) = P(X_a = 1 \mid s)$, and transitions $P$ differing from $\hat P$ only in the Bernoulli parameter used for $X_a$.

We define the maximum reward and transition discrepancies:

$\varepsilon_R = \max_{s\in\mathcal{S},\, a\in\mathcal{A}(s)} \big| \hat R(s, a) - R(s, a) \big|, \qquad \varepsilon_P = \max_{s\in\mathcal{S},\, a\in\mathcal{A}(s),\, s'\in\mathcal{S}} \big| \hat P(s' \mid s, a) - P(s' \mid s, a) \big|.$

Since rewards are probabilities, we set $R_{\max} = 1$. We are interested in bounding the worst-case deviation in optimal Q-values:

$\| Q^*_M - Q^*_{M'} \|_\infty = \max_{s\in\mathcal{S},\, a\in\mathcal{A}(s)} | Q^*_M(s, a) - Q^*_{M'}(s, a) |.$

This is because the $Q^*$-function encodes the expected total discounted reward starting from state $s$ and taking action $a$, and thus directly characterizes the long-term value of testing each node. Therefore, a small bound on $\| Q^*_M - Q^*_{M'} \|_\infty$ ensures that the policy derived from the learned model will perform nearly as well as the optimal policy under the true model, in terms of accumulated reward. The following bound shows that the suboptimality of the learned policy is controlled by the maximum error in posterior infection probabilities and transition dynamics, with greater sensitivity as $\beta \to 1$.

Lemma 8. Let $M$ and $M'$ be defined as above, and let $\varepsilon_R, \varepsilon_P$ denote the maximal reward and transition discrepancies. Then:

$\| Q^*_M - Q^*_{M'} \|_\infty \le \frac{\varepsilon_R}{1-\beta} + \frac{\beta R_{\max}}{(1-\beta)^2} \varepsilon_P.$

In particular, since $R_{\max} = 1$,

$\| Q^*_M - Q^*_{M'} \|_\infty \le \frac{1}{1-\beta} \Big( \varepsilon_R + \frac{\beta}{1-\beta} \varepsilon_P \Big).$

Proof. Let $T_M$ and $T_{M'}$ denote the Bellman optimality operators for MDPs $M$ and $M'$ respectively.
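Before the proof details, a quick numerical reading of Lemma 8 (the helper function and the error values below are our own hypothetical illustration, not part of the paper):

```python
def q_error_bound(eps_r, eps_p, beta, r_max=1.0):
    """Right-hand side of Lemma 8: an upper bound on ||Q*_M - Q*_M'||_inf
    given maximal reward / transition discrepancies eps_r and eps_p.
    The transition term scales as 1/(1-beta)^2, so long effective horizons
    amplify model-estimation error."""
    return eps_r / (1.0 - beta) + beta * r_max * eps_p / (1.0 - beta) ** 2
```

For example, with hypothetical discrepancies $\varepsilon_R = 0.05$, $\varepsilon_P = 0.01$, and $\beta = 0.9$, the bound evaluates to $0.5 + 0.9 = 1.4$, and it grows sharply as $\beta \to 1$.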
For any function $Q: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, define:

$(T_M Q)(s, a) = R(s, a) + \beta \sum_{s'\in\mathcal{S}} P(s' \mid s, a) \max_{a'} Q(s', a')$

The operator $T_{M'}$ is defined analogously, with $\hat R$ and $\hat P$ in place of $R$ and $P$. We also know that the operators $T_M$ and $T_{M'}$ are both $\beta$-contractions under the supremum norm:

$\| T_M Q - T_M Q' \|_\infty \le \beta \| Q - Q' \|_\infty \quad\text{and}\quad \| T_{M'} Q - T_{M'} Q' \|_\infty \le \beta \| Q - Q' \|_\infty$

Now, let $Q^*_M$ and $Q^*_{M'}$ be the fixed points of $T_M$ and $T_{M'}$ respectively. Then,

$\| Q^*_M - Q^*_{M'} \|_\infty = \| T_M Q^*_M - T_{M'} Q^*_{M'} \|_\infty$   ($Q^*_M$ and $Q^*_{M'}$ are fixed points)
$\le \| T_M Q^*_M - T_M Q^*_{M'} \|_\infty + \| T_M Q^*_{M'} - T_{M'} Q^*_{M'} \|_\infty$   (triangle inequality)
$\le \beta \| Q^*_M - Q^*_{M'} \|_\infty + \| T_M Q^*_{M'} - T_{M'} Q^*_{M'} \|_\infty$   (contraction property of $T_M$)

Rearranging, we get

$(1-\beta) \| Q^*_M - Q^*_{M'} \|_\infty \le \| T_M Q^*_{M'} - T_{M'} Q^*_{M'} \|_\infty$   (13)

As the difference $\| T_M Q - T_{M'} Q \|_\infty$ captures how much the two MDPs' rewards and transitions differ, we analyze and bound $(T_M Q)(s, a) - (T_{M'} Q)(s, a)$ for some fixed $Q$:

$(T_M Q)(s, a) - (T_{M'} Q)(s, a) = \big( R(s, a) - \hat R(s, a) \big) + \beta \sum_{s'} \big( P(s' \mid s, a) - \hat P(s' \mid s, a) \big) \max_{a'} Q(s', a').$

Taking absolute values and using the definitions of $\varepsilon_R$ and $\varepsilon_P$, we get

$| (T_M Q)(s, a) - (T_{M'} Q)(s, a) | \le \varepsilon_R + \beta \varepsilon_P \| Q \|_\infty.$

Therefore, $\| T_M Q - T_{M'} Q \|_\infty \le \varepsilon_R + \beta \varepsilon_P \| Q \|_\infty$. In particular, setting $Q = Q^*_{M'}$ and using the bound $\| Q^*_{M'} \|_\infty \le R_{\max}/(1-\beta)$, we see that

$\| T_M Q^*_{M'} - T_{M'} Q^*_{M'} \|_\infty \le \varepsilon_R + \frac{\beta R_{\max}}{1-\beta} \varepsilon_P$

Plugging into Eq. (13), we get

$(1-\beta) \| Q^*_M - Q^*_{M'} \|_\infty \le \varepsilon_R + \frac{\beta R_{\max}}{1-\beta} \varepsilon_P$

Thus,

$\| Q^*_M - Q^*_{M'} \|_\infty \le \frac{\varepsilon_R}{1-\beta} + \frac{\beta R_{\max}}{(1-\beta)^2} \varepsilon_P$

as desired. ∎

C Experimental details and more experimental results

While we do not release the ICPSR dataset, in accordance with its terms
https://arxiv.org/abs/2505.21671v1
of use, interested researchers may independently access it via https://www.icpsr.umich.edu/web/ICPSR/studies/22140. To facilitate reproducibility, our experimental scripts and the code used for data preprocessing and parameter estimation are available at https://github.com/cxjdavin/adaptive-frontier-exploration-on-graphs.

C.1 Defining joint probability distribution P in our experiments

As described in Section 4, all of our experiments are modeled after the network-based disease testing application discussed in Section 1.1, where node labels are binary and a reward of 1 is received if and only if the revealed label indicates a positive diagnosis. Accordingly, the joint distribution P over node labels follows the pairwise MRF formulation described in Appendix A.1. Since the covariates in our real-world dataset [MR11]⁷ are categorical, we apply one-hot encoding to transform them into binary vectors. For consistency, our synthetic experiments also use binary covariates.

Synthetic experiments in Section 4.1 and Section 4.2. For each instance, we sample random parameters $\theta_1 \in \mathbb{R}^{2+2d}$ and $\theta_2 \in \mathbb{R}^{4+5d}$, and assign each node a random binary covariate vector of dimension d. The joint distribution P is then defined according to Eq. (6). Since all policies are agnostic to the choice of d, we fix d = 5 to keep computation time manageable.

Constructing P from the real-world dataset in Section 4.3. The real-world dataset comprises a collection of network-based surveys across eight studies, each recording disease statuses for five sexually transmitted infections (Gonorrhea, Chlamydia, Syphilis, HIV, and Hepatitis). We first filtered the data to retain only edges denoting reported sexual interactions and excluded individuals with missing or ambiguous disease status labels. Because the covariates are shared across diseases but transmission dynamics differ, we aggregated data across studies and split it by disease. See Table 2 for dataset summary statistics.
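The one-hot encoding step for categorical covariates can be sketched as follows (a minimal illustration with made-up variable names and levels, not the released preprocessing code):

```python
# One-hot encode categorical survey responses into binary covariate vectors:
# one indicator block per categorical variable, blocks concatenated in order.
# Variable names and levels below are hypothetical.

def one_hot_encode(rows, categories):
    """rows: list of dicts mapping variable -> level.
    categories: dict mapping variable -> ordered list of levels."""
    encoded = []
    for row in rows:
        vec = []
        for var, levels in categories.items():
            vec.extend(1 if row[var] == level else 0 for level in levels)
        encoded.append(vec)
    return encoded

categories = {"gender": ["male", "female"], "homeless": ["yes", "no"]}
rows = [{"gender": "female", "homeless": "no"},
        {"gender": "male", "homeless": "yes"}]
print(one_hot_encode(rows, categories))  # [[0, 1, 0, 1], [1, 0, 1, 0]]
```

The resulting covariate dimension d is the sum of the level counts across variables, matching how 17 categorical variables can expand to d = 72 binary features.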
For covariates, we used categorical survey responses capturing demographic and behavioral factors relevant to disease transmission such as gender⁸, homelessness, sex work involvement, etc. A total of 17 categorical variables were one-hot encoded into binary vectors of dimension d = 72. We then applied the parameter fitting procedure from Appendix B.2 to estimate $\theta_1 \in \mathbb{R}^{2+2d}$ and $\theta_2 \in \mathbb{R}^{4+5d}$ for each disease-specific dataset, and constructed the corresponding pairwise MRF P using Eq. (6).

⁷ This de-identified, public-use dataset is available at https://www.icpsr.umich.edu/web/ICPSR/studies/22140 under ICPSR's terms of use. No IRB approval was required for our use of this data.
⁸ See [FHA+98, Scu18] for evidence on gender differences in HIV susceptibility.

Table 2: Summary statistics of real-world sexual interaction graphs [MR11]. Approximate treewidth is obtained by computing a tree decomposition using networkx's treewidth_min_fill_in. The graphs for Gonorrhea and Chlamydia are identical but the infection rates are different, i.e., not everyone is infected with both diseases.

Sexually transmitted disease | Gonorrhea | Chlamydia | Syphilis | HIV  | Hepatitis
Number of infected           | 66        | 963       | 39       | 80   | 117
Number of individuals        | 2079      | 2079      | 409      | 643  | 1732
Number of edges              | 1326      | 1326      | 325      | 594  | 1260
Diameter of graph            | 8         | 8         | 9        | 13   | 24
Approximate treewidth        | 1         | 1         | 7        | 8    | 6

C.2 Full results for Section 4.1

As described in Section 4.1, we evaluated the policies against a family of randomly generated synthetic trees on n ∈ {10, 50, 100} nodes across various discount factors β ∈ {0.5, 0.7, 0.9}. Fig. 3 in Section 4.1 shows only the figures with discount factor
β = 0.9. See Fig. 6 for the full 3 × 3 plot.

Figure 6: Full experimental results for synthetic tree experiments.

C.3 Full results for Section 4.2

As described in Section 4.2, we progressively add random non-tree edges to the synthetic trees to observe the change in relative performance of our policies. Across all experiments, we consider a discount factor of β = 0.9 and add {0, 2, 4, 6, 8, 10} extra edges to each graph. Fig. 4 in Section 4.2 shows only the figures for n = 50; see Fig. 7 for results on n ∈ {10, 100}. While 10 edges may seem like a small number, recall that trees have n − 1 edges, so adding 10 edges increases the number of edges by roughly 100%, 20%, and 10% for n ∈ {10, 50, 100}, respectively.

Figure 7: Full experimental results for synthetic non-tree experiments. Observe that the relative performance gains of Gittins over other baselines decrease as we deviate from a tree. Furthermore, in the small n = 10 (top row) instances where we can run Optimal, we see that Gittins is no longer optimal as more non-tree edges are added, as expected.

C.4 Full results for Section 4.3

As shown in Fig. 5, Greedy and DQN become computationally intractable on large real-world graphs. For instance, even if we optimize the implementation of Greedy to only recompute marginal positive probabilities for individuals in the same connected component as the previously tested individual, it still incurs a huge rollout time on graphs with large connected components. For example, even with this optimized implementation, Greedy in the third graph of Fig. 12 took 1689 seconds (~28 minutes) on average per Monte Carlo rollout. In contrast, on this same instance, Gittins took about 25 seconds to compute the indices, and each Monte Carlo rollout was under 0.1 seconds. To enable evaluation on larger graphs while preserving network structure, we applied a principled subsampling strategy based on connected components.
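A sketch of this component-based subsampling, using plain adjacency sets rather than the networkx graphs of the released code (names are illustrative):

```python
# Subsample a graph by shuffling its connected components and greedily
# aggregating whole components until the node budget tau is exceeded.
import random

def connected_components(adj):
    """adj: dict node -> set of neighbours. Returns a list of node sets."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def subsample(adj, tau, seed=0):
    """Union shuffled components until the subgraph exceeds tau nodes."""
    comps = connected_components(adj)
    random.Random(seed).shuffle(comps)
    chosen = set()
    for comp in comps:
        chosen |= comp
        if len(chosen) > tau:
            break
    # Induced subgraph on the chosen nodes (components are kept whole).
    return {v: adj[v] & chosen for v in chosen}

# Two components: a triangle {0, 1, 2} and an edge {3, 4}.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4}, 4: {3}}
sub = subsample(adj, tau=2)
assert len(sub) > 2
```

Because components are never split, every edge of a retained node survives, which is why this preserves topology better than node-level subsampling.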
For each disease-specific dataset, we randomly shuffled the connected components and then greedily aggregated them into a new graph until the total number of nodes exceeded a specified threshold τ. This approach preserves topological properties of the original network more faithfully than node-level subsampling. Due to the sparsity of the graphs (see Table 2), the resulting subsampled graphs contain approximately τ nodes as desired. To balance computational feasibility with dataset representativeness, we set τ = 300. (All experiments were performed on a personal laptop: an Apple MacBook 2024 with an M4 chip and 16GB memory.) Fig. 5 in Section 4 illustrates a representative subsample for Chlamydia, HIV, and Hepatitis. Below, in Fig. 8, Fig. 9, Fig. 10, Fig. 11, and Fig. 12, we report results on three independently subsampled graphs (using different random seeds¹⁰) for each of the five diseases, yielding a total of 15 experimental evaluations. We also computed the expected performance of Random, Greedy, and Gittins on the full disease graphs of Syphilis (409 nodes) and HIV (643 nodes) in Fig. 13 and Fig. 14 respectively; it was computationally intractable to generate Monte Carlo samples of the full disease graphs for Gonorrhea (2079 nodes), Chlamydia (2079 nodes),
and Hepatitis (1732 nodes). Throughout all experiments, we consistently see that Gittins outperforms or is competitive with the other baselines, both in terms of expected accumulated (un)discounted rewards and at any fixed timestep; e.g., the vertical dashed line in each plot indicates performance when only half the individuals can be tested.

¹⁰ For simplicity, we just use the random seeds {0, 1, 2} for sampling these subgraphs.

Figure 8: Experimental results for Gonorrhea with τ = 300 and 3 different random seeds.
Figure 9: Experimental results for Chlamydia with τ = 300 and 3 different random seeds.
Figure 10: Experimental results for Syphilis with τ = 300 and 3 different random seeds.
Figure 11: Experimental results for HIV with τ = 300 and 3 different random seeds.
Figure 12: Experimental results for Hepatitis with τ = 300 and 3 different random seeds.
Figure 13: Experimental results for full Syphilis network on 409 nodes.
Figure 14: Experimental results for full HIV network on 643 nodes.
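The approximate treewidth row of Table 2 comes from networkx's treewidth_min_fill_in; the underlying min-fill-in elimination heuristic is simple enough to sketch without dependencies (an illustrative upper-bound computation, not the paper's code):

```python
# Pure-Python sketch of the min-fill-in heuristic, which yields an upper
# bound on treewidth (the quantity networkx's treewidth_min_fill_in reports).

def treewidth_min_fill_in(graph):
    """graph: dict node -> set of neighbours. Returns a treewidth upper bound."""
    adj = {v: set(nbrs) for v, nbrs in graph.items()}  # work on a copy
    width = 0
    while adj:
        # Pick the node whose elimination adds the fewest fill-in edges.
        def fill_in(v):
            nbrs = list(adj[v])
            return sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
                       if nbrs[j] not in adj[nbrs[i]])
        v = min(adj, key=fill_in)
        nbrs = adj[v]
        width = max(width, len(nbrs))
        # Turn v's neighbourhood into a clique, then eliminate v.
        for u in nbrs:
            adj[u] |= nbrs - {u}
            adj[u].discard(v)
        del adj[v]
    return width

# A tree has treewidth 1; a cycle has treewidth 2.
tree  = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1}}
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(treewidth_min_fill_in(tree), treewidth_min_fill_in(cycle))  # 1 2
```

On tree-like contact networks such as the Gonorrhea/Chlamydia graph, this heuristic immediately certifies the low treewidth that makes exact MRF inference feasible.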
arXiv:2505.21674v1 [cs.AI] 27 May 2025

Make Planning Research Rigorous Again!

Michael Katz, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, michael.katz1@ibm.com
Harsha Kokel, IBM Almaden Research Center, San Jose, CA 95120, harsha.kokel@ibm.com
Christian Muise, Queen's University, Kingston, Canada, christian.muise@queensu.ca
Shirin Sohrabi, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, ssohrab@us.ibm.com
Sarath Sreedharan, Colorado State University, Fort Collins, CO 80523, sarath.sreedharan@colostate.edu

Abstract

In over sixty years since its inception, the field of planning has made significant contributions to both the theory and practice of building planning software that can solve a never-before-seen planning problem. This was done through established practices of rigorous design and evaluation of planning systems. It is our position that this rigor should be applied to the current trend of work on planning with large language models. One way to do so is by correctly incorporating the insights, tools, and data from the automated planning community into the design and evaluation of LLM-based planners. The experience and expertise of the planning community are not just important from a historical perspective; the lessons learned could play a crucial role in accelerating the development of LLM-based planners. This position is particularly important in light of the abundance of recent works that replicate and propagate the same pitfalls that the planning community has encountered and learned from. We believe that avoiding such known pitfalls will contribute greatly to the progress in building LLM-based planners and to planning in general.

1 Introduction

With the increased interest in language models came the increased interest in the problems where language models do not perform well.
One such problem is planning, which could be informally described as sequential decision-making in the presence of information about the model, i.e., information about the task and the environment. While more efforts are being made to tackle such problems using tools built around large language models (LLMs), it is worth keeping in mind that planning is one of the oldest established subfields of artificial intelligence. The field took root in the 1960s, with the development of best-first search algorithms [43] as part of the famous Shakey project, as well as of a formal language to represent planning problems [30]. Since then, the planning community has been developing both the representation languages and the generic solvers that handle problems represented in these languages. Instrumental in the development of these tools was the International Planning Competition (IPC), a series of roughly bi-annual events that put the state of the art to the test. The IPC had an immense effect on the development of planning systems, but also on the languages used to describe planning tasks, the evaluation metrics used, the language features supported, and the development of other tools, such as parsers, grounders, and validators. Since its inception, the field has made significant contributions to both the theory and practice of building domain-independent planners, i.e., planning software that can solve a never-before-seen planning problem. The main reason this could be achieved is the rigor with which the research on the topic was conducted,
with methodologies developed over the years through trial and error. This brings us to our primary position: "Make Planning Research Rigorous Again!" We believe that this rigor is the cornerstone of thoughtful and reproducible research that can be built upon. Therefore, our belief is that insights, methodologies, tools, and data from the automated planning community should be correctly incorporated into the design and evaluation of LLM-based planners. We strongly believe that the planning community's contributions are significant in ways that extend well beyond their historical context. The lessons learned by the community over the decades can help accelerate the development of LLM-based planners. This position is particularly important in light of the abundance of recent works that replicate and perpetuate errors previously made and subsequently addressed by the planning community. Helping the greater AI community to avoid known pitfalls will contribute greatly to the progress in building LLM-based planners and to planning in general.

A position paper, by its very nature, cannot meaningfully summarize all the lessons that have been identified by a field that has existed for more than half a century. Instead, our goal with this paper is to lay out some pointers and to establish the basic vocabulary that would allow the curious reader to identify the correct resources they can utilize for their own research. We also lay out some general guidelines for evaluating planners and list some widely used tools from the community that might be helpful for researchers who develop LLM-based planners. Further, we believe that this paper can serve as a guide for reviewers tasked with assessing such works.

2 Fundamental Planning Terminology

In this section, we provide a brief overview of different flavors, formalisms, and representations of planning problems commonly used by the AI Planning community.
We start from the most basic formalism, commonly referred to as classical planning. A classical planning problem includes the initial state of the world, desired goals, and a set of possible actions. The objective of a planner is to synthesize a plan that is guaranteed to generate a state which contains the desired goals. More formally, the classical planning model is defined as a tuple $S = \langle S, s_0, S_G, A, f, c \rangle$, where

- $S$ is a finite and discrete state space,
- $s_0 \in S$ is a known initial state,
- $S_G \subseteq S$ is a set of goal states,
- $A$ is a set of actions, with $A(s) \subseteq A$ applicable in each $s \in S$,
- $f$ is a deterministic transition function, where $s' = f(a, s)$ for $a \in A(s)$,
- $c$ is a non-negative action cost function, $c(a, s)$.

The solution to a classical planning problem is a sequence of applicable actions that maps $s_0$ into $S_G$. In the classical setting, the initial state is known, actions are deterministic, the system is fully observable, actions are instantaneous, goals are specified as final states, and the planning horizon is finite. Relaxing these assumptions introduces multiple variants of planning, including but not limited to:

- Conformant planning [102, 10], where the initial state is uncertain.
- Probabilistic planning [116, 23, 72, 94], where the action dynamics are probabilistic and state spaces do not have to be discrete or finite.
- Non-deterministic planning [68], and fully
observable non-deterministic planning (FOND) [80, 85, 86], where the action dynamics are non-deterministic.
- Temporal planning [32, 37, 75], where actions have durations and temporal constraints.
- Numeric planning [48, 90, 104], where actions can affect numeric state variables.
- Hierarchical Task Network (HTN) planning [29, 87], where the initial state and the goal are defined as a task network (a set of tasks and constraints), with recursive decomposition of high-level tasks into lower-level sub-tasks.
- Planning with soft or hard constraints (preferences [8, 103], temporally extended goals [4, 7]), where the goal is to additionally optimize for the soft and hard constraints. This includes partial satisfaction planning, such as net-benefit planning [110, 66, 1], where the goal is to optimize for the difference between the utility and the cost of achieving the goal, and oversubscription planning [101, 24, 63], where the goal is to find a plan that achieves the best possible utility given resource constraints.

Note that several of these flavors of planning can sometimes be compiled into classical planning for easier computation. Some examples include compiling conformant planning into classical planning [89], compiling a fragment of HTN planning into classical planning [2], and compiling away final-state soft constraints/preferences [66]. While most of these models can be captured by the well-known (PO)MDP formalism, it is more efficient to capture these planning models in a compact way (e.g., with a factored representation), with a formal planning language. Multiple such formal languages for planning problems have been introduced. Starting with ground representations, the most popular range from the simplest propositional language STRIPS [30] and f-STRIPS [35] to the multi-valued-variable languages SAS/SAS+/FDR [5, 6, 52].
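To make the classical model $\langle S, s_0, S_G, A, f, c \rangle$ concrete, here is a toy state-transition problem together with a breadth-first plan search (a didactic sketch with unit costs; the counter domain and all names are illustrative, not from any planner):

```python
# A tiny classical planning model and a breadth-first plan search that
# returns a shortest sequence of applicable actions mapping s0 into SG.
from collections import deque

def plan(s0, goal_states, actions, f):
    """actions(s) = A(s); f(a, s) = deterministic successor."""
    frontier, seen = deque([(s0, [])]), {s0}
    while frontier:
        s, pi = frontier.popleft()
        if s in goal_states:
            return pi
        for a in actions(s):                  # applicable actions A(s)
            t = f(a, s)                       # transition s' = f(a, s)
            if t not in seen:
                seen.add(t)
                frontier.append((t, pi + [a]))
    return None                               # no plan exists

# Counter domain: states are integers 0..4; 'inc' adds 1, 'dbl' doubles.
actions = lambda s: [a for a in ("inc", "dbl") if f(a, s) <= 4]
f = lambda a, s: s + 1 if a == "inc" else 2 * s
print(plan(0, {4}, actions, f))  # ['inc', 'inc', 'dbl']
```

This searcher is sound (it only returns applicable goal-reaching sequences) and complete over the finite state space, and with unit costs breadth-first order makes the returned plan cost-optimal.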
This variety, however, is a mixed blessing, as various existing solvers supported one language or the other, making it more difficult to compare their performance. Consequently, with the birth of the International Planning Competition in 1998, a unified language for representing planning problems was born, named the Planning Domain Definition Language (PDDL) [82]. This language has become the de facto standard and most commonly used representation language for planning. The planning knowledge is separated into two parts, the PDDL domain and the PDDL problem. The PDDL domain contains knowledge of the constants, types, predicates, and actions, with their preconditions and effects. The PDDL problem consists of knowledge of the objects, the initial state, and the goal. There are multiple extensions of PDDL, such as PDDL 2 [32], with a focus on durative actions, numeric fluents, and derived predicates, and PDDL 3.0 [36], with a focus on capturing constraints. Linear Temporal Logic (LTL) and its variants are used to represent preferences or constraints [100, 7, 19], and are partly captured in PDDL 3.0. Furthermore, RDDL [94] and PPDDL have been introduced to represent probabilistic planning tasks [117]. Representing a planning task compactly and accurately is challenging and often constitutes a bottleneck of using a planning system. As a result, learning planning models has become a growing area of interest over the past decade [113, 56, 54, 55]. Learning planning representation,
particularly the PDDL representation, has gained special attention in the era of LLMs. The existing work can be partitioned into learning the domain model [41, 88, 38, 107] and learning the problem instance representation, assuming the domain model exists [73, 119].

References to Some Tutorials: Readers can use the following tutorials as starting points to read more about many of the topics discussed above: on classical planning, on the Python library Unified Planning that supports modeling and programmatically invoking planners, on RDDL, on methods to generate multiple plans, and on learning planning models. For additional material, we refer the curious reader to the textbooks [39, 40, 81, 45].

Common Pitfall 1. While planning problems can have infinite state spaces, they are well-defined mathematically. Problems with fuzzy and ill-defined state and action spaces are generally not considered by the planning community. Instead, one may identify a well-defined variant of the problem, which also allows one to define what constitutes a solution.

Common Pitfall 2. Just because a problem can be represented as a planning problem, it doesn't mean that planning tools are a good fit for it. For example, many reasoning problems, like the canonical NP-complete problem of boolean satisfiability (SAT) [16] and puzzles like Sudoku [99, 77], can be represented as planning problems. However, SAT and CSP solvers would be better suited to serve as baselines for these problems. A good indication of a planning problem is its sequential nature: whether the order in which decisions are made matters.

3 Computational Problems, Algorithms, and Complexity

Even for the most restricted fragment of classical planning, there is a variety of decision and search problems one can think of. Two decision problems are the most popular: plan existence and bounded plan existence, a variant that asks whether a plan of quality under a given bound exists.
There is a larger variety of search problems, including cost-optimal planning, asking to find a plan that minimizes summed action cost or, in the case of unit-cost actions, plan length. Other search problems include satisficing planning, asking to find any plan, while cheaper plans are better; agile planning, where the only thing that matters is how quickly a plan is found; top-k planning, asking to find k plans such that no cheaper plans exist; top-quality planning, asking to find all plans up to a certain cost; and diverse planning, aiming at obtaining a diverse set of plans, considering plan quality as well. A planner that is guaranteed to only return valid plans is a sound planner, and a planner that is guaranteed to return a solution if one exists is a complete planner. While all the problems described previously are PSPACE-hard [12] in general when represented in a planning language such as PDDL, for some domains the complexity of some of these search problems can vary significantly from the others. One prominent example is the BlocksWorld domain, where cost-optimal planning is NP-complete, while agile or satisficing planning are in P. To see that, one can think of a simple two-stage policy that
can solve any BlocksWorld instance: simply unstack all blocks and put them on the table in the first stage, and incrementally build the requested goal state in the second stage. Such policies are called generalized policies and are dealt with in the generalized planning subfield. Generalized policies solve planning problems without any search, and while it may be useful to be able to find these policies, possibly with the help of language models [98], one should be careful generalizing the evidence obtained on these problems to the entire field of planning.

When looking at individual domains or problems, one must ask oneself how difficult they are to solve. In cases when the semantics of the domain are known, some computational complexity results exist for individual domains, e.g., [50, 18, 26, 47]. In other cases, a large body of research has investigated the computational complexity of different fragments of planning, mostly based on the structure of the ground planning task, such as causal graph structure, variable domain size and structure, reversibility/invertibility of actions, action precondition size, and mixtures of those, e.g., [5, 6, 12, 59, 49, 61, 25]. The rationale behind this investigation, in addition to understanding where the boundary between easy and hard problems lies, was in using the poly-time fragments for automatically deriving poly-time computable distance-to-goal approximations, also called heuristic functions [53, 27, 62]. Other difficulty approximations include various notions of width, meant to capture how hard a problem is to solve, with the goal of exploiting the property algorithmically [13, 71, 22, 79].

In order to solve these problems, a variety of methods was introduced, starting with the famous STRIPS algorithm [30], a total-order planner searching backward from the goal, keeping a stack of subgoals and iteratively resolving the top subgoal.
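Returning to the BlocksWorld example above, the two-stage generalized policy can be written down in a few lines. The sketch below uses a simplified representation in which a state maps each block to what it rests on (illustrative names, not standard PDDL semantics):

```python
# Two-stage generalized policy for BlocksWorld: first unstack everything
# onto the table (topmost blocks first), then build each goal tower
# bottom-up. No search is performed.

def two_stage_policy(state, goal):
    """state, goal: dict block -> 'table' or the block it rests on."""
    plan = []
    # Stage 1: put every stacked block on the table, clear blocks first.
    while any(under != "table" for under in state.values()):
        clear = [b for b, under in state.items()
                 if under != "table" and b not in state.values()]
        b = clear[0]
        plan.append(("unstack", b, state[b]))
        state[b] = "table"
    # Stage 2: build the goal towers bottom-up.
    placed = set()
    while placed != set(goal):
        for b, under in goal.items():
            if b not in placed and (under == "table" or under in placed):
                if under != "table":
                    plan.append(("stack", b, under))
                state[b] = under
                placed.add(b)
    return plan

# Tower c-on-b-on-a; goal: a-on-b-on-c.
state = {"a": "table", "b": "a", "c": "b"}
goal  = {"c": "table", "b": "c", "a": "b"}
print(two_stage_policy(state, goal))
# [('unstack', 'c', 'b'), ('unstack', 'b', 'a'), ('stack', 'b', 'c'), ('stack', 'a', 'b')]
```

The plan length is linear in the number of blocks, which is exactly why satisficing BlocksWorld is in P even though cost-optimal BlocksWorld is NP-complete.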
The algorithm is sound but incomplete for satisficing planning, yet it was a powerful planner for the 70s nonetheless. The 80s gave rise to methods based on partial-order planning, a search in the space of partial plans, a sound and complete approach to satisficing planning. The 90s were the Golden Age of classical planning, introducing four different approaches: GraphPlan, SAT-based planning, heuristic search planning, and model checking planning. GraphPlan [9] is based on constructing a graph containing all possible parallel plans up to a certain length in a forward manner and then extracting a plan by searching the graph backward from the goal. This is a sound approach to satisficing planning, with completeness obtained by restarting with an increased length bound. The other three approaches are not just sound and complete; importantly, they can produce optimal solutions. SAT-plan [65] exploited SAT solvers, transforming the planning task into a boolean formula; by iteratively increasing the plan length bound, it allowed optimizing for plan length. Model checking planning [15] constructed a layered graph, compactly representing multiple states in a layer with compact data structures for representing boolean functions; then, a backward search is performed from the goal to find a plan. Finally, the most popular approach to date is heuristic search planning, a forward
search in the problem state space guided by a heuristic function automatically extracted from the problem description. Over the years, a large variety of search algorithms and heuristic functions was introduced, some guaranteeing the obtained solutions to be optimal. One such famous algorithm is A∗, a best-first search algorithm that expands the search nodes in the order of their f = g + h values, where g is the cost of getting to the node from the initial state and h is the heuristic function value for the node. When the heuristic function is admissible (guaranteed not to over-estimate the true cost of getting to the goal), the algorithm is guaranteed to produce cost-optimal plans. One additional property of the algorithm is that it is optimally efficient [20], meaning there cannot be any algorithm in its class that can do better than A∗. When going beyond classical planning problems, popular approaches include the aforementioned transformations to classical planning and use of classical planners, as well as the use of more complex heuristic search algorithm families, such as MCTS/UCT or And/Or best-first search.

Another important aspect is the complexity of solution validation. In classical planning, solutions (aka plans) are sequences of actions applicable in the initial state that result in a goal state. Validating whether a sequence of actions is a plan takes poly-time, but in order to validate that a plan is cost-optimal, one needs to prove that there is no plan of smaller cost, which is PSPACE-complete. In HTN planning, solution validation is NP-hard. For non-deterministic planning, solutions are typically represented as policies mapping states to actions, or compactly as controllers, and their validation is poly-time in the solution size.

Common Pitfall 1. Historically, proposed planning algorithms were at least sound, albeit sometimes incomplete, meaning that when they produce a solution, this solution is guaranteed to be correct.
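For concreteness, here is a generic A∗ sketch on a toy grid problem with the admissible Manhattan-distance heuristic (an illustrative sketch, not a planner implementation; all names are made up):

```python
# Generic A* best-first search: expand nodes in order of f = g + h.
# With an admissible h, the first goal popped carries a cost-optimal plan.
import heapq

def astar(start, is_goal, successors, h):
    """successors(s) yields (action, next_state, cost) triples."""
    frontier = [(h(start), 0, start, [])]      # entries: (f, g, state, plan)
    best_g = {start: 0}
    while frontier:
        f, g, s, plan = heapq.heappop(frontier)
        if is_goal(s):
            return plan, g
        for a, t, c in successors(s):
            if t not in best_g or g + c < best_g[t]:
                best_g[t] = g + c
                heapq.heappush(frontier, (g + c + h(t), g + c, t, plan + [a]))
    return None, float("inf")

# 4x4 grid with unit-cost moves; Manhattan distance to (3, 3) is admissible.
moves = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}
succ = lambda s: [(a, (s[0] + dx, s[1] + dy), 1)
                  for a, (dx, dy) in moves.items()
                  if 0 <= s[0] + dx <= 3 and 0 <= s[1] + dy <= 3]
h = lambda s: abs(s[0] - 3) + abs(s[1] - 3)
plan, cost = astar((0, 0), lambda s: s == (3, 3), succ, h)
print(cost)  # 6
```

Note that validating the returned plan (replaying the actions and checking the goal) is cheap, whereas certifying its optimality without the admissibility guarantee would not be.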
Naturally, an implementation may include bugs, and therefore it is good practice to validate the solutions produced. Unsound algorithms do not have the guarantee of outputting only correct solutions, and therefore must be followed by validation.

Common Pitfall 2. It is common practice to compare planners that target the same computational problem. In most cases, a comparison between planners built for different computational problems (and therefore providing different guarantees on the produced plans) is less meaningful.

Common Pitfall 3. It is important to know the properties of the algorithms used and of the resulting solutions. One common error is to use A∗ search with a heuristic function that has no admissibility guarantees as a cost-optimal planner. In such cases, there is no guarantee that a produced plan is cost-optimal, and validating cost-optimality can be extremely resource-consuming. Further, while A∗ is optimally efficient for cost-optimal planning, it is extremely inefficient for satisficing planning, when no optimality guarantees are needed. This is due to the fact that it spends most of its effort attempting to prove that there are no better plans.

Common Pitfall 4. While regression from the goal [92] is a valuable tool in planning, one must know that the process creates so-called spurious states, which can be invalid
or simply unreachable from the initial state. In both cases, one should take great care with such states.

4 The Data

The majority of datasets used for evaluation of LLM-based planners fall into one of the following categories: (a) existing games repurposed for multi-step planning/search problems, (b) natural language planning problems generated specifically for LLM evaluations, (c) existing PDDL domains, and (d) natural language datasets generated from PDDL domains. Figure 1 provides an overview of various benchmarks. In this section, we offer insights into the creation of the PDDL benchmarks and highlight a few common pitfalls and misconceptions in benchmark selection for evaluations. We propose a few key considerations to guide robust and meaningful assessments of planning systems.

Common Pitfall 1. As previously mentioned, the International Planning Competitions (IPC) played a pivotal role in the development of planning systems. The intention behind the IPC was to test the planners on unknown, previously unseen benchmarks. The participants therefore submit their planners, which are run by the IPC organizers on a collection of planning problems, using the same hardware, under the same time and memory restrictions. The competition organizers therefore needed to come up with a collection of new PDDL domains every time the competition was run. These domains were meant to capture some abstraction of a real-life problem, but were often also crafted to double down on known limitations of popular approaches.

Figure 1: Benchmarks used in the literature for LLM-based planning evaluations. PDDL-based: PlanBench [109], TRAC [46], AutoPlanBench [105], ActionReasoningBench [42], ACPBench [67], IPC PDDL domains [76, 3, 28, 36, 70, 108, 106]. Natural language: TextWorld [17], Alfworld [97], Text2World [58], Travel Planner [112], Natural Plan [118]; existing games: Game of 24 [114], Mini Crosswords [114]; agent-style multi-turn: BA-CALENDAR [11], AgentBench [74], AgentBoard [78].
One example is the Gripper domain, which introduces many symmetries into the state space, making it challenging for pure forward (heuristic) search based methods. This pushed the research towards investigating search pruning techniques [31, 91, 14]. Another aspect the competition organizers need to consider is the problem instances created for each domain. These problem instances should be challenging enough for the current state of the art, but also not too challenging, so that a meaningful comparison is possible. Consequently, a collection of problem instances of increasing difficulty was created. While initially domains varied significantly in the number of instances created for a competition, the later editions were more uniform, so that good performance on one domain would not tip the scales when aggregating performance across all instances. To automate the process, a tool to automatically scale the instances was developed, allowing the creation of a uniformly sized benchmark set of instances of existing domains that are challenging for the current state of the art, for either cost-optimal, satisficing, or agile planning. Note that instances that are challenging for cost-optimal planning might be very easy for satisficing or agile planning, where no optimality is required.

Common Pitfall 2. A critical concern of evaluating models on publicly available planning domains and problem instances, such as BlocksWorld, Logistics, and grid-based IPC benchmarks, is that
the solutions are accessible through planning resources (c.f. Muise [83]). This issue is most aggravated when the benchmark is generated by scraping data from the internet. For instance, the "Game of 24" and the "Mini Crosswords" datasets [114] were generated by scraping https://4nums.com and https://www.goobix.com/crosswords/0505/, respectively. This raises the risk that answers may be included in the training data, leading to potential memorization and artificially inflated performance estimates. Indeed, the contamination study by Hu et al. [58] shows that frontier models have memorized the PDDL. This underscores the need for careful attention to data provenance when selecting benchmarks. Ideally, evaluation should be conducted on novel domains and problem instances that are guaranteed to be unseen during training. Previously, similar concerns were addressed in the planning literature by the introduction of "mystery domains", where predicates and object names were replaced with random words. The use of semantically uninformative or misleading names may introduce ambiguity for language models, potentially impairing their performance. Consequently, such domains may not be suitable for reliably evaluating the performance of language models. Hence, our practical proposal is to build generators that produce random instances for evaluations. Such generators are readily available for over 60 PDDL domains.

Common Pitfall 3. Another challenge in developing planning benchmarks is the lack of reliable ground truth and the difficulty of verifying the correctness of answers. In some domains, especially those involving open-ended reasoning, the correctness of a solution may not be binary. To overcome that challenge, certain datasets use LLMs for evaluation. For example, Butt et al. [11] and [42] use an LLM as a judge for evaluating generative tasks.
While this has become an acceptable evaluation approach for some question-answering tasks, it is not clear whether such an approach is reliable for planning problems. Another source of uncertainty in the ground truth arises from the fact that some datasets are generated by scraping the internet. In such cases, individual examples and their ground truth are rarely verified. The unreliability of the data is most pronounced when the data are generated using LLMs. For example, in an earlier version of AutoPlanBench [105], the ‘(get-pop ?x)’ action in the Movie domain is translated to "get population of an object" instead of getting a soda.

Common Pitfall 4. Greater care is required when splitting the data, as equivalence and hardness are not immediately apparent from the questions. It is standard practice to randomly split generated data into a train set and a test set for evaluating the performance of a model. While this may work in most cases, for data generated using PDDL generators a random split might not work: the PDDL generator does not ensure that the generated problem instances are all unique. The generator might produce multiple problem instances that are structurally or functionally equivalent (i.e., isomorphic). That is, two instances might differ only in superficial aspects such as object naming or variable ordering, while preserving the same underlying structure, and hence share the same solution strategy. As a result, selecting syntactically unique problem instances is not sufficient to evaluate the underlying reasoning ability of the model.
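Both the generator proposal from Common Pitfall 2 and the isomorphism concern above can be made concrete with a small sketch. The tower representation, the `instance`/`equivalent` helper names, and the brute-force permutation check are all illustrative assumptions, not part of the PDDL-generators package cited above:

```python
import itertools
import random

def random_towers(blocks, rng):
    """Randomly partition a list of blocks into ordered towers."""
    rng.shuffle(blocks)
    towers, tower = [], []
    for b in blocks:
        tower.append(b)
        if rng.random() < 0.5:  # randomly close the current tower
            towers.append(tower)
            tower = []
    if tower:
        towers.append(tower)
    return towers

def instance(n, seed):
    """A random BlocksWorld instance: (init towers, goal towers) over b1..bn."""
    rng = random.Random(seed)
    names = [f"b{i}" for i in range(1, n + 1)]
    return random_towers(list(names), rng), random_towers(list(names), rng)

def atoms(towers):
    """Flatten towers into a set of (on x y) / (ontable x) facts."""
    facts = set()
    for tower in towers:
        facts.add(("ontable", tower[0]))
        for below, above in zip(tower, tower[1:]):
            facts.add(("on", above, below))
    return facts

def equivalent(a, b):
    """Brute-force check: does some renaming of blocks map the init and goal
    facts of instance a onto those of instance b?  Feasible only for small n;
    a real deduplicator would compute a canonical form instead."""
    names_a = sorted({blk for towers in a for t in towers for blk in t})
    names_b = sorted({blk for towers in b for t in towers for blk in t})
    if len(names_a) != len(names_b):
        return False
    fa, fb = (atoms(a[0]), atoms(a[1])), (atoms(b[0]), atoms(b[1]))
    for perm in itertools.permutations(names_b):
        ren = dict(zip(names_a, perm))
        if all({(f[0], *(ren[x] for x in f[1:])) for f in pa} == pb
               for pa, pb in zip(fa, fb)):
            return True
    return False
```

Sampled instances can then be filtered so that no two retained instances satisfy `equivalent` before splitting into train and test.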
To overcome this issue, some prior works partition the train and test splits based on the number of objects or the plan length. For example, the train set might include problems with 3–7 objects, whereas the test set might include problems with 7–20 objects. This approach ensures that the test instances are structurally different and systematically more complex than the train instances. However, caution is warranted when using such partitioning strategies, as they can give a misleading impression of "correctness". For instance, consider a BlocksWorld domain where test instances are said to include 7–20 blocks. If, in practice, the plan length is clipped at 10 steps, and assuming a 4-operator BlocksWorld domain, the planner can only move a maximum of 5 blocks, since relocating a single block requires two actions. In such cases, despite the apparent increase in object count, the solutions for the test problems are structurally similar to those in the training set, undermining the intended evaluation [93].

5 Tools From The Planning Community

Recent years have seen a surge in the development of tools for working with planning problems. From off-the-shelf planners to online services to debugging software, there is a whole host of software and libraries that researchers can take advantage of. Here, we detail some of what is now available. For the manual specification of PDDL, an online editor is available for quick prototyping and solving of various planning formalisms, and an extended interface is available through the VSCode plugin for PDDL. Both of these services contain an integration with a suite of planners hosted in the cloud for free use (with limited resources). The service is open for programmatic access for anyone wishing to solve simple problems. The service is also available as an open-source project for those who wish to host a version with different resources or planners [21]. For running planners and planning software directly, planutils offers turn-key access to dozens of existing software packages, all pre-compiled [84].
This allows researchers to forgo the often fraught process of getting planners compiled and running. Not only does this include many state-of-the-art planning systems for various formalisms, but it also includes general planning software, such as model debugging [60] and plan validation [57, 44]. While planutils is a convenient way to access many tools, it is worth mentioning some of the more popular tools directly. First and foremost is the most popular planning system, Fast Downward [51], which incorporates implementations of many search algorithms, heuristics, and pruning techniques, as well as methods for storing and accessing search queues. Many state-of-the-art planners are built on top of Fast Downward and find their way into the core code. As such, invoking Fast Downward with different parameters will result in planners for different formalisms, such as cost-optimal, satisficing, and agile planning. Another popular planning system is Pyperplan. Its popularity, however, stems from its ease of use rather than its power. It is important to know that Pyperplan was created for educational purposes and should not be used for experimental comparison. One category of particularly useful planning formalisms focuses on finding multiple solutions to planning problems. Planners for
these formalisms are Forbid Iterative, K∗, and SymK. All three can be conveniently invoked from Python: the former two are offered as PyPI packages, and the latter is accessible via the Unified Planning library. Among the most useful tools for data generation are PDDL problem generators, offering customizable software to generate problem instances for over 50 unique planning benchmark domains [95]. Last, but not least, PDDL parsers and grounders allow one to parse, modify, and generate PDDL representations, as well as progress and regress through state spaces. Among the most commonly used are the pddl Python library and the Tarski parser [33], which can also be used via a lightweight wrapper, as well as the CPDDL planning library, which implements a variety of tools for task manipulation. These libraries are absolutely invaluable when one must create PDDL content programmatically.

Common Pitfall 1. It is good practice to mention which configuration is used, and it is absolutely necessary to mention which search problem is solved. For instance, saying that Fast Downward was used does not provide sufficient information about the planner: it could be a cost-optimal configuration, or a satisficing or agile one. It could also be a nonsensical configuration, like A∗ search with an inadmissible heuristic, which results in a highly inefficient way of finding non-optimal plans (refer to Common Pitfall 3 in Section 3).

Common Pitfall 2. Packages that allow one to progress from one state to another via action application are based on the process of grounding a planning task. As these tools are oriented towards creating planning tasks for efficient search, they often discard information that is redundant for the search process. Such information includes so-called static information, which does not change from one state to another, like game maps or object types.
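State progression via action application, as mentioned above, can be sketched in a few lines of self-contained STRIPS semantics. The `Action` record and fact tuples here are illustrative assumptions, not the API of the pddl, Tarski, or CPDDL libraries:

```python
from typing import FrozenSet, NamedTuple, Tuple

Fact = Tuple[str, ...]  # a ground atom, e.g. ("on", "b1", "b2")

class Action(NamedTuple):
    name: str
    pre: FrozenSet[Fact]     # preconditions
    add: FrozenSet[Fact]     # add effects
    delete: FrozenSet[Fact]  # delete effects

def progress(state: FrozenSet[Fact], action: Action) -> FrozenSet[Fact]:
    """Apply a ground action to a state under STRIPS semantics."""
    if not action.pre <= state:
        raise ValueError(f"{action.name} is not applicable")
    return (state - action.delete) | action.add

# A ground BlocksWorld 'unstack' action lifting b1 off b2.
unstack_b1_b2 = Action(
    name="unstack-b1-b2",
    pre=frozenset({("on", "b1", "b2"), ("clear", "b1"), ("handempty",)}),
    add=frozenset({("holding", "b1"), ("clear", "b2")}),
    delete=frozenset({("on", "b1", "b2"), ("clear", "b1"), ("handempty",)}),
)

# Static type facts like ("block", ...) never appear in any effect; a grounder
# tuned for search may drop them, even though an LLM-based planner may need them.
s0 = frozenset({("block", "b1"), ("block", "b2"), ("on", "b1", "b2"),
                ("ontable", "b2"), ("clear", "b1"), ("handempty",)})
s1 = progress(s0, unstack_b1_b2)
```

Here the `("block", …)` facts survive only because this sketch keeps them in the state; a grounder that prunes static information would remove them before any downstream consumer sees the task.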
While symbolic planners do not need this information once the task is grounded, LLM-based planners might find it crucial. Being aware of how the tools work may help preserve the needed information.

6 On Evaluating Planners

In this section, we put everything together to describe what typically constitutes a rigorous scientific evaluation of a planning method, or simply a planner. When proposing a new planner, it is important to clearly articulate the assumptions made in terms of the characteristics of the planning problems being addressed. As we discussed in Section 2, planning problems vary widely, each considering different assumptions. Clearly stating these assumptions allows the authors to accurately position their contribution within the relevant literature, select appropriate baselines for evaluation, and, most importantly, avoid claims that are too general and are not supported by the scope of the work. Our overview in Section 2 is by no means complete, but it can serve as a useful starting point for researchers to better understand the extensive AI Planning literature and to properly situate their contributions.

Common Pitfall 1. It is critically important to have an understanding of the computational complexity of the problem at hand. For any newly proposed planner, it is standard practice to provide the complexity analysis of the planner, along
with formal properties such as soundness, completeness, and, where applicable, optimality [64]. These considerations are particularly important when selecting baselines. In general, unsound methods should be avoided, or, at the very least, not compared directly to sound approaches, as they do not provide the same guarantees. For newly proposed domains/datasets, if no validator exists, it is important to provide a sound validator. It is usually good practice not to provide both the new planner and the new dataset on which the planner is to be tested.

Common Pitfall 2. The most common evaluation metric is the overall aggregated coverage, where the coverage per instance is 1 if solved and 0 otherwise. This is similar to the common success-rate metric. While in principle this metric works for many computational problems, in both optimal and non-optimal settings, it is most suitable for optimal settings, where all found plans should be of the same quality. For satisficing and agile settings a different metric was proposed, known as the IPC score. This metric scores the method relative to the other competitors' performance per instance. For the satisficing setting, each instance gets the value c*/c, where c* is the best among the plan costs obtained by all the competitors, and c is the cost of the plan obtained by the method. For agile planning, the time to obtain the solution is used instead of the plan cost. In cases when a search component is proposed, a measure of search effort should also be reported, such as the number of expanded nodes for optimal search algorithms and the number of generated nodes for satisficing search. It is good practice to evaluate an approach on a large variety of domains, to reduce bias towards particular domains. Further, it is common to compare methods under the same hardware and resource restrictions.

Common Pitfall 3.
When proposing planners that do not have soundness guarantees, as is common in the LLM-based planning literature [111, 115, 114, 96, 34], the planner must be paired with a sound validator, which can make the overall algorithm technically sound. It is important to separate the validation from the solution generation, reducing the possibility for error.

Common Pitfall 4. In the cost-optimal track of the IPC, where planners must provide optimal solutions, a participant that returns a non-optimal solution is disqualified, since the planner is supposed to provide guaranteed cost-optimal solutions. Recall that it is hard to validate that a solution is cost-optimal, and therefore we are meant to trust the cost-optimal planner. It does not matter how often the produced solution happens to be cost-optimal: if no such guarantee exists, it is meaningless to measure [69].

Common Pitfall 5. The search-effort comparison is meaningful only when the same search algorithm is used, as it then evaluates the relative power of the proposed heuristic function or pruning technique compared to existing ones. Further, this should be a reasonable algorithm for the problem at hand. For example, A∗ is not a reasonable choice of algorithm for non-optimal planning, as it puts most of the effort into proving optimality rather than finding a plan. If, in
addition, the proposed heuristic function is not guaranteed to avoid over-estimating the true goal cost, then this additional effort is essentially wasted, as optimality of the found solution cannot be guaranteed. Comparing search effort across algorithms can be tricky, even if they provide the same plan-quality guarantees. If the plan-quality guarantees are different, the comparison is meaningless, as the algorithms solve different problems, sometimes of different complexity (recall the discussion of BlocksWorld in Section 3). Thus, one might want to avoid claims of "Better Planning" based on such observations [69].

Common Pitfall 6. As discussed in Section 4, the choice of datasets for evaluation is equally important. First, the dataset must be relevant to the problem at hand and suitable for evaluating the proposed approach as well as the baselines. Moreover, it is important to evaluate performance across a variety of domains and to show how the proposed method scales with increasing problem-instance difficulty. For example, many existing datasets, such as those in [114], consist of instances of similar size and complexity (e.g., all instances of the 24 game are of the same size), which limits the generality of the evaluation. Second, care must be taken to decouple the proposed method from the dataset to ensure unbiased evaluation. Unfortunately, it has become common practice to introduce both a new dataset and a new method in the same paper, potentially leading to evaluation bias. IPC benchmarks were established to help mitigate this issue by providing standardized evaluation domains. Please refer to Section 4 for further discussion of pitfalls related to dataset selection.

Common Pitfall 7. A key aspect of a high-quality scientific paper is the transparency of the experimental setup. This includes clearly specifying the computational resources used, time limits, the number of LLM calls, and all the tools involved in both the proposed approach and the baseline approaches.
A frequent issue is the insufficient specification of the planner (see Common Pitfall 1 in Section 5), or failing to mention the time or memory constraints placed on the planner. Such oversights make it hard to replicate the results of the paper.

7 Conclusions

As we see more and more focus on solving planning problems using LLM-based tools, it is now more important than ever to approach this problem in a systematic and rigorous way. This would not only help us take accurate stock of the state of the art, but also prevent us from fooling ourselves into thinking we have made more progress than we actually have. One of our main proposals towards this goal is to adopt insights, methodologies, tools, and data from the automated planning community and incorporate them into the design and evaluation of our AI systems. As discussed in the paper, existing work from the automated planning community provides us with a useful framework to categorize and analyze the various flavors of planning, including identifying their computational complexities. The field has not only developed an extensive set of benchmarks that could be directly adopted by researchers working on LLM-based planners, but has also developed helpful tools and established
useful experimental protocols. In addition to making this case, this paper also details basic terminology and pointers that could act as a starting point for researchers to learn more about existing work and the state of the art. In each section, we also laid out some advice for researchers less familiar with the field and some common pitfalls that should be avoided. Ultimately, we hope this paper can serve as a handbook of insight into how planning-based research can be rigorously conducted, both for researchers and reviewers alike.

References

[1] Meysam Aghighi and Peter Jonsson. Oversubscription planning: Complexity and compilability. In Carla E. Brodley and Peter Stone, editors, Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2014), pages 2221–2227. AAAI Press, 2014.
[2] Ron Alford, Ugur Kuter, and Dana Nau. Translating HTNs to PDDL: A small amount of domain knowledge can go a long way. In Craig Boutilier, editor, Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), pages 1629–1634. AAAI Press, 2009.
[3] Fahiem Bacchus. The AIPS'00 planning competition. AI Magazine, 22(3):47–56, 2001.
[4] Fahiem Bacchus and Froduald Kabanza. Planning for temporally extended goals. Annals of Mathematics and Artificial Intelligence, 22(1–2):5–27, 1998.
[5] Christer Bäckström and Inger Klein. Planning in polynomial time: the SAS-PUBS class. Computational Intelligence, 7(3):181–197, 1991.
[6] Christer Bäckström and Bernhard Nebel. Complexity results for SAS+ planning. Computational Intelligence, 11(4):625–655, 1995.
[7] Jorge A. Baier and Sheila A. McIlraith. Planning with temporally extended goals using heuristic search. In Derek Long, Stephen F. Smith, Daniel Borrajo, and Lee McCluskey, editors, Proceedings of the Sixteenth International Conference on Automated Planning and Scheduling (ICAPS 2006), pages 342–345. AAAI Press, 2006.
[8] Jorge A. Baier and Sheila A. McIlraith. Planning with preferences.
AI Magazine, 29(4):25–36, 2008.
[9] Avrim Blum and Merrick L. Furst. Fast planning through planning graph analysis. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI 1995), pages 1636–1642. Morgan Kaufmann, 1995.
[10] Blai Bonet and Héctor Geffner. Planning with incomplete information as heuristic search in belief space. In Steve Chien, Subbarao Kambhampati, and Craig A. Knoblock, editors, Proceedings of the Fifth International Conference on Artificial Intelligence Planning and Scheduling (AIPS 2000), pages 52–61. AAAI Press, 2000.
[11] Natasha Butt, Varun Chandrasekaran, Neel Joshi, Besmira Nushi, and Vidhisha Balachandran. BenchAgents: Automated benchmark creation with agent interaction. arXiv preprint arXiv:2410.22584, 2024.
[12] Tom Bylander. The computational complexity of propositional STRIPS planning. Artificial Intelligence, 69(1–2):165–204, 1994.
[13] Hubie Chen and Omer Giménez. Act local, think global: Width notions for tractable planning. In Mark Boddy, Maria Fox, and Sylvie Thiébaux, editors, Proceedings of the Seventeenth International Conference on Automated Planning and Scheduling (ICAPS 2007), pages 73–80. AAAI Press, 2007.
[14] Yixin Chen and Guohui Yao. Completeness and optimality preserving reduction for planning. In Craig Boutilier, editor, Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), pages 1659–1664. AAAI Press, 2009.
[15] Alessandro Cimatti, Fausto Giunchiglia, Enrico Giunchiglia, and Paolo Traverso. Planning via model checking: A decision procedure for AR. In Sam Steel and
Rachid Alami, editors, Recent Advances in AI Planning. 4th European Conference on Planning (ECP 1997), volume 1348 of Lecture Notes in Artificial Intelligence, pages 130–142. Springer-Verlag, 1997.
[16] Stephen A. Cook. The complexity of theorem-proving procedures. In Michael A. Harrison, Ranan B. Banerji, and Jeffrey D. Ullman, editors, Proceedings of the 3rd Annual ACM Symposium on the Theory of Computing (STOC 1971), pages 151–158. ACM, 1971.
[17] Marc-Alexandre Côté, Akos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler. TextWorld: A learning environment for text-based games. In Computer Games: 7th Workshop, CGW 2018, Held in Conjunction with the 27th International Conference on Artificial Intelligence, IJCAI 2018, pages 41–75. Springer, 2019.
[18] Joseph C. Culberson. Sokoban is PSPACE-complete. Technical Report TR 97-02, Department of Computing Science, The University of Alberta, Edmonton, Alberta, Canada, 1997.
[19] Giuseppe De Giacomo and Moshe Y. Vardi. Linear temporal logic and linear dynamic logic on finite traces. In Francesca Rossi, editor, Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), pages 854–860. AAAI Press, 2013.
[20] Rina Dechter and Judea Pearl. Generalized best-first search strategies and the optimality of A∗. Journal of the ACM, 32(3):505–536, 1985.
[21] Yi Ding, Cam Cunningham, Christian Muise, and Nir Lipovetzky. Planning as a service. In ICAPS 2023 System Demonstrations and Exhibits, 2023.
[22] Simon Dold and Malte Helmert. Novelty vs. potential heuristics: A comparison of hardness measures for satisficing planning. In Jennifer Dy and Sriraam Natarajan, editors, Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2024), pages 20692–20699. AAAI Press, 2024.
[23] Carmel Domshlak and Jörg Hoffmann.
Probabilistic planning via heuristic forward search and weighted model counting. Journal of Artificial Intelligence Research, 30:565–620, 2007.
[24] Carmel Domshlak and Vitaly Mirkis. Deterministic oversubscription planning as heuristic search: Abstractions and reformulations. Journal of Artificial Intelligence Research, 52:97–169, 2015.
[25] Carmel Domshlak, Jörg Hoffmann, and Michael Katz. Red-black planning: A new systematic approach to partial delete relaxation. Artificial Intelligence, 221:73–114, 2015.
[26] Dorit Dor and Uri Zwick. SOKOBAN and other motion planning problems. Computational Geometry, 13:215–228, 1999.
[27] Stefan Edelkamp. Planning with pattern databases. In Amedeo Cesta and Daniel Borrajo, editors, Proceedings of the Sixth European Conference on Planning (ECP 2001), pages 84–90. AAAI Press, 2001.
[28] Stefan Edelkamp and Jörg Hoffmann. PDDL2.2: The language for the classical part of the 4th International Planning Competition. Technical Report 195, University of Freiburg, Department of Computer Science, 2004.
[29] Kutluhan Erol, James A. Hendler, and Dana S. Nau. HTN planning: Complexity and expressivity. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI 1994), pages 1123–1128. AAAI Press, 1994.
[30] Richard E. Fikes and Nils J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189–208, 1971.
[31] Maria Fox and Derek Long. The detection and exploitation of symmetry in planning problems. In Thomas Dean, editor, Proceedings of the Sixteenth International Joint
Conference on Artificial Intelligence (IJCAI 1999), pages 956–961. Morgan Kaufmann, 1999.
[32] Maria Fox and Derek Long. PDDL2.1: An extension to PDDL for expressing temporal planning domains. Journal of Artificial Intelligence Research, 20:61–124, 2003.
[33] Guillem Francés, Miquel Ramirez, and Collaborators. Tarski: An AI planning modeling framework. https://github.com/aig-upf/tarski, 2018.
[34] Kanishk Gandhi, Denise Lee, Gabriel Grand, Muxin Liu, Winson Cheng, Archit Sharma, and Noah D. Goodman. Stream of Search (SoS): Learning to search in language. arXiv:2404.03683 [cs.LG], 2024.
[35] Héctor Geffner. Functional Strips: A more flexible language for planning and problem solving. In Jack Minker, editor, Logic-Based Artificial Intelligence, volume 597 of Kluwer International Series In Engineering And Computer Science, chapter 9, pages 187–209. Kluwer, Dordrecht, 2000.
[36] Alfonso E. Gerevini, Patrik Haslum, Derek Long, Alessandro Saetti, and Yannis Dimopoulos. Deterministic planning in the fifth international planning competition: PDDL3 and experimental evaluation of the planners. Artificial Intelligence, 173(5–6):619–668, 2009.
[37] Alfonso E. Gerevini, Alessandro Saetti, and Ivan Serina. Temporal planning with problems requiring concurrency through action graphs and local search. In Ronen Brafman, Héctor Geffner, Jörg Hoffmann, and Henry Kautz, editors, Proceedings of the Twentieth International Conference on Automated Planning and Scheduling (ICAPS 2010), pages 226–229. AAAI Press, 2010.
[38] Elliot Gestrin, Marco Kuhlmann, and Jendrik Seipp. NL2Plan: Robust LLM-driven planning from minimal text descriptions. In ICAPS 2024 Workshop on Human-Aware and Explainable Planning (HAXP), 2024.
[39] Malik Ghallab, Dana S. Nau, and Paolo Traverso. Automated Planning: Theory and Practice. Elsevier, 2004. ISBN 978-1-55860-856-6.
[40] Malik Ghallab, Dana S. Nau, and Paolo Traverso. Automated Planning and Acting. Cambridge University Press, 2016.
ISBN 978-1-107-03727-4.
[41] Lin Guan, Karthik Valmeekam, Sarath Sreedharan, and Subbarao Kambhampati. Leveraging pre-trained large language models to construct and utilize world models for model-based task planning. Advances in Neural Information Processing Systems, 36:79081–79094, 2023.
[42] Divij Handa, Pavel Dolin, Shrinidhi Kumbhar, Chitta Baral, and Tran Cao Son. ActionReasoningBench: Reasoning about actions with and without ramification constraints. In The Thirteenth International Conference on Learning Representations, 2025.
[43] Peter E. Hart, Nils J. Nilsson, and Bertram Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107, 1968.
[44] Patrik Haslum. INVAL: the Independent PDDL plan Validator. https://github.com/patrikhaslum/INVAL, 2016. Accessed May 2, 2025.
[45] Patrik Haslum, Nir Lipovetzky, Daniele Magazzeni, and Christian Muise. An Introduction to the Planning Domain Definition Language, volume 13 of Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool, 2019.
[46] Weinan He, Canming Huang, Zhanhao Xiao, and Yongmei Liu. Exploring the capacity of pretrained language models for reasoning about actions and change. In ACL. Association for Computational Linguistics, 2023.
[47] Robert A. Hearn and Erik D. Demaine. PSPACE-completeness of sliding-block puzzles and other problems through the nondeterministic constraint logic model of computation. Theoretical Computer Science, 343(1–2):72–96, 2005.
[48] Malte Helmert. Decidability and undecidability results for planning with numerical state variables. In Malik Ghallab, Joachim Hertzberg, and Paolo Traverso, editors, Proceedings of
the Sixth International Conference on Artificial Intelligence Planning and Scheduling (AIPS 2002), pages 303–312. AAAI Press, 2002.
[49] Malte Helmert. Complexity results for standard benchmark domains in planning. Artificial Intelligence, 143(2):219–262, 2003.
[50] Malte Helmert. New complexity results for classical planning benchmarks. In Derek Long, Stephen F. Smith, Daniel Borrajo, and Lee McCluskey, editors, Proceedings of the Sixteenth International Conference on Automated Planning and Scheduling (ICAPS 2006), pages 52–61. AAAI Press, 2006.
[51] Malte Helmert. The Fast Downward planning system. Journal of Artificial Intelligence Research, 26:191–246, 2006.
[52] Malte Helmert. Concise finite-domain representations for PDDL planning tasks. Artificial Intelligence, 173:503–535, 2009.
[53] Jörg Hoffmann and Bernhard Nebel. The FF planning system: Fast plan generation through heuristic search. Journal of Artificial Intelligence Research, 14:253–302, 2001.
[54] Chad Hogg, Héctor Muñoz-Avila, and Ugur Kuter. HTN-MAKER: Learning HTNs with minimal additional knowledge engineering required. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI 2008), pages 950–956. AAAI Press, 2008.
[55] Chad Hogg, Ugur Kuter, and Héctor Muñoz-Avila. Learning hierarchical task networks for nondeterministic planning domains. In Craig Boutilier, editor, Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), pages 1708–1714. AAAI Press, 2009.
[56] Chad Hogg, Ugur Kuter, and Héctor Muñoz-Avila. Learning methods to generate good plans: Integrating HTN learning and reinforcement learning. In Maria Fox and David Poole, editors, Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2010), pages 1530–1535. AAAI Press, 2010.
[57] Richard Howey and Derek Long. VAL's progress: The automatic validation tool for PDDL2.1 used in the International Planning Competition.
In Stefan Edelkamp and Jörg Hoffmann, editors, Proceedings of the ICAPS 2003 Workshop on the Competition: Impact, Organisation, Evaluation, Benchmarks, 2003.
[58] Mengkang Hu, Tianxing Chen, Yude Zou, Yuheng Lei, Qiguang Chen, Ming Li, Yao Mu, Hongyuan Zhang, Wenqi Shao, and Ping Luo. Text2World: Benchmarking large language models for symbolic world model generation, 2025. URL https://arxiv.org/abs/2502.13092.
[59] Peter Jonsson and Christer Bäckström. Tractable plan existence does not imply tractable plan generation. Annals of Mathematics and Artificial Intelligence, 22(3–4):281–296, 1998.
[60] Lucas Galery Käser, Clemens Büchner, Augusto B. Corrêa, Florian Pommerening, and Gabriele Röger. Machetli: Simplifying input files for debugging. In ICAPS 2022 System Demonstrations and Exhibits, 2022.
[61] Michael Katz and Carmel Domshlak. New islands of tractability of cost-optimal planning. Journal of Artificial Intelligence Research, 32:203–288, 2008.
[62] Michael Katz and Carmel Domshlak. Implicit abstraction heuristics. Journal of Artificial Intelligence Research, 39:51–126, 2010.
[63] Michael Katz and Vitaly Mirkis. In search of tractability for partial satisfaction planning. In Subbarao Kambhampati, editor, Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016), pages 3154–3160. AAAI Press, 2016.
[64] Michael Katz, Harsha Kokel, Kavitha Srinivas, and Shirin Sohrabi. Thought of search: Planning with language models through the lens of efficiency. In Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024), pages 138491–138568, 2024.
[65] Henry Kautz and Bart Selman. Pushing the envelope: Planning, propositional logic, and stochastic search. In
Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI 1996), pages 1194–1201. AAAI Press, 1996.
[66] Emil Keyder and Héctor Geffner. Soft goals can be compiled away. Journal of Artificial Intelligence Research, 36:547–556, 2009.
[67] Harsha Kokel, Michael Katz, Kavitha Srinivas, and Shirin Sohrabi. ACPBench: Reasoning about action, change, and planning. In Julie Shah and Zico Kolter, editors, Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI 2025). AAAI Press, 2025.
[68] Ugur Kuter, Dana S. Nau, Elnatan Reisner, and Robert P. Goldman. Using classical planners to solve nondeterministic planning problems. In Jussi Rintanen, Bernhard Nebel, J. Christopher Beck, and Eric Hansen, editors, Proceedings of the Eighteenth International Conference on Automated Planning and Scheduling (ICAPS 2008), pages 190–197. AAAI Press, 2008.
[69] Lucas Lehnert, Sainbayar Sukhbaatar, DiJia Su, Qinqing Zheng, Paul McVay, Michael Rabbat, and Yuandong Tian. Beyond A∗: Better planning with transformers via search dynamics bootstrapping. In Proceedings of the First Conference on Language Modeling (COLM 2024), 2024.
[70] Carlos Linares López, Sergio Jiménez Celorrio, and Angel García Olaya. The deterministic part of the seventh international planning competition. Artificial Intelligence, 223:82–119, 2015.
[71] Nir Lipovetzky and Hector Geffner. Width and serialization of classical planning problems. In Luc De Raedt, Christian Bessiere, Didier Dubois, Patrick Doherty, Paolo Frasconi, Fredrik Heintz, and Peter Lucas, editors, Proceedings of the 20th European Conference on Artificial Intelligence (ECAI 2012), pages 540–545. IOS Press, 2012.
[72] Michael L. Littman. Probabilistic propositional planning: Representations and complexity. In Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI 1997), pages 748–754. AAAI Press, 1997.
[73] Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. LLM+P: Empowering large language models with optimal planning proficiency. CoRR, abs/2304.11477, 2023.
[74] Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. AgentBench: Evaluating LLMs as agents. CoRR, abs/2308.03688, 2023.
[75] Derek Long and Maria Fox. Exploiting a graphplan framework in temporal planning. In Enrico Giunchiglia, Nicola Muscettola, and Dana Nau, editors, Proceedings of the Thirteenth International Conference on Automated Planning and Scheduling (ICAPS 2003), pages 51–62. AAAI Press, 2003.
[76] Derek Long, Henry Kautz, Bart Selman, Blai Bonet, Héctor Geffner, Jana Koehler, Michael Brenner, Jörg Hoffmann, Frank Rittinger, Corin R. Anderson, Daniel S. Weld, David E. Smith, and Maria Fox. The AIPS-98 planning competition. AI Magazine, 21(2):13–33, 2000.
[77] Inês Lynce and Joël Ouaknine. Sudoku as a SAT problem. In AI&M. Citeseer, 2006.
[78] Chang Ma, Junlei Zhang, Zhihao Zhu, Cheng Yang, Yujiu Yang, Yaohui Jin, Zhenzhong Lan, Lingpeng Kong, and Junxian He. AgentBoard: An analytical evaluation board of multi-turn LLM agents. CoRR, abs/2401.13178, 2024.
[79] Jiayuan Mao, Tomas Lozano-Perez, Joshua B. Tenenbaum, and Leslie Pack Kaelbling. What planning problems can a relational neural network solve? In Proceedings of the Thirty-Seventh Annual Conference on
|
https://arxiv.org/abs/2505.21674v1
|
Neural Information Processing Systems (NeurIPS 2023) , pages 59522–59542, 2023. [80] Robert Mattmüller, Manuela Ortlieb, Malte Helmert, and Pascal Bercher. Pattern database heuristics for fully observable nondeterministic planning. In Ronen Brafman, Héctor Geffner, Jörg Hoffmann, and Henry Kautz, editors, Proceedings of the Twentieth International Con- ference on Automated Planning and Scheduling (ICAPS 2010) , pages 105–112. AAAI Press, 2010. [81] Mausam and Andrey Kolobov. Planning with Markov Decision Processes: An AI Perspective . Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool, 2012. [82] Drew McDermott, Malik Ghallab, Adele Howe, Craig Knoblock, Ashwin Ram, Manuela Veloso, Daniel Weld, and David Wilkins. PDDL – The Planning Domain Definition Language – Version 1.2. Technical Report CVC TR-98-003/DCS TR-1165, Yale Center for Computational Vision and Control, Yale University, 1998. [83] Christian Muise. Planning.Domains. In ICAPS 2016 System Demonstrations and Exhibits , 2016. https://api.planning.domains. [84] Christian Muise, Florian Pommerening, Jendrik Seipp, and Michael Katz. Planutils: Bringing planning to the masses. In ICAPS 2022 System Demonstrations and Exhibits , 2022. 14 [85] Christian J. Muise, Sheila A. McIlraith, and J. Christopher Beck. Improved non-deterministic planning by exploiting state relevance. In Lee McCluskey, Brian Williams, José Reinaldo Silva, and Blai Bonet, editors, Proceedings of the Twenty-Second International Conference on Automated Planning and Scheduling (ICAPS 2012) , pages 172–180. AAAI Press, 2012. [86] Christian J. Muise, Sheila A. McIlraith, and Vaishak Belle. Non-deterministic planning with conditional effects. In Steve Chien, Alan Fern, Wheeler Ruml, and Minh Do, editors, Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014) , pages 370–374. AAAI Press, 2014. [87] Dana S. Nau, Tsz-Chiu Au, Okhtay Ilghami, Ugur Kuter, J. 
William Murdock, Dan Wu, and Fusun Yaman. SHOP2: An HTN planning system. Journal of Artificial Intelligence Research , 20:379–404, 2003. [88] James Oswald, Kavitha Srinivas, Harsha Kokel, Junkyu Lee, Michael Katz, and Shirin Sohrabi. Large language models as planning domain generators. In Sara Bernardini and Christian Muise, editors, Proceedings of the Thirty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2024) , pages 423–431. AAAI Press, 2024. [89] Hector Palacios and Hector Geffner. Compiling uncertainty away in conformant planning problems with bounded width. Journal of Artificial Intelligence Research , 35:623–675, 2009. [90] Chiara Piacentini, Maria Fox, and Derek Long. Planning with numeric timed initial fluents. In Blai Bonet and Sven Koenig, editors, Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI 2015) , pages 4196–4197. AAAI Press, 2015. [91] Nir Pochter, Aviv Zohar, and Jeffrey S. Rosenschein. Exploiting problem symmetries in state- based planners. In Wolfram Burgard and Dan Roth, editors, Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2011) , pages 1004–1009. AAAI Press, 2011. [92] Raymond Reiter. Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems . MIT Press, 2001. [93] Swarnadeep Saha, Archiki Prasad, Justin Chen, Peter Hase, Elias Stengel-Eskin, and Mohit Bansal. System 1.x: Learning to balance fast and slow planning with language models. In Proceedings of the Thirteenth International Conference on Learning Representations (ICLR 2025) . OpenReview.net, 2025. [94] Scott Sanner. Relational dynamic influence diagram language (RDDL): Language description,
|
https://arxiv.org/abs/2505.21674v1
|
2010. [95] Jendrik Seipp, Álvaro Torralba, and Jörg Hoffmann. PDDL generators. https://doi. org/10.5281/zenodo.6382173 , 2022. [96] Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Lu Wang, Ruoxi Jia, and Ming Jin. Al- gorithm of thoughts: Enhancing exploration of ideas in large language models. CoRR , abs/2308.10379, 2023. [97] Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. In Proceedings of the Nineth International Conference on Learning Representations (ICLR 2021) . OpenReview.net, 2021. [98] Tom Silver, Soham Dan, Kavitha Srinivas, Josh Tenenbaum, Leslie Pack Kaelbling, and Michael Katz. Generalized planning in PDDL domains with pretrained large language mod- els. In Jennifer Dy and Sriraam Natarajan, editors, Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2024) , pages 20256–20264. AAAI Press, 2024. [99] Helmut Simonis. Sudoku as a constraint problem. In CP Workshop on modeling and refor- mulating Constraint Satisfaction Problems , volume 12, pages 13–27. Citeseer Sitges, Spain, 2005. [100] Aravinda Prasad Sistla and Edmund Melson Clarke. The complexity of propositional linear temporal logics. Journal of the ACM , 32(3):733–749, 1985. [101] David E. Smith. Choosing objectives in over-subscription planning. In Shlomo Zilberstein, Jana Koehler, and Sven Koenig, editors, Proceedings of the Fourteenth International Conference on Automated Planning and Scheduling (ICAPS 2004) , pages 393–401. AAAI Press, 2004. 15 [102] David E. Smith and Daniel S. Weld. Conformant graphplan. In Charles Rich and Jack Mostow, editors, Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI 1998) , pages 889–896. AAAI Press, 1998. [103] Shirin Sohrabi, Jorge A. Baier, and Sheila A. McIlraith. HTN planning with preferences. 
In Craig Boutilier, editor, Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009) , pages 1790–1797. AAAI Press, 2009. [104] Siddharth Srivastava, Shlomo Zilberstein, Neil Immerman, and Hector Geffner. Qualitative numeric planning. In Wolfram Burgard and Dan Roth, editors, Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2011) , pages 1010–1016. AAAI Press, 2011. [105] Katharina Stein, Daniel Fišer, Jörg Hoffmann, and Alexander Koller. Automating the genera- tion of prompts for llm-based action choice in pddl planning. In Proceedings of the Fourteenth International Conference on Automated Planning and Scheduling (ICAPS 2025) . AAAI Press, 2025. [106] Ayal Taitler, Ron Alford, Joan Espasa, Gregor Behnke, Daniel Fišer, Michael Gimelfarb, Florian Pommerening, Scott Sanner, Enrico Scala, Dominik Schreiber, Javier Segovia-Aguas, and Jendrik Seipp. The 2023 International Planning Competition. AI Magazine , 45(2):280–296, 2024. doi: 10.1002/aaai.12169. [107] Marcus Tantakoun, Xiaodan Zhu, and Christian Muise. LLMs as planning modelers: A survey for leveraging large language models to construct automated planning models. CoRR , abs/2503.18971, 2025. [108] Mauro Vallati, Lukáš Chrpa, and Thomas L. McCluskey, editors. The Eighth International Planning Competition: Description of Participant Planners of the Deterministic Track , 2014. [109] Karthik Valmeekam, Matthew Marquez, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. Planbench: An extensible benchmark for evaluating large language models on planning and reasoning about change. In Proceedings of the Thirty-Seventh Annual Conference on Neural Information Processing Systems (NeurIPS 2023) , pages 38975–38987, 2023. [110] Menkes van den Briel, Romeo Sanchez,
|
https://arxiv.org/abs/2505.21674v1
|
Minh B. Do, and Subbarao Kambhampati. Effec- tive approaches for partial satisfaction (over-subscription) planning. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI 2004) , pages 562–569. AAAI Press, 2004. [111] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V . Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In Proceedings of the Thirty-Sixth Annual Conference on Neural Information Processing Systems (NeurIPS 2022) , pages 24824–24837, 2022. [112] Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, and Yu Su. Travelplanner: A benchmark for real-world planning with language agents. In ICML . OpenReview.net, 2024. [113] Qiang Yang, Kangheng Wu, and Yunfei Jiang. Learning action models from plan examples using weighted MAX-SAT. Artificial Intelligence , 171:107–143, 2007. [114] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. In Proceedings of the Thirty-Seventh Annual Conference on Neural Information Processing Systems (NeurIPS 2023) , pages 11809–11822, 2023. [115] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023) . OpenReview.net, 2023. [116] Sungwook Yoon, Alan Fern, Robert Givan, and Subbarao Kambhampati. Probabilistic planning via determinization in hindsight. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI 2008) , pages 1010–1016. AAAI Press, 2008. [117] Håkan L. S. Younes, Michael L. Littman, David Weissman, and John Asmuth. The first probabilistic track of the international planning competition. 
Journal of Artificial Intelligence Research , 24:851–887, 2005. 16 [118] Huaixiu Steven Zheng, Swaroop Mishra, Hugh Zhang, Xinyun Chen, Minmin Chen, Azade Nova, Le Hou, Heng-Tze Cheng, Quoc V . Le, Ed H. Chi, and Denny Zhou. NATURAL PLAN: benchmarking llms on natural language planning. CoRR , abs/2406.04520, 2024. [119] Max Zuo, Francisco Piedrahita Velez, Xiaochen Li, Michael L. Littman, and Stephen H. Bach. Planetarium: A rigorous benchmark for translating text to structured planning languages. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2025) , pages 11223– 11240, 2025. 17
|
https://arxiv.org/abs/2505.21674v1
|
arXiv:2505.21677v1 [cs.LG] 27 May 2025

What happens when generative AI models train recursively on each other's generated outputs?

Hung Anh Vu, Galen Reeves, Emily Wenger∗
Department of Electrical and Computer Engineering, Duke University

Abstract

The internet is full of AI-generated content while also serving as a common source of training data for generative AI (genAI) models. This duality raises the possibility that future genAI models may be trained on other models' generated outputs. Prior work has studied the consequences of models training on their own generated outputs, but limited work has considered what happens when models ingest content produced by other models. Given society's increasing dependence on genAI tools, understanding the downstream effects of such data-mediated model interactions is critical. To this end, we provide empirical evidence for how data-mediated interactions might unfold in practice, develop a theoretical model of this interactive training process, and show experimentally the possible long-term results of such interactions. We find that data-mediated interactions can benefit models by exposing them to novel concepts perhaps missed in their original training data, but can also homogenize their performance on shared tasks.

1 Introduction

Since the release of ChatGPT in 2022, generative AI (genAI) models have exploded in popularity. Now capable of generating highly realistic text, images, and videos, these models have been widely adopted for use cases ranging from creative idea generation [4, 28] to healthcare support [52] to national security settings [30, 33]. Given the significant uptick in genAI use across numerous industries, it is clear that this technology is here to stay. Consequently, interrogating the ways genAI models could evolve—in positive or harmful ways—is critical. With few exceptions, today's large-scale genAI models are trained on massive datasets sourced from the internet.
Widely-accepted scaling laws for model performance say that training on more data aids learning [40], and the internet provides a rich, cheap, and ever-evolving source of training data. Although whitepapers for more recent genAI models withhold details about training set composition—potentially due to ongoing litigation over copyright concerns—evidence from earlier whitepapers indicates that scraped data was used to train models like Llama, Gemini, Phi, the GPT series, Claude, and others [3, 5, 7, 14, 27, 38, 58]. Beyond privacy and copyright concerns, training on scraped data could have other downsides. Prior work has noted that genAI models trained recursively on their own generated outputs "collapse," becoming unable to generate meaningful content [11, 34, 46, 56]. This scenario is feasible, since AI-generated content abounds online [57] and could become part of future training datasets. However, subsequent work has proposed ways to mitigate collapse via reuse of non-AI-generated data in subsequent training iterations [23, 24, 29, 41]. Model collapse remains an active research area [55]. Yet prior work studying the dynamics of model collapse has overlooked another reality: the internet teems with content from many genAI models.

∗Corresponding author: emily.wenger@duke.edu. Preprint. Under review.

Today's most popular models have millions of users [8, 32, 53], who leverage generative AI tools to create online content like web pages and social media posts [1]. Recent work showed that up to 40% of content on popular sites like
Quora is now AI-generated [57]. Given the increasing availability of these models for a variety of public-facing uses [2, 4, 28], AI-generated content from many different models will continue to proliferate. The standard practice of training on scraped internet data and the increasing prevalence of AI-generated content online suggest the strong possibility that future generative AI models will be trained on other models' outputs. Yet this aspect of model training has received considerably less attention in prior work. Given the widespread adoption of generative AI models in critical settings like healthcare and national security, this phenomenon and its downstream effects ought to be vetted to ensure models remain helpful and trustworthy.

Contributions. To address this need, this work theoretically derives and experimentally investigates the long-term evolutionary behavior of models trained on each other's data. Specifically, we:

• Develop a framework, grounded in real-world evidence, describing data-mediated interactions between genAI models.
• Derive concise formulas describing the dynamics of interactive training under varied regimes.
• Run experiments on large language models to understand how data-mediated interactions affect model performance in practice.

Both our theoretical analysis and experiments show that when training with a mixture of real and synthetic data, the implicit interaction between heterogeneous models and datasets can have both positive and negative impacts. At a high level, the reasons for this can be anticipated from well-known concepts in statistical learning theory: recursive training on the same data is bad (e.g., overfitting and model "collapse"), but training on novel data, even if synthetic, can boost performance (e.g., transfer learning). Our experimental results provide concrete evidence that these phenomena can occur simultaneously.
Our theory shows that the full range of observed behaviors can be modeled accurately via a linear dynamical system.

2 Related Work

Model collapse is a recently observed phenomenon in large-scale generative text and image models. It referred—in its earliest form—to the phenomenon of models performing much worse when trained on their own generated outputs [11, 24, 29, 34, 46, 49, 56, 64]. Theoretical and empirical results from these works show that models, if trained on generated outputs from their prior versions, slowly degrade in performance as generations progress. One way this manifests is in models forgetting the tails of their original (real) training data, since generated content tends not to contain rare content from the original training data. Training repeatedly on truncated, synthetic data leads the model to forget the richness of its original distribution, resulting in degraded behavior (at best) or total failure (at worst).

Mitigating model collapse. Despite the dire predictions of these works, subsequent work has proposed a simple mitigation strategy: instead of discarding all prior (human-generated) training data, retain some fraction of it while augmenting it with generated data. Numerous works have observed that this choice to augment rather than discard the original training dataset results in a bounded error in future models, avoiding collapse [31, 41, 45]. Although most of these results were obtained on small models, recent work claims that the observed error bound of π²/6 exists for all models [23]. Further work [55] summarizes current
research on collapse and calls for greater clarity and precision in discussing this phenomenon.

Transfer learning and other model interactions. Significant prior work has studied the phenomenon of transfer learning, in which information learned by one model is passed to another, often by reusing the trained weights of a "teacher" model to initialize a "student" model [71]. Some prior work has further considered the use of synthetic data in transfer learning [15, 42, 61]. Our work is distinct from transfer learning due to its focus on unintentional data-mediated interactions between models. Furthermore, limited work has specifically examined the long-term effects of models intentionally training on each other's data. One work [69] considers the setting where a generative model is trained on data generated by other models, but does not consider long-term effects of such interactions among multiple models. Another recent work [37] studies interacting Large Language Model agents (e.g., LLMs with extra capabilities like web browsing) through the lens of Bayesian social learning and microeconomics, but does not focus specifically on data-mediated interactions between models.

Table 1: Examples of training data listed for prominent language models. Rows cover Chinchilla [36], GLaM [26], GPT [50], GPT-2 [51], GPT-3 [16], LaMDA [60], Llama 1 [62], PaLM [17], and Phi 2 [6]; columns cover CommonCrawl, WebText, Github, Wikipedia, Books, ArXiv, StackExchange, and News. Cell markings distinguish explicitly stated use from expected significant overlap (e.g., GLaM [26] and PaLM [17] are trained on Microsoft's internal re-creation of WebText [16]). We only include models for which training data sources are explicitly stated. See Table 3 in Appendix for information on other prominent models.

3 How Today's Large-Scale Generative AI Models Are Trained

We first establish why we believe that data-mediated interactions between models—e.g., instances of models training on each other's generated outputs—are realistic and worthy of study.
To do this, we comb through academic literature and whitepapers describing today's large-scale genAI models, focusing specifically on large language models, to understand how models are trained, what data they are trained on, and how data is collected and used for model updates. This section sets the stage for the formalization, theory, and experiments in the rest of the paper. Our literature review reveals that most of today's models follow a three-step update process. First, models are pretrained on a large corpus of data; then they are fine-tuned to teach specific behaviors and/or to align them with human preferences. Finally, they are later updated, either to teach new behaviors or to refresh knowledge. As we describe these steps in detail below, we highlight specific realities or assumptions that have been largely overlooked or not made explicit by prior work.

Step 1: Pretraining. Following well-established scaling laws linking model performance and dataset size [40], today's large-scale generative AI models are trained on massive datasets often scraped from the internet. Early versions of GPT, Llama, and PaLM all report being trained on scraped datasets like Common Crawl, ArXiv, Github, Wikipedia, and/or Stack Exchange [16, 17, 62]. Table 1 summarizes publicly documented use of scraped internet data in model training.

Reality: large-scale genAI models are pretrained on internet-scraped datasets.
Another striking fact emerges from the categorization of training data in Table 1: large-scale model training datasets overlap. For example, GPT, Jamba, Llama, PaLM, and Phi are all trained on subsets of CommonCrawl [21], while GPT, Llama, and PaLM are all trained on Wikipedia and Books datasets. Several other models have other points of training data overlap. This suggests that:

(Previously overlooked) reality: internet-scraped AI training datasets overlap substantially.

This overlap in training datasets has not been meaningfully explored in the literature and may have important implications for how models evolve.

Step 2: Fine-tuning. Variously called fine-tuning or alignment, this phase leverages proprietary methods or data to tweak model behaviors in ways model providers believe are helpful. For example, the Llama fine-tuning phase [27] involves many rounds of reinforcement learning with human feedback (RLHF) to stamp out negative model behaviors, while Phi [5] was fine-tuned on proprietary synthetic data tailored to cover "gaps" in the original corpus related to mathematical reasoning.

Reality: large-scale genAI models are (typically) fine-tuned using proprietary datasets/methods.

Step 3: Model updates. A key assumption of prior literature on model collapse is that models are updated. Updating generally involves training a new model using similar architecture, techniques, and datasets to prior versions of the model, initialized with weights from the prior version. This matches reality—the literature documents numerous "families" of models that directly descend from one another (cf. Llama 1, 2, and 3 [27, 62, 63]; GPT 1, 2, 3, and 4 [7, 16, 50, 51]; Phi 1.5, 2, and 3 [5, 6, 43]).

Model | v1 | v2 | v3 | v4
Llama | [62]: ArXiv, Books, Common Crawl, C4, Wikipedia, StackExchange | [63]: "A new mix of publicly available online data." | [27]: "A variety of data sources containing knowledge until the end of 2023." | [9]: "A mix of publicly available, licensed data and information from Meta's products and services."
GPT | [50]: BooksCorpus | [51]: WebText | [16]: CommonCrawl, WebText2, Books, Books2, Wikipedia | [7]: No info provided
Phi | [43]: The Stack, Stack Overflow, synthetic "textbook" data | [6]: The Stack, Stack Overflow, synthetic "textbook" data, filtered Common Crawl | [5]: "publicly available web data . . . and synthetic LLM-generated data" | N/A

Table 2: Evidence from Llama, GPT, and Phi suggests reuse of old training data and collection of additional data to train new model generations. Datasets reused across models and generations are highlighted. We start with Phi 1.5, the first version of Phi designed for general NLP tasks; Phi 1 was designed for coding tasks.

Prior works on model collapse have proposed several ways training data from prior model generations could be re-used (or not) during model updates:

• The replace scenario, first proposed in [56], assumes that a model trainer completely replaces their training data at each update, using only generated data output by the prior version of the model. As recent work has pointed out, this edge case is interesting but impractical [55].
• The accumulate scenario [31] assumes model trainers augment their original training data with additional data that may contain AI-generated content. The dataset grows linearly at each update.
• Finally, the accumulate-and-subsample scenario [41] echoes the accumulate scenario but assumes a bounded dataset size. It subsamples a
fixed subset from the original and accumulated data at each model update. This scenario acknowledges the real-world compute limitations model trainers face.

We believe that the accumulate-and-subsample paradigm best reflects reality, so we will leverage it in our work. We support this opinion with evidence from three well-documented model families: Llama, GPT, and Phi. Table 2 records the training data used in publicly disclosed generations of these models. As the table shows, trainers re-use some prior training data for model updates, supplementing it with additional content derived from the web. Whitepapers for models published after 2023 do not contain specific training data information, perhaps because of pending litigation over copyright concerns in training data use, e.g., [48], making it difficult to trace data re-use after this point. However, the language in these papers suggests collection of additional online data at each update.

Reality: model trainers reuse data from prior generations and supplement this with new (internet-scraped) content when updating models.

However, most prior work on model collapse still overlooks a fundamental reality in model updates: future models trained on internet-sourced content will be trained on outputs from other generative AI models, not merely their own. Already, the internet is filled with generated content from various models [57]. As model trainers collect new data to facilitate model updates, internet-sourced data will inevitably contain content from other generative models.

(Previously overlooked) reality: at each update step, models may be trained on their own and other models' generated outputs.

4 Formalizing An Iterative, Interactive Model Training Pipeline

Section 3 provides empirical evidence for two realities overlooked by prior studies: internet-scraped training datasets used for initial training may have substantial overlap, and models may be updated using each other's generated outputs.
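The three data-reuse scenarios from Section 3 can be sketched concretely. The function below is an illustration of ours, not code from the cited works: the dataset sizes, the fixed per-update budget, and uniform subsampling are illustrative assumptions.

```python
import random

def update_dataset(data, generated, scenario, budget=1000, rng=None):
    """Next-generation training set under one of the three data-reuse scenarios."""
    rng = rng or random.Random(0)
    if scenario == "replace":               # keep only newly generated data [56]
        return list(generated)
    if scenario == "accumulate":            # dataset grows linearly [31]
        return list(data) + list(generated)
    if scenario == "accumulate_subsample":  # bounded dataset size [41]
        pool = list(data) + list(generated)
        return pool if len(pool) <= budget else rng.sample(pool, budget)
    raise ValueError(f"unknown scenario: {scenario}")

# Toy run: 'R' marks a real sample, 'S' a synthetic one.
for scenario in ("replace", "accumulate", "accumulate_subsample"):
    data = ["R"] * 1000                     # initial (all-real) training set
    for t in range(5):
        synthetic = ["S"] * 500             # content generated at update t
        data = update_dataset(data, synthetic, scenario)
    print(scenario, len(data), round(data.count("R") / len(data), 2))
```

In this toy run, replace discards all real data after one update, accumulate grows without bound while diluting the real fraction, and accumulate-and-subsample keeps a bounded dataset whose real fraction decays over updates.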
Both factors could significantly influence model evolution, and it is imperative to understand how they influence models over the long term. To study the effect of these two factors on model evolution, we propose a general workflow in which multiple entities regularly update their models using a mix of private, public, and generated data. Based on the evidence of §3, we consider three basic types of training/update data:

• D∗: Public data used during initial training/updates by multiple entities (real data only).
• D̃_k: Private data used only by entity k for initial training/updates (real data only).
• D_t = {D_{t1}, D_{t2}, . . . , D_{tK}}: Public data used for updates at time t by multiple entities (synthetic data). D_{tk} is data generated by the k-th entity based on model θ̂_{t−1,k}.

Figure 1: Our dataset update scheme, parameterized by α and β. In the figure, β is the proportion of public vs. proprietary data in the initial dataset, and α is the proportion of new vs. initial data in the update dataset. This paradigm best aligns with evidence from the literature given in §3 and strongly indicates that interactions between models, facilitated by training on others' generated data, are an important consideration for empirical and theoretical work on model evolution.

Mapping these to realistic scenarios, D∗ could be a public dataset like Common Crawl; D̃_k could be a private dataset of math problems curated by entity
k; and D_t could be an internet scrape from after initial model training. We weight the relative impact of these data types by the ratios α and β:

• β, 0 ≤ β ≤ 1, is the relative size of the initial public dataset D∗ compared to the initial private dataset D̃_k. This fraction remains constant if/when initial data is reused for updates.
• α, 0 ≤ α ≤ 1, is the fraction of new data introduced at generation t, relative to the amount of initial data reused (following the "accumulate and subsample" paradigm of prior work [41]).

Interactive training workflow. We consider K entities, each seeking to train or update their own generative AI model. In the initial phase of training, denoted by time t = 0, each entity k trains its model based on a combination of a publicly available dataset D∗ as well as its own private dataset D̃_k. The trained model is represented generally by a parameter θ̂_{t,k}, i.e.,

\[ \hat\theta_{0,k} = \Phi_{0,k}(D_*, \tilde D_k), \qquad k = 1, \ldots, K, \]

where each Φ_{0,k} represents a generic training algorithm. For model updates at stages t > 0, model parameters are updated via:

1. New public data D_t is generated uniformly at random using the most recent versions of the models. Specifically, the data are sampled i.i.d. according to the mixture

\[ \frac{1}{K} \sum_{k=1}^{K} P_{k, \hat\theta_{t-1,k}}, \]

where P_{k,θ} denotes the generative model used by the k-th entity.

2. This data is placed online and collected by entities as training data for the next model update.

3. Each entity composes its training data for the next update, using a mix of the initial dataset (D∗, D̃_k) and newly collected data D_t. Contributions from each dataset are weighted by α and β.

4. Each entity k = 1, . . . , K updates its model parameters via

\[ \hat\theta_{t,k} = \Phi_{t,k}(\hat\theta_{t-1,k}, D_*, \tilde D_k, D_t). \]

Here, Φ_{t,k} is a training algorithm that depends on the previous model parameter θ̂_{t−1,k} as well as the data. As before, training may employ subsampling, weighting, and randomization. In this workflow, entities interact through the release of publicly available synthetic data produced by prior generations of other entities' models.
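The four-step workflow above can be simulated end to end. The following is a minimal sketch under assumptions of ours, not the paper's experimental setup: we instantiate the generic training maps Φ as weighted least squares in a linear-Gaussian model (anticipating the setting of Section 5), and the sizes K, d, n and weights α, β are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, n = 3, 4, 200            # entities, feature dimension, samples per dataset
sigma, alpha, beta = 0.1, 0.5, 0.5
theta_true = rng.normal(size=d)

def draw(theta, n):
    """Sample a dataset (X, y) with y | x ~ N(x^T theta, sigma^2)."""
    X = rng.normal(size=(n, d))
    return X, X @ theta + sigma * rng.normal(size=n)

def fit(weighted_data):
    """Weighted least squares over a list of (X, y, weight) triples."""
    G = sum(w * X.T @ X for X, _, w in weighted_data)
    b = sum(w * X.T @ y for X, y, w in weighted_data)
    return np.linalg.solve(G, b)

D_star = draw(theta_true, n)                       # shared public data D_*
D_priv = [draw(theta_true, n) for _ in range(K)]   # private data per entity

# t = 0: each entity fits on its private data plus the shared public data.
thetas = [fit([(*D_priv[k], beta), (*D_star, 1 - beta)]) for k in range(K)]

for t in range(10):
    # Steps 1-2: every entity publishes synthetic data from its current model.
    D_t = [draw(th, n) for th in thetas]
    # Steps 3-4: each entity re-fits on its initial data plus the pooled
    # synthetic data, weighted by (1 - alpha)beta, (1 - alpha)(1 - beta), alpha/K.
    thetas = [fit([(*D_priv[k], (1 - alpha) * beta),
                   (*D_star, (1 - alpha) * (1 - beta))]
                  + [(X, y, alpha / K) for X, y in D_t])
              for k in range(K)]
```

With α < 1, a fixed fraction of every update comes from the initial real data, so the entities' estimates stay close to the ground truth while the shared synthetic pool couples them together.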
Thus, even though initial private training data are never shared, they could end up positively impacting other entities' models. This potential benefit of synthetic data sharing appears only in this interaction paradigm and has not been recognized in prior work.

5 Theory

We theoretically analyze the behavior of the interactive workflow. Similar to prior work [23, 24, 31, 41], we focus on linear regression models, where each data point consists of a feature-response pair (x, y) ∈ R^d × R. By the universality results of [23], the analysis of this setting also applies to generalized linear models satisfying appropriate asymptotic normality assumptions.

Notation. For a p × q matrix A, we use A^+ to denote the Moore-Penrose pseudoinverse and vec(A) to denote the pq × 1 vector obtained by stacking the columns. ⊗ denotes the Kronecker product. For α, β ∈ [0, 1] we set ᾱ = 1 − α and β̄ = 1 − β.

Training Workflow. We follow the training pipeline outlined in Section 4, in which K different models are trained on a mixture of private, public, and generated data. At initialization, each entity k ∈ [K] combines its private data D̃_k = (x̃_{ki}, ỹ_{ki})_{i=1}^{ñ_k} with public data D∗ = (x_{∗i}, y_{∗i})_{i=1}^{n_∗} to produce an estimate θ̂_{0,k} by minimizing the empirical loss

\[ \sum_{(x,y) \in \tilde D_k} \beta_0 \, L(x, y, \theta) \;+ \sum_{(x,y) \in D_*} \bar\beta_0 \, L(x, y, \theta), \]

where L(x, y, θ) := (y − x^⊤θ)² is the squared-error loss and 0 ≤ β₀ ≤ 1 controls the relative weight placed on the private data. Training then proceeds for generation stages t = 1, 2, 3, . . . as follows:

1. Each entity k uses its most recent parameter estimate θ̂_{t−1,k} to generate new data D_{tk} = (x_{tki}, y_{tki})_{i=1}^{n_{tk}} according to the Gaussian model y | x ∼ N(x^⊤θ̂_{t−1,k}, σ²). The entire collection of generated samples is combined into a single public dataset D_t = ∪_{k=1}^{K} D_{tk}.

2. Each entity k produces a new estimate θ̂_{t,k} by minimizing the empirical loss

\[ \sum_{(x,y) \in \tilde D_k} \bar\alpha_t \beta_t \, L(x, y, \theta) \;+ \sum_{(x,y) \in D_*} \bar\alpha_t \bar\beta_t \, L(x, y, \theta) \;+ \sum_{(x,y) \in D_t} \frac{\alpha_t}{K} \, L(x, y, \theta), \]

with weights 0 ≤ α_t, β_t ≤ 1.

Throughout our analysis we assume that all features are deterministic. We represent dataset D̃_k by the ñ_k × d matrix X̃_k = [x̃_{k1}, . . . , x̃_{k ñ_k}]^⊤ and the ñ_k × 1 vector ỹ_k = [ỹ_{k1}, . . . , ỹ_{k ñ_k}]^⊤, and use the same convention for the public data (X_∗, y_∗) and the generated data (X_{tk}, y_{tk}). Data across different entities are then combined into "lifted" representations, denoted using boldface, with the feature matrices arranged block-diagonally and the responses stacked:

\[ \tilde{\mathbf X} = \operatorname{diag}(\tilde X_1, \ldots, \tilde X_K), \quad \mathbf y_0 = \begin{bmatrix} \tilde y_1 \\ \vdots \\ \tilde y_K \end{bmatrix}, \quad \mathbf X_t = \operatorname{diag}(X_{t1}, \ldots, X_{tK}), \quad \mathbf y_t = \begin{bmatrix} y_{t1} \\ \vdots \\ y_{tK} \end{bmatrix}. \]

We note that information about which entity produced which sample is required for the analysis, but is not used during the training, where all data from the same generation are treated interchangeably.

Bias-Variance Decomposition. We derive exact formulas for the mean and variance of the estimators at each stage of the workflow. Given the lifted features and learning weights (α_t, β_t), define

\[ \tilde{\mathbf S} = \operatorname{diag}(\tilde S_1, \ldots, \tilde S_K) := \tilde{\mathbf X}^{\top} \tilde{\mathbf X}, \qquad S_* := X_*^{\top} X_*, \qquad \mathbf S_t = \operatorname{diag}(S_{t1}, \ldots, S_{tK}) := \mathbf X_t^{\top} \mathbf X_t, \]
\[ G_t := \bar\alpha_t \beta_t \tilde{\mathbf S} + \bar\alpha_t \bar\beta_t (I_K \otimes S_*) + \alpha_t (I_K \otimes \bar S_t), \qquad \bar S_t := \frac{1}{K} \sum_{k=1}^{K} S_{tk}, \]
\[ P_t := \bar\alpha_t G_t^{+} \begin{bmatrix} \beta_t \tilde{\mathbf S} & \bar\beta_t (1_K \otimes S_*) \end{bmatrix}, \qquad Q_t := \alpha_t G_t^{+} \Pi \, \mathbf S_t, \]

where Π := (1/K)(1_{K×K} ⊗ I_d) is an orthogonal projection matrix and α₀ ≡ 0.

Theorem 1. Conditional on the initial data D₀ := (D̃_1, . . . , D̃_K, D∗), the estimates θ̂_t = vec(θ̂_{t1}, . . .
, \hat\theta_{tK})$ are Gaussian with mean and variance
\[
E[\hat{\boldsymbol\theta}_t \mid D_0] = \mathbf M_t \begin{bmatrix} \tilde{\mathbf X}^+ \tilde{\mathbf y} \\ X_*^+ y_* \end{bmatrix}, \qquad
\mathrm{Cov}(\hat{\boldsymbol\theta}_t \mid D_0) = \mathbf C_t,
\]
where the matrices $\mathbf M_t$ and $\mathbf C_t$ are defined recursively with $\mathbf M_0 = \mathbf P_0$, $\mathbf C_0 = 0_{Kd \times Kd}$, and
\[
\mathbf M_t = \mathbf P_t + \mathbf Q_t \mathbf M_{t-1}, \qquad
\mathbf C_t = \mathbf Q_t \left( \sigma^2 \mathbf S_t^+ + \mathbf C_{t-1} \right) \mathbf Q_t^\top, \qquad t \ge 1.
\]
Theorem 1 shows that the conditional mean of each estimate $\hat\theta_{tk}$ is a linear combination of the individual ordinary least squares (OLS) estimates $\tilde X_1^+ \tilde y_1, \ldots, \tilde X_K^+ \tilde y_K$ and $X_*^+ y_*$ for the private and public data, respectively. For each generation $t$, similarity across entities can be assessed by comparing the rows of the $K \times (K+1)$ block partitioning of $\mathbf M_t$. At initialization, the off-diagonal blocks for the private data are zeroed out, but in later stages these blocks become nonzero, thereby allowing private data to be shared across entities. Homogenization (i.e., shrinkage towards a global consensus) occurs when the row blocks are identical, and thus each entity has the same mean.

For our next result we mimic the experimental setup in Section 6 and assume that the initial data are generated from a Gaussian model with a common ground-truth parameter, with the heterogeneity across datasets arising from differences in the features, i.e., the matrices $\tilde S_1, \ldots, \tilde S_K$.

Theorem 2. Suppose that the initial data are generated independently according to the model $y \mid x \sim N(x^\top \theta, \sigma^2)$, where $\theta \in \mathbb{R}^d$ is a fixed parameter.
If $\mathbf G_1, \ldots, \mathbf G_t$ are full rank, then
\[
E[\hat{\boldsymbol\theta}_t] = \left( I - \mathbf Q_t \cdots \mathbf Q_1 (I - \mathbf G_0 \mathbf G_0^+) \right)(1_K \otimes \theta), \qquad
\mathrm{Cov}(\hat{\boldsymbol\theta}_t) = \sigma^2 \mathbf M_t \begin{bmatrix} \tilde{\mathbf S}^+ & 0 \\ 0 & S_*^+ \end{bmatrix} \mathbf M_t^\top + \mathbf C_t.
\]
To help interpret this result, observe that if $\mathbf G_0$ is full rank, then each initial estimate is unbiased, and unbiasedness persists throughout every stage of training. Conversely, if $\mathbf G_0$ is rank deficient, then at least one (and possibly all) of the initial estimates is biased. Remarkably, Theorem 2 shows that it may still be possible for all entities to have vanishing bias, provided that $\mathbf Q_t \cdots \mathbf Q_s$ converges to zero. Specific conditions under which this occurs are considered in the next section.

Asymptotic Variance. To provide a finer analysis of the training dynamics we now suppose that the weights and features satisfy $\alpha_t = \alpha$, $\beta_t = \beta$, and $\mathbf S_t = \mathbf S$ for $t \ge 1$. Setting $\mathbf P = \mathbf P_1$ and $\mathbf Q = \mathbf Q_1$, the matrices $\mathbf M_t$ and $\mathbf C_t$ defined in Theorem 1 can be expressed explicitly as
\[
\mathbf M_t = \mathbf Q^t \mathbf P_0 + \sum_{s=0}^{t-1} \mathbf Q^s \mathbf P, \qquad
\mathbf C_t = \sigma^2 \sum_{s=1}^{t} \mathbf Q^s \mathbf S^+ (\mathbf Q^s)^\top. \tag{1}
\]
Classical results in matrix analysis [35] imply that if the spectral radius of $\mathbf Q$ is strictly less than one, then these matrices converge to well-defined limits $\mathbf M$ and $\mathbf C$ satisfying
\[
\mathbf M := (I - \mathbf Q)^{-1} \mathbf P, \qquad
\mathrm{vec}(\mathbf C) := \sigma^2 (I - \mathbf Q \otimes \mathbf Q)^{-1} \mathrm{vec}(\mathbf Q \mathbf S^+ \mathbf Q^\top). \tag{2}
\]
The following result provides a sufficient condition for convergence in terms of the triple $(\tilde{\mathbf S}, S_*, \mathbf S)$. In particular, if $\mathbf S$ is proportional to $\tilde{\mathbf S}$, then the condition is satisfied for all $0 \le \alpha < 1$ and $0 \le \beta \le 1$. Note that the boundary case $\alpha = 1$ corresponds to the recursive training setting in [56], where the variance increases linearly with the number of generations, and thus convergence does not occur.

Lemma 1. Suppose that $\mathbf S \propto \lambda \tilde{\mathbf S} + (1 - \lambda)(I_K \otimes S_*)$ for some $0 < \lambda \le 1$. Then the spectral radius of $\mathbf Q$ is strictly less than one for all $0 \le \alpha < 1$ and $0 < \beta \le \lambda$.

We summarize our findings with the following characterization of the asymptotic variance:

Theorem 3. Consider the setting of Theorem 2 and suppose that $\alpha_t = \alpha$, $\beta_t = \beta$, and $\mathbf S_t = \mathbf S$ for $t \ge 1$. If $\mathbf G = \mathbf G_1$ has full rank and $\mathbf Q$ has spectral radius strictly less than one, then
\[
E[\hat{\boldsymbol\theta}_t] \xrightarrow{t \to \infty} 1_K \otimes \theta, \qquad
\mathrm{Cov}(\hat{\boldsymbol\theta}_t) \xrightarrow{t \to \infty} \sigma^2 \mathbf M \begin{bmatrix} \tilde{\mathbf S}^+ & 0 \\ 0 & S_*^+ \end{bmatrix} \mathbf M^\top + \mathbf C,
\]
where $\mathbf M$ and $\mathbf C$ are given by (2).
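To make the recursions concrete, the following NumPy sketch (ours, not the authors' released code) instantiates $\mathbf P_t$ and $\mathbf Q_t$ on toy features, iterates the Theorem 1 recursion for $\mathbf M_t$ and $\mathbf C_t$, and checks the iterates against the closed-form limits in (2). The feature sizes, the choice $\beta_0 = \beta$ at initialization, and taking the generated-data Gram matrix to be the $\lambda$-mixture from Lemma 1 (so that the spectral radius condition holds) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, sigma2 = 3, 4, 1.0
alpha, beta, lam = 0.4, 0.5, 0.5  # alpha < 1 and beta <= lam, per Lemma 1

def blk_diag(mats):
    """Block-diagonal matrix from a list of square blocks."""
    n = sum(m.shape[0] for m in mats)
    out = np.zeros((n, n))
    r = 0
    for m in mats:
        k = m.shape[0]
        out[r:r + k, r:r + k] = m
        r += k
    return out

# Toy deterministic features: private X~_k and public X_*.
X_priv = [rng.standard_normal((20, d)) for _ in range(K)]
X_star = rng.standard_normal((30, d))
S_star = X_star.T @ X_star
S_tilde = blk_diag([X.T @ X for X in X_priv])  # S~, size Kd x Kd

# Generated-data Gram matrices chosen as the lambda-mixture from Lemma 1.
blocks = [lam * (X.T @ X) + (1 - lam) * S_star for X in X_priv]
S_t = blk_diag(blocks)          # bold S_t
S_bar = sum(blocks) / K         # S-bar_t

I_K, I_d = np.eye(K), np.eye(d)
ones_K = np.ones((K, 1))
ab, bb = 1 - alpha, 1 - beta

# Initialization (alpha_0 = 0, beta_0 = beta): G_0 and P_0.
G0 = beta * S_tilde + bb * np.kron(I_K, S_star)
P0 = np.linalg.pinv(G0) @ np.hstack([beta * S_tilde, bb * np.kron(ones_K, S_star)])

# Stationary G, P, Q for t >= 1.
G = ab * beta * S_tilde + ab * bb * np.kron(I_K, S_star) + alpha * np.kron(I_K, S_bar)
Gp = np.linalg.pinv(G)
P = ab * Gp @ np.hstack([beta * S_tilde, bb * np.kron(ones_K, S_star)])
Pi = np.kron(np.ones((K, K)) / K, I_d)  # orthogonal projection
Q = alpha * Gp @ Pi @ S_t

# Iterate M_t = P + Q M_{t-1}, C_t = Q (sigma^2 S_t^+ + C_{t-1}) Q^T.
St_pinv = np.linalg.pinv(S_t)
M, C = P0, np.zeros((K * d, K * d))
for _ in range(400):
    M = P + Q @ M
    C = Q @ (sigma2 * St_pinv + C) @ Q.T

# Closed-form limits from (2); vec(.) stacks columns, i.e. Fortran order.
rho = max(abs(np.linalg.eigvals(Q)))
M_lim = np.linalg.solve(np.eye(K * d) - Q, P)
vecC = np.linalg.solve(np.eye((K * d) ** 2) - np.kron(Q, Q),
                       sigma2 * (Q @ St_pinv @ Q.T).flatten(order="F"))
C_lim = vecC.reshape(K * d, K * d, order="F")
```

Under these assumptions the spectral radius of $\mathbf Q$ falls below one, so the mean and covariance iterates converge geometrically to the limits $\mathbf M$ and $\mathbf C$.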
MSE and relative efficiency. The expressions for the mean and variance in Theorems 2 and 3 provide explicit formulas for the mean squared error (MSE) $E[\|\hat\theta_{tk} - \theta\|^2]$ of entity $k$ at each generation $t$, and for the mean squared prediction error (MSPE) $E[\|\tilde X_m (\hat\theta_{tk} - \theta)\|^2]$ with respect to entity $m$'s private feature matrix. Figure 2 compares the asymptotic MSE of the training workflow (obtained from Theorem 3) with the MSE of an idealized setting where each entity has access to the entire collection of real data (both private and public). Each curve represents the relative efficiency, i.e., the ratio of the optimal MSE to the entity-specific workflow MSE, with values close to one indicating near optimality. These results, which correspond to a single realization of i.i.d. standard Gaussian features in $d = 15$ dimensions, demonstrate that small to moderate values of $\alpha$ can provide global improvements over the initialization ($\alpha = 0$), with higher diversity between entities (larger $\beta$) resulting in more dramatic improvements.

Figure 2: Relative efficiency across $\alpha$ values for a $K = 4$ model system at varying $\beta$. Each curve is the
ratio between the MSE of the minimum-variance unbiased estimator based on all the real data (both private and public) and the asymptotic MSE obtained from Theorem 3. We use dimension $15$ and rank $5$.

6 Experiments with Large Language Models

To understand how our theoretical predictions bear out in practice, we perform experiments with $K$ different LLMs. In all cases, the high-level task is to answer natural language questions with high fluency and accuracy. The "private" datasets associated with each of the models correspond to specific subdomains such as science-based questions [18, 39] or grade school level math problems [20]. We follow the training workflow outlined in Section 4. Results for $K = 3$ are presented in Figure 4, and additional results are provided in the supplementary material.

Setup. To produce the initial set of models, we fine-tune $K = 3$ instances of Facebook's OPT-350m [70] on task-specific datasets to simulate the effect of training on $(D_*, \tilde D_k)$ at time $t = 0$. We use OPT-350m as the base model because its training data is publicly available, allowing us to know the contents of $D_*$: BookCorpus [67], CC-Stories [12], the English portion of CommonCrawl, and public Reddit data. For our experiment, we approximate $D_*$ using only BookCorpus due to practical constraints. Our task-specific datasets $\tilde D_k$ are SciQ [39], a dataset of physics, chemistry, and biology questions; OpenAI's GSM8K [20], a dataset of grade school level math problems; and ai2_arc [18], a dataset of grade school science questions, chosen to give each model a distinct private task. The goal for each dataset is to train models that answer questions in the dataset domain with high accuracy and fluency, and we fine-tune one model on each dataset. After each generation, we use $\hat\theta_{tk}$ to produce synthetic data $D_{t+1,k}$ for use in the next update. To construct $D_{t+1}$, we randomly sample prompts from $\tilde D_k$ for each $k$ and feed them to $\hat\theta_{tk}$, which completes the text.
We use $\alpha \in \{0, 0.5, 1\}$ and $\beta \in \{0, 0.5, 1\}$ for $T = 15$ generations of training. For the $t$-th training generation, model parameter $\hat\theta_{tk}$ is obtained by fine-tuning on a dataset of size $n = 12{,}500$ drawn i.i.d. from the datasets $\tilde D_k$, $D_*$, $D_t$ with weights $\bar\alpha\beta$, $\bar\alpha\bar\beta$, and $\alpha/K$, respectively.

Training and evaluation. We train each model for $100$ steps at each generation on a single NVIDIA H200 GPU using mixed precision, the AdamW optimizer with a learning rate of $8\mathrm{e}{-6}$ and warmup ratio of $0.025$, and gradient accumulation over 2 steps. Each fine-tuning generation is seeded based on the generation index to ensure reproducibility. After training each generation, we record the evaluation loss (token-wise average cross-entropy) on the test set of each private dataset $\tilde D_k$. To do this, we feed the model prompts from the test set and evaluate the semantic "distance" between the predicted and correct answers. This evaluation process helps us assess how data-mediated interactions affect the models' performance on the different tasks.

Results. We show the predicted and actual results of this experiment in Figures 3 and 4, respectively. Each figure plots the prediction error/loss of each model on its original task (first and third columns) and on the other models' tasks (second and fourth columns) to visualize how data-mediated interactions affect performance on each task.
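The mixture weights above sum to one across pools, since $\bar\alpha\beta + \bar\alpha\bar\beta + K \cdot (\alpha/K) = \bar\alpha + \alpha = 1$. As a sanity check, the following sketch (a hypothetical helper, not the paper's training code) draws pool assignments for one fine-tuning set with those weights:

```python
import numpy as np
from collections import Counter

def sample_mixture(n, alpha, beta, K, rng):
    """Assign each of n fine-tuning samples to a source pool.

    Pools: private D~_k, public D_*, and the K per-entity generated pools
    making up D_t, with weights (1-a)b, (1-a)(1-b), and a/K each.
    Hypothetical helper illustrating the sampling weights in the text.
    """
    w = np.array([(1 - alpha) * beta, (1 - alpha) * (1 - beta)] + [alpha / K] * K)
    pools = ["private", "public"] + [f"generated_{j}" for j in range(1, K + 1)]
    idx = rng.choice(len(pools), size=n, p=w / w.sum())
    return [pools[i] for i in idx]

rng = np.random.default_rng(0)
counts = Counter(sample_mixture(12_500, alpha=0.5, beta=0.5, K=3, rng=rng))
```

With $\alpha = \beta = 0.5$ and $K = 3$, roughly a quarter of the samples come from each of the private and public pools, and one-sixth from each entity's generated pool.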
The theoretical results (Figure 3) show that a variety of behaviors are possible. Models sometimes get slightly worse, or become better, at certain tasks. As $\alpha \to 1$, the predicted error rises. These results are mirrored in the interactions of three OPT language models (Figure 4), and extend to interactions between two OPT models, shown in Figure 6 in the appendix. Of particular note is the experimentally observed homogeneity in model behavior on each task for $\alpha = 0.5$ across all values of $\beta$. In the $\alpha = 0$ setting, each model performs worse at the other models' tasks: e.g., model 1 has loss around 3 for task 2, while model 2 has loss around 2. However, once $\alpha > 0$, the loss of model 1 drops to around 2 for task 2, matching that of the other model. This homogenization of task performance occurs for the other tasks as well, highlighting the benefits (improved task performance) and downsides (potential performance homogenization) of training on

[Figure 3: panels show the theoretical mean squared prediction error of Model $m$ on Task $n$ ($m, n \in \{1, 2, 3\}$) across model fitting generations, with curves for $\alpha \in \{0.1, 0.5, 0.9\}$ at each $\beta$.]
Figure 3: Predicted behavior over time for a $K = 3$ model system with varying $\alpha$, $\beta$. We use the equations for the MSE from Theorem 2 and run simulations with $K = 3$, dimension 50, rank 15.

[Figure 4: panels show the test loss of Model $m$ on Task $n$ ($m, n \in \{1, 2, 3\}$) across model fitting generations for $\alpha, \beta \in \{0, 0.5, 1\}$, alongside the base model.]
Figure 4: Actual behavior over time for interactions between OPT models ($K = 3$) with varying $\alpha$, $\beta$: evaluation loss across generations.

other models' outputs. Additionally, when $\beta > 0$, we see that learning information about task 1 helps task 3, because both consist of science-based questions, and vice versa.

7 Discussion

Limitations. Our work has several limitations. First, objections to our claims about the increasing presence of generated outputs in training datasets (e.g., [25]), together with counterarguments, are outlined in Appendix B. Second, our theoretical framework assumes that the new data in each model update is purely synthetic. In reality, if internet scrapes are used to create model update datasets, they will contain both synthetic and real data. Finally, we assume that model trainers do not scrape new real data for each model update but only reuse data from initial training. This assumption may limit the range of outcomes.

Broader Impacts. If data-mediated interactions homogenize generative models, causing them to coalesce on certain viewpoints, this could lead to pervasive bias in AI-generated content. Peterson [49] discusses this possibility, but focuses on the narrowing effect of AI rather than on homogenization across models. Recent work from Wenger and Kenett showed homogeneity across creative outputs from many LLMs, suggesting these homogenization effects may already be felt by models [65]. Much future study is needed to evaluate the extent to which data-mediated interactions fuel homogeneity (as opposed to other causes) and to develop mitigations.

Conclusions and Future Work.
We provide a first look at possible outcomes of genAI models trained on each other's data and find mixed effects. Training on other models' data exposes models to concepts possibly missed in their own training data, but can homogenize model behaviors. Future work could consider additional nuances of interactions between models, explore how these interactions evolve in other modalities like image generation, and investigate whether fixed points (e.g., like the universal $\pi^2/6$ pathway of [23]) exist under this paradigm.

References

[1] How Generative AI is Changing Creative Work. Harvard Business Review, 2022. https://hbr.org/2022/11/how-generative-ai-is-changing-creative-work .
[2] Apple intelligence | writing tools | iphone 16, 2024. https://www.youtube.com/watch?v=3m0MoYKwVTM .
[3] Command r and command r plus model card, 2024. https://docs.cohere.com/docs/responsible-use .
[4] Use notion ai to write better, more efficient notes and docs, 2024. https://www.notion.com/help/guides/notion-ai-for-docs .
[5] Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024.
[6] Marah Abdin, Jyoti Aneja, Sébastien Bubeck, Caio César Teodoro Mendes, Weizhu Chen, et al. Phi-2: The surprising power of small language models, 2023. https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/ .
[7] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[8] Meta AI. The Future of AI: Built with LLama, 2024. https://ai.meta.com/blog/future-of-ai-built-with-llama/ .
[9] Meta AI. Llama 4 model card, 2025. https://github.com/meta-llama/llama-models/blob/main/models/llama4/MODEL_CARD.md .
[10] Open AI. Understanding the source of what we see and hear online, 2024. https://openai.com/index/understanding-the-source-of-what-we-see-and-hear-online/ .
[11] Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, and Richard G Baraniuk. Self-consuming generative models go mad. Proc. of ICLR, 2024.
[12] Benjamin Anderson. Cc-stories, 2022. https://huggingface.co/datasets/andersonbcdefg/cc-stories-parquet/commits/main .
[13] Anthropic. Dataset card for hh-rlhf. https://huggingface.co/datasets/Anthropic/hh-rlhf .
[14] Anthropic. Model card and evaluations for claude models, 2023. https://www-cdn.anthropic.com/bd2a28d2535bfb0494cc8e2a3bf135d2e7523226/Model-Card-Claude-2.pdf .
[15] Marc Brinner, Tarek Al Mustafa, and Sina Zarrieß. Enhancing domain-specific encoder models with llm-generated data: How to leverage ontologies, and how to do without them. arXiv preprint arXiv:2503.22006, 2025.
[16] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al. Language models are few-shot learners. Proc. of NeurIPS, 2020.
[17] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 2023.
[18] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv:1803.05457v1, 2018.
[19] Nick Clegg. Labeling AI-Generated Images on Facebook, Instagram and Threads. Meta AI Blog, 2024. https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/ .
[20] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[21] Common Crawl. Common Crawl - Open Repository of Web Crawl Data, 2025. https://commoncrawl.org/ .
[22] Sumanth Dathathri, Abigail See, Sumedh Ghaisas, Po-Sen Huang, Rob McAdam, Johannes Welbl, Vandana Bachani, Alex Kaskasoli, Robert Stanforth, Tatiana Matejovicova, et al. Scalable watermarking for identifying large language model outputs. Nature, 634(8035), 2024.
[23] Apratim Dey and David Donoho. Universality of the $\pi^2/6$ pathway in avoiding model collapse. arXiv preprint arXiv:2410.22812, 2024.
[24] Elvis Dohmatob, Yunzhen Feng, and Julia Kempe. Model collapse demystified: The case of regression. Proc. of NeurIPS, 2025.
[25] George Drayson, Emine Yilmaz, and Vasileios Lampos. Machine-generated text detection prevents language model collapse. arXiv preprint arXiv:2502.15654, 2025.
[26] Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, et al. Glam: Efficient scaling of language models with mixture-of-experts. In Proc. of ICML, 2022.
[27] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[28] Matt Ellis. How to use ai to enhance your storytelling process, 2024. https://www.grammarly.com/blog/writing-with-ai/ai-story-writing/ .
[29] Yunzhen Feng, Elvis Dohmatob, Pu Yang, Francois Charton, and Julia Kempe. Beyond model collapse: Scaling up with synthesized data requires reinforcement. In ICML
Workshop on Theoretical Foundations of Foundation Models, 2024.
[30] US GAO. Science and tech spotlight - generative ai in health care, 2024. GAO-24-107634, https://www.gao.gov/products/gao-24-107634 .
[31] Matthias Gerstgrasser, Rylan Schaeffer, Apratim Dey, Rafael Rafailov, et al. Is model collapse inevitable? breaking the curse of recursion by accumulating real and synthetic data. Proc. of COLM, 2024.
[32] Kunal Handa, Alex Tamkin, Miles McCain, Saffron Huang, Esin Durmus, Sarah Heck, Jared Mueller, Jerry Hong, Stuart Ritchie, Tim Belonax, et al. Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations. arXiv preprint arXiv:2503.04761, 2025.
[33] Emily Harding. 2024 Priorities for the Intelligence Community. Center for Strategic and International Studies, 2024. https://www.csis.org/analysis/2024-priorities-intelligence-community-0 .
[34] Ryuichiro Hataya, Han Bao, and Hiromi Arai. Will large-scale generative models corrupt future datasets? In Proc. of ICCV, 2023.
[35] Nicholas J Higham. Accuracy and stability of numerical algorithms. SIAM, 2002.
[36] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. Proc. of NeurIPS, 2022.
[37] Adit Jain and Vikram Krishnamurthy. Interacting large language model agents: Bayesian social learning based interpretable models. IEEE Access, 2025.
[38] AQ Jiang, A Sablayrolles, A Mensch, C Bamford, DS Chaplot, D de las Casas, F Bressand, G Lengyel, G Lample, L Saulnier, et al. Mistral 7b (2023). arXiv preprint arXiv:2310.06825, 2023.
[39] Johannes Welbl, Nelson F. Liu, and Matt Gardner. Crowdsourcing multiple choice science questions. 2017.
[40] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models.
arXiv preprint arXiv:2001.08361, 2020.
[41] Joshua Kazdan, Rylan Schaeffer, Apratim Dey, Matthias Gerstgrasser, Rafael Rafailov, David L Donoho, and Sanmi Koyejo. Collapse or thrive? perils and promises of synthetic data in a self-generating world. arXiv preprint arXiv:2410.16713, 2024.
[42] Yo-whan Kim, Samarth Mishra, SouYoung Jin, Rameswar Panda, Hilde Kuehne, Leonid Karlinsky, Venkatesh Saligrama, Kate Saenko, Aude Oliva, and Rogerio Feris. How transferable are video representations based on synthetic data? Advances in Neural Information Processing Systems, 35:35710–35723, 2022.
[43] Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463, 2023.
[44] Shayne Longpre, Robert Mahari, Naana Obeng-Marnu, William Brannon, Tobin South, Jad Kabbara, and Sandy Pentland. Data authenticity, consent, and provenance for ai are all broken: What will it take to fix them? 2024.
[45] Matteo Marchi, Stefano Soatto, Pratik Chaudhari, and Paulo Tabuada. Heat death of generative models in closed-loop learning. arXiv preprint arXiv:2404.02325, 2024.
[46] Gonzalo Martínez, Lauren Watson, Pedro Reviriego, José Alberto Hernández, Marc Juarez, and Rik Sarkar. Towards understanding the interplay of generative artificial intelligence and the internet. In International Workshop on Epistemic Uncertainty in Artificial Intelligence, pages 59–73. Springer, 2023.
[47] Hana Matatov, Marianne Aubin Le Quéré, Ofra Amir, and Mor Naaman. Examining the prevalence and dynamics of
ai-generated media in art subreddits. arXiv preprint arXiv:2410.07302, 2024.
[48] Faustine Ngila. The copyright battles against openai have begun. Quartz, 2023. https://qz.com/openai-lawsuit-copyright-books-chatgpt-generative-ai-1850609334 .
[49] Andrew J Peterson. Ai and the problem of knowledge collapse. AI & SOCIETY, pages 1–21, 2025.
[50] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.
[51] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[52] Sandeep Reddy. Generative ai in healthcare: an implementation science informed translational path on application, integration and governance. Implementation Science, 19(1):27, 2024.
[53] Reuters. OpenAI says ChatGPT's weekly users have grown to 200 million, 2024. https://www.reuters.com/technology/artificial-intelligence/openai-says-chatgpts-weekly-users-have-grown-200-million-2024-08-29/ .
[54] Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, and Soheil Feizi. Can ai-generated text be reliably detected? arXiv preprint arXiv:2303.11156, 2023.
[55] Rylan Schaeffer, Joshua Kazdan, Alvan Caleb Arulandu, and Sanmi Koyejo. Position: Model collapse does not mean what you think. arXiv preprint arXiv:2503.03150, 2025.
[56] Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson, and Yarin Gal. Ai models collapse when trained on recursively generated data. Nature, 631(8022):755–759, 2024.
[57] Zhen Sun, Zongmin Zhang, Xinyue Shen, Ziyi Zhang, Yule Liu, Michael Backes, Yang Zhang, and Xinlei He. Are we in the ai-generated text world already? quantifying and monitoring aigt on social media. arXiv preprint arXiv:2412.18148, 2024.
[58] Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.
[59] Jamba Team, Barak Lenz, Alan Arazi, Amir Bergman, Avshalom Manevich, Barak Peleg, Ben Aviram, Chen Almagor, Clara Fridman, Dan Padnos, et al. Jamba-1.5: Hybrid transformer-mamba models at scale. arXiv preprint arXiv:2408.12570, 2024.
[60] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239, 2022.
[61] Xinyu Tian and Xiaotong Shen. Generative distribution prediction: A unified approach to multimodal learning. arXiv preprint arXiv:2502.07090, 2025.
[62] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[63] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[64] Ze Wang, Zekun Wu, Jeremy Zhang, Navya Jain, Xin Guan, and Adriano Koshiyama. Bias amplification: Language models as increasingly biased media. arXiv preprint arXiv:2410.15234, 2024.
[65] Emily Wenger and Yoed Kenett. We're different, we're the same: Creative homogeneity across llms. arXiv preprint arXiv:2501.19361, 2025. https://arxiv.org/abs/2501.19361 .
[66] xAI. Grok 3 Beta - The Age of Reasoning Agents, 2025. https://x.ai/news/grok-3 .
[67] Samuel Yang.
Bookcorpus dataset, 2022. https://huggingface.co/datasets/SamuelYang/bookcorpus .
[68] Hanlin Zhang, Benjamin L Edelman, Danilo Francati, Daniele Venturi, Giuseppe Ateniese, and Boaz Barak. Watermarks in the sand: Impossibility of strong watermarking for generative models. arXiv preprint arXiv:2311.04378, 2023.
[69] Jinghui Zhang, Dandan Qiao, Mochen Yang, and Qiang Wei. Regurgitative training: The value of real data in training large language models. arXiv preprint arXiv:2407.12835, 2024.
[70] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. Opt: Open pre-trained transformer language models, 2022.
[71] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43–76, 2020.
[72] Edward Zitron. There is No AI Revolution, 2025. https://www.wheresyoured.at/wheres-the-money/ .

Appendix

A Exact training data statements from LLM papers

Table 3 lists statements made about model training and fine-tuning data for large-scale generative AI models that do not explicitly list their training data sources.

Model | Pre-training data | Fine-tuning data
Claude 2 [14] | Claude models are trained on a proprietary mix of publicly available information from the Internet, datasets that we license from third party businesses, and data that our users affirmatively share or that crowd workers provide. | Publicly released on HuggingFace [13].
GPT4+ [7] | No information provided | No information provided
Grok 3 [66] | No information provided | No information provided
Jamba [59] | Our pre-training dataset is a mixture of publicly available web documents, code, books and scientific articles. | When performing supervised fine-tuning, we make heavy use of synthetic data.
Llama 3 [27] | We create our dataset for language model pre-training from a variety of data sources containing knowledge until the end of 2023. Much of the data we utilize is obtained from the web. | We produce the aligned Llama 3 models by applying several rounds of post-training, or aligning the model with human feedback.
Llama 4 [9] | A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. | No information provided.
Phi 3 [5] | Our training data consists of heavily filtered publicly available web data . . . from various open internet sources, as well as synthetic LLM-generated data. | [Supervised fine tuning] leverages highly curated high-quality data across diverse domains, e.g., math, coding, reasoning, conversation, model identity, and safety.

Table 3: Exact wording of training and fine-tuning data discussion from whitepapers in which data sources are not explicitly listed.

B Counterarguments

We argue that AI-generated content from a variety of sources will be increasingly prevalent online, resulting in future genAI models being regularly trained on each other's outputs. We believe this vision of future data-mediated interactions between models is reasonable, based on evidence from academic literature and corporate reports. However, others may disagree with our argument. Here, we present possible counterarguments to our view to catalyze future work and discussion.

Can't we use watermarks to filter AI-generated content
from future internet-scraped datasets? Several companies have publicly stated that they watermark AI-generated content [10, 19, 22], making this argument plausible. Furthermore, [25] show that using watermark detection techniques can help avoid model collapse under certain circumstances. However, reliance on watermarking has two major issues. First, watermarks are difficult to detect reliably and/or are easily removed from generated content [44, 54, 68]. Second, detection via watermark requires sharing of watermark detection information, which is essentially a game-theoretic problem that relies on other companies' willingness to cooperate. Both issues make watermarks an unreliable mechanism for ridding datasets of AI-generated content.

What if there's less AI-generated online content than we think? Some work suggests there may be less AI-generated content online than previously postulated [47]. However, other works consistently point to an uptick in AI-generated content online [57]. We believe the widespread adoption and use of generative AI models across industries [8, 32, 53], particularly for content creation [1], provides strong evidence that AI-generated content will become a regular part of online life. Further empirical work is needed to vet both claims.

What if one model provider dominates online content? Although numerous generative AI models are available online, some evidence suggests that one or two companies may dominate the AI landscape. A market research firm estimated that in January 2025, OpenAI's ChatGPT had 340 million monthly active users, Microsoft Copilot had 11 million, Google Gemini had 80 million, and Anthropic's Claude had 2 million [72]. If only one model/company dominates, our paradigm of training on other models' data is no longer relevant and collapses back to the single-model setting of prior work, e.g. [23, 41, 56].
Currently, though, this market research suggests that there are several models used by millions of users, making our assumptions somewhat reasonable. Future research could analyze market trends in model use and adoption to determine realistic assumptions.

[Figure 5: panels show the theoretical mean squared prediction error of Model $m$ on Task $n$ ($m, n \in \{1, 2\}$) across model fitting generations, with curves for $\alpha \in \{0.1, 0.5, 0.9\}$.]

Figure 5: Predicted behavior over time for a $K = 2$ model system with varying $\alpha$, $\beta$. We use the equations for the MSE from Theorem 2 and run simulations with $K = 2$, dimension 50, rank 15.

[Figure 6: panels show the test loss of Model $m$ on Task $n$ ($m, n \in \{1, 2\}$) across model fitting generations for $\alpha, \beta \in \{0, 0.5, 1\}$, alongside the base model.]
https://arxiv.org/abs/2505.21677v1
[Figure 6: Actual behavior over time for interactions between OPT models (K = 2) with varying α, β.]

C Code for experiments

Code to generate the theory and experimental figures shown in the main paper body can be found at: https://anonymous.4open.science/r/multi-model-798E.

D Proofs for results in Section 5

Before diving into the proofs, we recall the workflow setup and provide some preliminary results. The initial data are represented by matrix-vector pairs $(\tilde X_1, \tilde y_1), \ldots, (\tilde X_K, \tilde y_K)$ for the private data and $(X_*, y_*)$ for the public data. At each generation $t$, the data $(X_{tk}, y_{tk})$ produced by entity $k$ is given by
\[ y_{tk} = X_{tk} \hat\theta_{t-1,k} + w_{tk}, \]
where $\hat\theta_{t-1,k} \in \mathbb{R}^d$ is the most recent parameter estimate and $w_{tk} \sim \mathcal{N}(0, \sigma^2 I)$ is Gaussian noise that is independent across entities and generations. To represent the notation compactly, we use the block-diagonal embeddings and stacked vectors
\[ \tilde X = \mathrm{diag}(\tilde X_1, \ldots, \tilde X_K), \quad X_t = \mathrm{diag}(X_{t1}, \ldots, X_{tK}), \quad \tilde y = \begin{bmatrix} \tilde y_1 \\ \vdots \\ \tilde y_K \end{bmatrix}, \quad y_t = \begin{bmatrix} y_{t1} \\ \vdots \\ y_{tK} \end{bmatrix}, \quad w_t = \begin{bmatrix} w_{t1} \\ \vdots \\ w_{tK} \end{bmatrix}. \]

Linear dynamical system. According to the workflow, each parameter estimate is obtained as the minimizer in $\theta \in \mathbb{R}^d$ of the empirical loss
\[ \bar\alpha_t \beta_t \|\tilde y_k - \tilde X_k \theta\|^2 + \bar\alpha_t \bar\beta_t \|y_* - X_* \theta\|^2 + \frac{\alpha_t}{K} \sum_{j=1}^{K} \|y_{tj} - X_{tj} \theta\|^2, \]
where $\alpha_0 \equiv 0$, and so only the first two terms are present at initialization. The minimum-norm solution is given in closed form by
\[ \hat\theta_{tk} = \Big( \bar\alpha_t \beta_t \tilde S_k + \bar\alpha_t \bar\beta_t S_* + \frac{\alpha_t}{K} \sum_{j=1}^{K} S_{tj} \Big)^{+} \Big( \bar\alpha_t \beta_t \tilde X_k^\top \tilde y_k + \bar\alpha_t \bar\beta_t X_*^\top y_* + \frac{\alpha_t}{K} \sum_{j=1}^{K} X_{tj}^\top y_{tj} \Big), \]
where $\tilde S_k = \tilde X_k^\top \tilde X_k$, $S_* = X_*^\top X_*$, $S_{tk} = X_{tk}^\top X_{tk}$, and $(\cdot)^{+}$ denotes the Moore-Penrose pseudoinverse. Stacking the estimates into a vector $\hat\theta_t = \mathrm{vec}(\hat\theta_{t1}, \ldots, \hat\theta_{tK})$ and using the identity $A^\top = A^\top A A^{+}$, we can express all estimate updates simultaneously as
\[ \hat\theta_t = G_t^{+} \Big( \bar\alpha_t \beta_t \tilde S \tilde X^{+} \tilde y + \bar\alpha_t \bar\beta_t (\mathbf{1}_K \otimes S_*) X_*^{+} y_* + \frac{\alpha_t}{K} \Big( \mathbf{1}_K \otimes \sum_{j=1}^{K} S_{tj} X_{tj}^{+} y_{tj} \Big) \Big), \tag{3} \]
where $\tilde S = \mathrm{diag}(\tilde S_1, \ldots, \tilde S_K)$, $\mathbf{1}_K$ denotes the $K \times 1$ vector of ones, and $G_t$ is the block-diagonal matrix given by
\[ G_t := \bar\alpha_t \beta_t \tilde S + \bar\alpha_t \bar\beta_t (I_K \otimes S_*) + \alpha_t (I_K \otimes S_t), \qquad S_t = \frac{1}{K} \sum_{k=1}^{K} S_{tk}. \]
Defining the block-diagonal matrix $\mathbf{S}_t = \mathrm{diag}(S_{t1}, \ldots, S_{tK})$ and the orthogonal projection matrix $\Pi = \frac{1}{K} \mathbf{1}_K \mathbf{1}_K^\top \otimes I_d$, we can write
\[ \frac{1}{K} \Big( \mathbf{1}_K \otimes \sum_{j=1}^{K} S_{tj} X_{tj}^{+} y_{tj} \Big) = \Pi \mathbf{S}_t X_t^{+} y_t. \]
Introducing the matrices
\[ P_t := \bar\alpha_t G_t^{+} \begin{bmatrix} \beta_t \tilde S & \bar\beta_t (\mathbf{1}_K \otimes S_*) \end{bmatrix} \quad \text{and} \quad Q_t := \alpha_t G_t^{+} \Pi \mathbf{S}_t, \]
we can express (3) as
\[ \hat\theta_t = P_t \begin{bmatrix} \tilde X^{+} \tilde y \\ X_*^{+} y_* \end{bmatrix} + Q_t X_t^{+} y_t. \]
Using $y_t = X_t \hat\theta_{t-1} + w_t$ and $Q_t X_t^{+} X_t = Q_t$, we obtain
\[ \hat\theta_t = P_t \begin{bmatrix} \tilde X^{+} \tilde y \\ X_*^{+} y_* \end{bmatrix} + Q_t \hat\theta_{t-1} + Q_t X_t^{+} w_t. \tag{4} \]
This expression shows that the estimates evolve according to a discrete-time linear dynamical system (also known as a Kalman filter model) with state variable $\hat\theta_t$.

D.1 Proof of Theorem 1

We now derive the distribution of $\hat\theta_t$ conditional on the initial data $D_0$ given by $(\tilde X, \tilde y)$ and $(X_*, y_*)$. Specifically, we show that the estimates are Gaussian with mean and variance
\[ \mathbb{E}[\hat\theta_t \mid D_0] = M_t \begin{bmatrix} \tilde X^{+} \tilde y \\ X_*^{+} y_* \end{bmatrix}, \qquad \mathrm{Cov}(\hat\theta_t \mid D_0) = C_t, \]
where the matrices $M_t$ and $C_t$ are defined recursively with $M_0 = P_0$, $C_0 = 0_{Kd \times Kd}$, and
\[ M_t = P_t + Q_t M_{t-1}, \qquad C_t = Q_t (\sigma^2 \mathbf{S}_t^{+} + C_{t-1}) Q_t^\top, \qquad t \ge 1. \]
The proof is by mathematical induction. For the base case $t = 0$ we invoke (4) along with $Q_0 = 0$ to see that $\hat\theta_0$ is a deterministic function of the initial data with $M_0 = P_0$ and $C_0 = 0_{Kd \times Kd}$. For the inductive case, assume that the stated distribution holds up to generation $t-1$. From the definition of the workflow, (4) holds with $w_t \sim \mathcal{N}(0, \sigma^2 I)$ independent of everything else. Thus $\hat\theta_t$ is Gaussian with mean
\[ \mathbb{E}[\hat\theta_t \mid D_0] = P_t \begin{bmatrix} \tilde X^{+} \tilde y \\ X_*^{+} y_* \end{bmatrix} + Q_t \mathbb{E}[\hat\theta_{t-1} \mid D_0] = \underbrace{(P_t + Q_t M_{t-1})}_{M_t} \begin{bmatrix} \tilde X^{+} \tilde y \\ X_*^{+} y_* \end{bmatrix} \]
and covariance
\[ \mathrm{Cov}(\hat\theta_t \mid D_0) = \mathrm{Cov}(Q_t \hat\theta_{t-1} \mid D_0) + \mathrm{Cov}(Q_t X_t^{+} w_t) = \underbrace{Q_t C_{t-1} Q_t^\top + \sigma^2 Q_t \mathbf{S}_t^{+} Q_t^\top}_{C_t}. \]
This concludes the proof of Theorem 1.

D.2 Proof of Theorem 2

Under the assumptions of the theorem, we have that
\[ \begin{bmatrix} \tilde X^{+} \tilde y \\ X_*^{+} y_* \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} \tilde S \tilde S^{+} & 0 \\ 0 & S_* S_*^{+} \end{bmatrix} (\mathbf{1}_{K+1} \otimes \theta), \; \sigma^2 \begin{bmatrix} \tilde S^{+} & 0 \\ 0 & S_*^{+} \end{bmatrix} \right). \tag{5} \]
The goal for this proof is to verify that if $G_1, \ldots, G_t$ are full rank, then
\[ \mathbb{E}[\hat\theta_t] = \big( I - Q_t \cdots Q_1 (I - G_0 G_0^{+}) \big) (\mathbf{1}_K \otimes \theta), \qquad \mathrm{Cov}(\hat\theta_t) = \sigma^2 M_t \begin{bmatrix} \tilde S^{+} & 0 \\ 0 & S_*^{+} \end{bmatrix} M_t^\top + C_t. \]
We proceed by mathematical induction. Consider the case $t = 0$. By (4) along with $Q_0 = 0$, the mean is
\[ \mathbb{E}[\hat\theta_0] = P_0 \begin{bmatrix} \tilde S \tilde S^{+} & 0 \\ 0 & S_* S_*^{+} \end{bmatrix} (\mathbf{1}_{K+1} \otimes \theta) = G_0 G_0^{+} (\mathbf{1}_K \otimes \theta). \]
Likewise, recalling that $M_0 = P_0$, the variance is
\[ \mathrm{Cov}(\hat\theta_0) = M_0 \, \mathrm{Cov}\begin{bmatrix} \tilde X^{+} \tilde y \\ X_*^{+} y_* \end{bmatrix} M_0^\top = \sigma^2 M_0 \begin{bmatrix} \tilde S^{+} & 0 \\ 0 & S_*^{+} \end{bmatrix} M_0^\top. \]
Next, suppose that $G_1, \ldots, G_t$ are full rank and the stated distribution holds up to time $t-1$. By Theorem 1 we know that $\hat\theta_t$ is Gaussian, and so all that remains is to verify the given expressions for the mean and covariance. By the linearity of expectation and (4), the mean satisfies
\[ \mathbb{E}[\hat\theta_t] = P_t \begin{bmatrix} \mathbb{E}[\tilde X^{+} \tilde y] \\ \mathbb{E}[X_*^{+} y_*] \end{bmatrix} + Q_t \mathbb{E}[\hat\theta_{t-1}] = P_t (\mathbf{1}_{K+1} \otimes \theta) + Q_t (\mathbf{1}_K \otimes \theta) - Q_t Q_{t-1} \cdots Q_1 (I - G_0 G_0^{+})(\mathbf{1}_K \otimes \theta), \]
where the last step follows from the inductive assumption applied to $\mathbb{E}[\hat\theta_{t-1}]$. Moreover, from the definitions of $P_t$ and $Q_t$, we have
\[ P_t (\mathbf{1}_{K+1} \otimes \theta) + Q_t (\mathbf{1}_K \otimes \theta) = G_t^{+} \big( \bar\alpha_t \beta_t \tilde S + \bar\alpha_t \bar\beta_t (I_K \otimes S_*) + \alpha_t \Pi \mathbf{S}_t \big) (\mathbf{1}_K \otimes \theta) = G_t^{+} G_t (\mathbf{1}_K \otimes \theta) = \mathbf{1}_K \otimes \theta, \]
where the second equality follows from the identity $\Pi \mathbf{S}_t (\mathbf{1}_K \otimes I_d) = (I_K \otimes S_t)(\mathbf{1}_K \otimes I_d)$ and the last holds because $G_t$ is full rank. Combining the above displays gives the desired expression for the mean. The expression for the variance follows directly from (5) and Theorem 1.

D.3 Proof of Theorem 3

If the spectral radius of $Q$ is strictly less than one, then $Q^t \to 0$ as $t \to \infty$, and the Neumann series in (1) converge to the well-defined limits in (2). These limits can also be seen as the (necessarily unique) solutions to the fixed-point equations
\[ M = P + QM, \qquad C = Q(C + \sigma^2 \mathbf{S}^{+}) Q^\top, \]
where the expression for the covariance is known as the discrete-time Lyapunov equation. Combining these convergence results with Theorem 2 completes the proof.

D.4 Proof of Lemma 1

If $\alpha = 0$ or if $\mathbf{S} = 0$ then $Q = 0$, and so the stated result holds. Henceforth, we assume $0 < \alpha < 1$ and $\mathbf{S}$ is nonzero. Suppose that $\gamma \mathbf{S} = \lambda \tilde S + (1 - \lambda)(I_K \otimes S_*)$ for some $0 < \beta \le \lambda \le 1$ and $\gamma > 0$. Then,
\[ G = \bar\alpha\beta \tilde S + \bar\alpha\bar\beta (I_K \otimes S_*) + \alpha (I_K \otimes S) = \frac{\bar\alpha\beta}{\lambda} \big( \gamma \mathbf{S} - (1-\lambda)(I_K \otimes S_*) \big) + \bar\alpha\bar\beta (I_K \otimes S_*) + \alpha (I_K \otimes S) = \frac{\bar\alpha\beta\gamma}{\lambda} \mathbf{S} + \bar\alpha \frac{\lambda - \beta}{\lambda} (I_K \otimes S_*) + \alpha (I_K \otimes S). \]
Hence,
\[ Q = \big( \delta \mathbf{S} + I_K \otimes \Delta \big)^{+} \Pi \mathbf{S}, \qquad \delta = \frac{\bar\alpha\beta\gamma}{\alpha\lambda}, \qquad \Delta = \frac{\bar\alpha(\lambda - \beta)}{\alpha\lambda} S_* + S. \]
To proceed, observe that each diagonal block of $\mathbf{S} = \mathrm{diag}(S_1, \ldots, S_K)$ lies in the span of $S = \frac{1}{K}\sum_{k=1}^{K} S_k$, and thus $\mathbf{S}$ lies in the span of $I_K \otimes \Delta$. Accordingly, we can write
\[ \mathbf{S} \big( \delta \mathbf{S} + I_K \otimes \Delta \big)^{+} = (I_K \otimes \Delta^{1/2}) R (\delta R + I)^{-1} (I_K \otimes \Delta^{+/2}), \]
where $(\cdot)^{1/2}$ denotes the symmetric positive semidefinite square root of a positive semidefinite matrix, $(\cdot)^{+/2}$ the pseudoinverse of that square root, and $R := (I_K \otimes \Delta^{+/2}) \mathbf{S} (I_K \otimes \Delta^{+/2})$. To bound the spectral radius, denoted by $\rho(\cdot)$, we use the fact that the eigenvalues of $AB$ and $BA$ are the same for any square matrices $A$ and $B$, along with the fact that $I_K \otimes \Delta$ commutes with $\Pi$, to write
\[ \rho(Q) = \rho\big( (\delta \mathbf{S} + I_K \otimes \Delta)^{+} \Pi \mathbf{S} \big) = \rho\big( \mathbf{S} (\delta \mathbf{S} + I_K \otimes \Delta)^{+} \Pi \big) = \rho\big( (I_K \otimes \Delta^{1/2}) R (\delta R + I)^{-1} (I_K \otimes \Delta^{+/2}) \Pi \big) = \rho\big( \Pi R (\delta R + I)^{-1} \Pi \big) = \big\| \Pi R (\delta R + I)^{-1} \Pi \big\|, \]
where $\|\cdot\|$ denotes the operator norm and the last equality holds because $\Pi R (\delta R + I)^{-1} \Pi$ is symmetric positive semidefinite.
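The limits invoked in the proof of Theorem 3 are easy to check numerically. A minimal sketch, with arbitrary stand-in matrices for $P$, $Q$, and $\sigma^2 S^+$ chosen so that $\rho(Q) < 1$ (these are generic stand-ins, not the paper's actual operators):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Arbitrary stand-in dynamics, rescaled so the spectral radius rho(Q) < 1.
Q = rng.normal(size=(d, d))
Q *= 0.9 / np.max(np.abs(np.linalg.eigvals(Q)))
P = rng.normal(size=(d, d))
S_pinv = np.eye(d)   # stand-in for S^+
sigma2 = 0.5

# Iterate the recursions M_t = P + Q M_{t-1} and C_t = Q (sigma^2 S^+ + C_{t-1}) Q^T.
M, C = P.copy(), np.zeros((d, d))
for _ in range(500):
    M = P + Q @ M
    C = Q @ (sigma2 * S_pinv + C) @ Q.T

# With rho(Q) < 1, the iterates converge to the Neumann-series limit
# M = (I - Q)^{-1} P and to the solution of the discrete-time Lyapunov
# equation C = Q C Q^T + sigma^2 Q S^+ Q^T.
M_limit = np.linalg.solve(np.eye(d) - Q, P)
lyapunov_residual = np.max(np.abs(C - (Q @ C @ Q.T + sigma2 * Q @ S_pinv @ Q.T)))
```

After enough iterations both fixed-point equations hold to machine precision, mirroring the uniqueness claim in the proof.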
arXiv:2505.21680v1 [cs.LG] 27 May 2025

multivariateGPT: a decoder-only transformer for multivariate categorical and numeric data

Andrew J. Loza 1,2,*, Jun Yup Kim 4, Shangzheng Song 4, Yihang Liu 4, Joseph J. Y. Sung 4, R. Andrew Taylor 4, Dennis L. Shung 3

1 Department of Biomedical Informatics and Data Science, Yale School of Medicine
2 Department of Pediatrics, Yale School of Medicine
3 Department of Medicine, Yale School of Medicine
4 Yale School of Medicine
4 Lee Kong Chian School of Medicine, Nanyang Technological University

Abstract

Real-world processes often generate data that are a mix of categorical and numeric values recorded at irregular and informative intervals. Discrete token-based approaches are limited in numeric representation capacity, while methods like neural ordinary differential equations are not well suited for categorical data or informative sampling and require augmentation to handle certain classes of trajectories. Here, we present multivariateGPT, a single architecture for modeling sequences of mixed categorical (including tokenized text) and numeric data. This is accomplished with an autoregressive sequence decomposition, embedding scheme, and loss function that extend the next-token prediction task to likelihood estimation of the joint distribution of next token class and value. We demonstrate how this approach can efficiently learn to generalize patterns in simple physical systems and model complex time series including electrocardiograms and multivariate electronic health record data. This work extends the utility of transformer-based models to additional classes of data.

1 Introduction

Data collected in complex systems are often a mixture of categorical and numeric values that are informatively and sparsely sampled across a range of classes. Each of these features presents technical challenges, and current approaches are often limited in modeling one or more of these aspects Shukla and Marlin [2020].
One class of models are autoregressive neural network-based sequence models, such as recurrent neural networks, long short-term memory networks Hochreiter and Schmidhuber [1997], and transformers Vaswani et al. [2017]. However, these models do not natively handle categorical data (including text) and numeric data within a single model. Using language models as a starting point, four common approaches to create a multimodal model that processes numeric and categorical data simultaneously are (a) tokenization of strings representing numeric quantities Singh and Strouse [2024], Gruver et al. [2023]; (b) alternate embedding strategies for numeric tokens such as XVal or MMD Zausinger et al. [2024], Alberts et al. [2024]; (c) discretization of numeric values Zhu et al. [2024]; and (d) a shared embedding space with a model designed for the specific type of numeric data used, such as with image data Sun et al. [2023].

* Code available at: https://anonymous.4open.science/r/multivariateGPT_anon-4ED4/README.md. Preprint. Under review.

The first three approaches operate within the existing language modeling framework. However, numeric data from different classes share a common representation and rely on the context of the preceding token to differentiate meaning. Approaches (a) and (c) both rely on categorical representations of numeric quantities, preventing shared information across related tokens: if tokens representing values of 21 and 23 are seen in training, there is no information shared with the token representing 22. Approach (d) introduces the additional complexity of training a separate model for the numeric domain or having domain-specific encoding and
https://arxiv.org/abs/2505.21680v1
decoding methods. Additional challenges occur when numeric data modalities can have variable size, often requiring resizing or alternative methods before encoding Han et al. [2022b], Tang et al. [2025].

A second class of models are based on neural ordinary differential equations, which use a continuous representation of the timeseries Chen et al. [2018]. These models handle irregular sampling but cannot represent crossing trajectories. Augmentation addresses this issue Rubanova et al. [2019], but cannot handle stochasticity without conversion to a stochastic differential equation approach, which increases complexity Kidger et al. [2021]. Trajectory flow matching has improved this through simulation-free training Zhang et al. [2024]. However, these models are limited in modeling categorical data and extracting information from measurement timing.

A concrete example of the critical importance of models with these capabilities is in healthcare. Data include categorical values such as diagnostic codes or text, as well as numeric data such as laboratory values, vital signs, and medication dosages. There are thousands of potential classes of data, but only a few are measured at any given time. Additionally, the timing of observations reflects a decision to record an observation, which itself contains information Getzen et al. [2023].

Here we present multivariateGPT, a decoder-only transformer that uses a single architecture to model mixed sequences of categorical and numeric data and has a likelihood-based loss function. This is a multimodal model; however, we refer to it as multivariate because all data are handled within a single model architecture without linking two models. We offer three main contributions:

1. We provide an autoregressive decomposition for the joint distribution of multivariate timeseries which captures information on the timing of measurements and which can be approximated by a decoder-only transformer.
2. We present tokenization, embedding, and loss methods that allow for likelihood-based next-token prediction for both categorical (including text) and numeric values in a single architecture. This method is compatible with a pure language modeling objective, such as when a "measurement" in the time series is a document. The loss function also allows for uncertainty estimation of numeric quantities.

3. We provide an implementation of this model and demonstrate its ability to (a) model second-order-dynamics physical systems, generalizing to predict trajectories outside of its training data, (b) model multivariate clinical data from electronic health records, and (c) model electrocardiogram timeseries, which have stochasticity and substantial nonlinearities.

2 Methods

2.1 Multivariate Generative Modeling

We consider the task of modeling the joint distribution of data collected in a dynamic system. These variables may be categorical (including tokenized text) or numeric and are recorded at specific times. They may represent measurements of the system or actions taken on the system. The joint distribution can be decomposed via the chain rule of probability into an autoregressive sequence prediction task which uses the natural sequential ordering of the data elements:
\[ P(\{X, \tau\}_1, \ldots, \{X, \tau\}_n) = \prod_{i=1}^{n} P(\{X, \tau\}_i \mid \{X, \tau\}_1, \ldots, \{X, \tau\}_{i-1}) \tag{1} \]
where $X$ is a set of observations recorded with the same timestamp and $\tau$ is the elapsed time since the prior data record. Each $\{X, \tau\}$ tuple represents a joint distribution over the specific events belonging to $X$ and the
value of $\tau$. Each term on the right-hand side of Eq. 1 can be further decomposed:
\[ P(\{X, \tau\}_i \mid \mathcal{S}_{i-1}) = P(\tau_i \mid \mathcal{S}_{i-1}) \prod_{j=1}^{m} P(x_j \mid x_1, \ldots, x_{j-1}, \tau_i, \mathcal{S}_{i-1}) \tag{2} \]
where $\mathcal{S}_{i-1}$ is shorthand for the prior sequence of $\{X, \tau\}$ tuples, $m$ is the number of data elements in the set $X$, and the $x_j$ are individual data elements within $X$. In this multivariate framework, each $x_j$ variable represents a tuple of class and value such that $x_j = (c_j, v_j)$ for class $c_j \in C$ and value $v_j \in V_{c_j}$, where $C$ are all possible classes of recorded data and $V_{c_j}$ are categorical or numeric values specific to class $c_j$. Concrete examples from healthcare data are: numeric (creatinine, 1.21), categorical (diagnosis, Z00.1), text: [(text, the), (text, patient), (text, is)]. A specific ordering can be imposed over the $x_j$ within $X$ which respects the required sequence order (such as for text data) and otherwise uses a lexicographic sort over classes. This is similar to the pixel channel ordering in autoregressive image models. By framing the wait time $\tau$ as its own class and value, we can rewrite Eq. 2 as a completely flattened autoregressive sequence of conditional probabilities:
\[ P(X) = \prod_{i=1}^{k} P(x_i \mid x_1, \ldots, x_{i-1}), \quad \text{or expanded:} \tag{3} \]
\[ P(X) = \prod_{i=1}^{k} P(\{c_i, v_i\} \mid \{c_1, v_1\}, \ldots, \{c_{i-1}, v_{i-1}\}) \tag{4} \]
where $X$ is now the full sequence of all $x_i = \{c_i, v_i\}$ tuples, including the time class. As a secondary note, $(c_i, v_i)$ tuples where $v_i$ is categorical are transformed into a set of new classes, where each new class is the combination of the original $c_i$ and each level of $V_{c_i}$. This expands the number of classes but enforces that each categorical class has a single value, which allows for a simpler model design. This decomposition can then be estimated by any autoregressive sequence model which is modified to predict the joint distribution $P(c_i, v_i)$ for any prior input. We next describe an embedding scheme and loss function to achieve this with a decoder-only transformer.
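The flattening in Eqs. (3)-(4) is straightforward to implement. A minimal sketch (the record format, class names, and values below are hypothetical illustrations, not the paper's preprocessing code):

```python
# Flatten timestamped records into (class, value) tuples, as in Eqs. (3)-(4):
# the wait time becomes its own class, categorical values are folded into the
# class name (with a placeholder value of 1, since each such class has a single
# value), and the remaining elements are sorted lexicographically by class.
def flatten(records):
    tokens, prev_t = [], None
    for rec in records:
        t = rec["time"]
        if prev_t is not None and t - prev_t > 0:
            tokens.append(("time", t - prev_t))
        prev_t = t
        for cls, val in sorted(rec["obs"].items()):
            if isinstance(val, str):            # categorical: fold level into class
                tokens.append((f"{cls}={val}", 1))
            else:                               # numeric: keep the value
                tokens.append((cls, val))
    return tokens

records = [
    {"time": 0.0, "obs": {"creatinine": 1.21, "diagnosis": "Z00.1"}},
    {"time": 2.5, "obs": {"heart_rate": 115.0}},
]
print(flatten(records))
# [('creatinine', 1.21), ('diagnosis=Z00.1', 1), ('time', 2.5), ('heart_rate', 115.0)]
```

The resulting sequence of class-value tuples is exactly what the embedding and loss described next consume.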
2.2 Embeddings

Each tuple $\{c_i, v_i\}$ is considered a token and mapped to an embedding vector $E_i \in \mathbb{R}^e$, where $e$ is the embedding dimension, as follows:
\[ E_i = \begin{cases} E_{c_i} + f_{c_i}(v_i) & \text{for numeric } v_i \in \mathbb{R}^n \\ E_{c_i} & \text{for categorical } v_i \end{cases} \tag{5} \]
where $E_{c_i}$ are class-specific learned embedding vectors and $f_{c_i}(v_i)$ is any function mapping $\mathbb{R}^n$ to $\mathbb{R}^e$. Note that because categorical classes are re-defined to contain a single value, only a single embedding vector is needed for each.

2.3 Loss function

The sequence of tokens is modeled in an autoregressive fashion using a decoder-only generative pretrained transformer architecture. To maintain a single model without the use of a contrastive loss, we construct an output and loss function that allows for minimizing the negative log likelihood of the joint distribution $P(c_i, v_i)$ of the next token:
\[ P(c_i, v_i) = P(c_i) P(v_i \mid c_i) \tag{6} \]
The token embedding in the final layer of the model is projected via two heads to estimate $P(c_i)$ and $P(v_i \mid c_i)$. The first head is equivalent to the standard language model head and projects $\mathbb{R}^{d_e} \to \mathbb{R}^{d_c}$, followed by a softmax function, to estimate the likelihood of the next token across each class. The second head is used to estimate $P(v_i \mid c_i)$ and could take multiple forms; here we use a projection from $\mathbb{R}^{d_e} \to \mathbb{R}^{d_c \times 2}$, where the output is used as $\mu$ and $\sigma$ of a normal distribution, with a softplus to ensure positive $\sigma$. The per-sample loss can then be defined as
a joint log-likelihood:
\[ l_{c_i} = -\sum_{j=1}^{C} c_j \log(\hat c_j), \qquad l_{v_i} = -\sum_{j=1}^{C} c_j \log P(v_j \mid \hat\mu_j, \hat\sigma_j) \tag{7} \]
where $l_{c_i}$ is the per-token class loss, $l_{v_i}$ is the per-token conditional value loss, $c_j$ is a binary indicator of the correct class index, $\hat c_j$ are predicted probabilities across classes, $v_j$ is a vector with one non-zero element equal to the correct value at the index of the correct class, and $\hat\mu_j, \hat\sigma_j$ are the predicted mean and variance for the value of the $j$th class. Categorical classes have only one value, such that $P(v \mid c) = 1$ and $l_{v_i} = 0$. Alternatively, a constant $w$ can be used as a weighting. The batch loss is defined as:
\[ \mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \left( l_{c_i} + l_{v_i} \right) \tag{8} \]

2.4 Algorithm

This method is summarized in Alg. 1. This assumes that time deltas between $x^{(i)}$ have been converted to class-value tuples $(c \leftarrow \text{time}, v \leftarrow t^{(i)} - t^{(i-1)})$ for $t^{(i)} - t^{(i-1)} > 0$ in data preprocessing.

Algorithm 1 Multivariate autoregressive model
Require: Data $D = \{x^{(i)}\}_{i=1}^{N}$, where $x^{(i)} = [(c_1, v_1), \ldots, (c_k, v_k)]$
Require: Model $f_\theta$ with parameters $\theta$
while training do
    $x^{(j)} \sim \mathcal{U}(D)$
    $\mathcal{L} \leftarrow 0$
    for $t = 2$ to $k$ do
        $(\hat p_{\text{class}}, \hat\mu_c, \hat\sigma_c) \leftarrow f_\theta(x^{(1:t-1)})$
        $\mathcal{L}_{\text{class}} \leftarrow -\log \hat p_{\text{class}}[c_t]$
        $\mathcal{L}_{\text{value}} \leftarrow -\log \mathcal{N}(v_t; \mu_c, \sigma_c)$
        $\mathcal{L} \leftarrow \mathcal{L} + \mathcal{L}_{\text{class}} + \mathcal{L}_{\text{value}}$
    end for
    $\theta \leftarrow \text{Update}(\theta, \nabla_\theta \mathcal{L})$
end while
return $f_\theta$

2.5 Model Architecture

We use a decoder-only transformer to approximate Eq. 1 by minimizing Eq. 8. The model architecture follows the nanoGPT implementation of a decoder-only transformer Karpathy [2022], with modifications to the embedding scheme and loss function to implement the methods described in Section 2. A schematic is shown in Figure 1. In this implementation, a single feed-forward layer is used for value mapping. The final linear layer consists of a Class Head and a Value Head, which are used to estimate the loss based on the joint probability of the next token's class and value.

3 Experimental Results

Experiments were run on a heterogeneous computing cluster of NVIDIA RTX4080, RTX5000, and A40 GPUs for approximately 5,000 GPU hours. Individual training and inference runs require approximately one GPU day.
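Returning to Section 2, the embedding rule in Eq. (5) and the per-token loss in Eqs. (6)-(7) can be sketched in a few lines of numpy. The dimensions and the linear value map $f_c(v) = v \cdot w_c$ below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, e = 4, 8
E_class = rng.normal(size=(n_classes, e))   # learned class embeddings E_c
W_val = rng.normal(size=(n_classes, e))     # per-class linear value map f_c(v) = v * w_c

def embed(c, v, categorical):
    # Eq. (5): E_i = E_c + f_c(v) for numeric v; E_i = E_c for categorical v.
    return E_class[c] if categorical else E_class[c] + v * W_val[c]

def softplus(x):
    return np.log1p(np.exp(x))

def token_loss(class_logits, mu, sigma_raw, c_true, v_true, categorical):
    # Eqs. (6)-(7): l = -log P(c) - log P(v | c), softplus keeps sigma > 0.
    log_p_class = class_logits - np.log(np.exp(class_logits).sum())
    l_c = -log_p_class[c_true]
    if categorical:
        return l_c                           # P(v | c) = 1, so l_v = 0
    sigma = softplus(sigma_raw[c_true])
    l_v = 0.5 * np.log(2 * np.pi * sigma**2) + (v_true - mu[c_true]) ** 2 / (2 * sigma**2)
    return l_c + l_v

logits = rng.normal(size=n_classes)
mu, sigma_raw = rng.normal(size=n_classes), rng.normal(size=n_classes)
loss = token_loss(logits, mu, sigma_raw, c_true=2, v_true=1.21, categorical=False)
```

Averaging these per-token losses over a batch gives Eq. (8); in the actual model the logits, means, and raw sigmas would come from the transformer's class and value heads.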
3.1 Fitting and Generalization in Simple Harmonic Oscillators

Trajectories of damped simple harmonic oscillators provide a toy system that requires the ability to model second-order dynamics and crossing trajectories. Neural ordinary differential equation models, canonical conditional flow matching, and bridge matching cannot fit these trajectories due to the presence of crossing trajectories Zhang et al. [2024]. We compared the multivariateGPT model to discrete-token transformer baselines on two tasks: trajectory reconstruction and generalization.

[Figure 1: Diagram of model architecture.]

Trajectory reconstruction: The first task was the reconstruction of trajectories from the training data using the seed of the first 5 points (Fig. 2, top row). The multivariate model reconstructs trajectories without accumulating error. The discrete transformer model with 10 bins lacks the numeric precision to resolve trajectories from the first 5 points, resulting in reproduction of a random trajectory from the training data. With 100 bins, numeric precision is high enough to differentiate trajectories from the first 5 points; however, reconstructions suffer from significant noise accumulation.

Trajectory generalization: In the second task, the models are given the first 5 points from
a trajectory that was not present in the training data. The multivariate model generalizes to this unseen trajectory. The discrete model with 10 bins does not generalize and reproduces the orange trajectory from the training data. The discrete model with 100 bins initially approximates the orange trajectory and then further degrades as noise accumulates. Increasing training time by a factor of 10 improves the fit of the discrete model with 100 bins, but generalization still fails (Appendix A.1). Ablation studies demonstrate that allowing the model to learn variance is important for precise next-token value estimates that allow for reconstruction with minimal error and generalization (Appendix A.1).

[Figure 2: Simple harmonic oscillator trajectories. Columns show results from each model type. First row: reconstruction of training data. Second row: generalization to unseen trajectories.]

3.2 Clinical Data Sets

We evaluated the performance of multivariateGPT on real-world clinical data sets that encompass a range of time-series characteristics.

1. eICU Sepsis Data: subset of patients in the eICU Collaborative Research Database v2.0 with a diagnosis of sepsis, chosen to match Zhang et al. [2024]. Includes 3,362 patients, with data on age, sex, heart rate, mean arterial pressure (MAP), and norepinephrine infusion rate during the first 24 hours of ICU admission. This was derived from the eICU collaborative database Pollard et al. [2018].

2. MIMIC-IV Electrocardiogram Data: subset of 89,339 electrocardiograms from the MIMIC-IV electrocardiogram database. Each record includes patient age, sex, and voltages for the 12 standard leads of an electrocardiogram. This was taken from the MIMIC-IV-ECG dataset Gow et al. [2023], derived from the MIMIC-IV database Johnson et al. [2023].
3. Physionet ICU Data: data on 65,155 patients admitted to the ICU from 3 hospital systems, with demographics and hourly records of vital signs and lab values representing 36 unique classes Moor et al. [2021].

3.2.1 eICU: Heart Rate and Blood Pressure Prediction in Sepsis

We evaluated the multivariateGPT model compared to discrete-token transformer baselines, an open-source LLM, and TFM-ODE, which recently outperformed a range of neural ODE and SDE methods on this data set. This data set has irregularly and potentially informatively sampled heart rate and MAP measurements. As in Zhang et al. [2024], we focused on a subset of patients admitted with sepsis as their primary diagnosis. Two tasks were tested due to differences in what each model class was able to predict. While the multivariateGPT and discrete-token transformer models predict both measurements and the time between measurements, TFM-ODE requires specifying the time at which measurements occur.

Value prediction: In the first task, data from the first 3, 6, 9, and 12 hours were given as seeds, and models were evaluated on the accuracy of predicting the remaining heart rate and MAP measurements. Measurement times were provided to allow comparison to TFM-ODE, and autoregressive infilling of heart rate and MAP measurements was used for the transformer models. The multivariateGPT consistently achieved the lowest MSE among all compared models, with error reductions of 40% to 60% compared to the next best performing model, TFM-ODE (Table 1). Most
models demonstrated improved performance with longer seed periods.

Table 1: Mean squared error (± standard error) for MAP and heart rate from various seed lengths for each model.

Model | 3h | 6h | 9h | 12h
Gemma 1.1-7b | 0.0999 ± 0.0050 | 0.1246 ± 0.0094 | 0.0461 ± 0.0018 | 0.0299 ± 0.0016
TFM-ODE | 0.0268 ± 0.0004 | 0.0236 ± 0.0004 | 0.0166 ± 0.0003 | 0.0146 ± 0.0003
Discrete (n=10) | 0.1183 ± 0.0028 | 0.1108 ± 0.0029 | 0.1053 ± 0.0031 | 0.1016 ± 0.0038
Discrete (n=50) | 0.0308 ± 0.0012 | 0.0295 ± 0.0011 | 0.0279 ± 0.0014 | 0.0276 ± 0.0017
Multivariate GPT | 0.0108 ± 0.0002 | 0.0104 ± 0.0003 | 0.0098 ± 0.0003 | 0.0087 ± 0.002

Value and timing prediction: In the second task, we evaluated the performance of the multivariateGPT model in predicting both the time and value of future observations. This ability is important in medicine because the timing of a measurement contains information about the state of a patient. This feature cannot be modeled by current neural ODE models or TFM-ODE. In this task, models were provided a seed of the first 3, 6, 9, and 12 hours of data for a patient trajectory. The multivariateGPT model outperforms the discrete transformer model across all time windows and in both the value and time prediction tasks (Table 2). Error reduction for the multivariateGPT model ranged from 82% to 97% in value prediction and 78% to 99% in time prediction.

Calibration: The multivariateGPT model and TFM-ODE model generate both an expected value and variance for each observation. The calibration of the estimated variance is shown in quantile-quantile (QQ) plots comparing the standardized residuals (z-scores) of the true values to the theoretical values from a normal distribution (Fig. 3). The multivariateGPT model and TFM-ODE are well calibrated for central theoretical quantiles, while the discrete models show marked departure from linearity. Compared to TFM-ODE, the multivariateGPT model shows improved calibration for the leftmost quantiles in heart rate estimation.
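The calibration quantities above (standardized residuals and interval coverage) reduce to a short computation. A sketch on simulated predictions (the data below are synthetic, not from eICU):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated predictions where true values really are drawn from the
# model's own predictive distribution N(mu, sigma^2).
mu = rng.normal(size=10_000)
sigma = np.exp(rng.normal(scale=0.3, size=10_000))
y_true = rng.normal(mu, sigma)

# Standardized residuals (z-scores) used for the QQ plots.
z = (y_true - mu) / sigma

# Empirical coverage of the predicted mu +/- 1.96*sigma intervals;
# a well-calibrated model should land close to 0.95.
covered = np.abs(z) <= 1.96
coverage = covered.mean()
```

On real model output, `mu` and `sigma` would be the value head's predictions; coverage far from 0.95 (as for the discrete baselines in Table 3) indicates miscalibrated uncertainty.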
Coverage fractions for predicted 95% confidence intervals, defined as the fraction of the true values falling within the predicted μ ± 1.96σ intervals, are shown in Table 3.

Table 2: Mean squared error (± standard deviation) for value and time predictions. TFM-ODE is not included because it does not predict the timing of measurements.

Model | 3h | 6h | 9h | 12h
Value prediction (MSE)
Discrete (n=10) | 0.3892 ± 0.0745 | 0.2598 ± 0.0390 | 0.2581 ± 0.0399 | 0.3924 ± 0.0155
Discrete (n=50) | 0.0763 ± 0.0293 | 0.0560 ± 0.0161 | 0.0530 ± 0.0142 | 0.0717 ± 0.0355
Multivariate GPT | 0.0114 ± 0.0013 | 0.0082 ± 0.0017 | 0.0078 ± 0.0016 | 0.0126 ± 0.0013
Time prediction (MSE)
Discrete (n=10) | 6.8673 ± 2.4835 | 4.1782 ± 0.2968 | 6.2214 ± 0.3843 | 7.9618 ± 0.3171
Discrete (n=50) | 0.1783 ± 0.0787 | 0.4365 ± 0.0926 | 0.8600 ± 0.3455 | 1.4893 ± 0.4038
Multivariate GPT | 0.0376 ± 0.0051 | 0.0230 ± 0.0008 | 0.0279 ± 0.0020 | 0.0341 ± 0.0035

[Figure 3: QQ plots. Theoretical quantiles are plotted against the sample quantiles for each model (columns). The top row shows plots for MAP predictions and the bottom row shows plots for heart rate predictions. Dashed bands are located at z-scores of −1.96 and +1.96.]

3.2.2 MIMIC-IV Electrocardiogram Data

Electrocardiogram signals are quasi-periodic, noisy, and have significant nonlinear behavior, which creates
unique challenges that are separate from the eICU data above. Data were decomposed into a sequence of tokens: [(Age, 63), (Sex, Male), (Lead I, 0.12), (Lead II, 0.15), ..., (Lead I, −0.03), (Lead II, 0.01), ...]. We evaluated the performance of multivariateGPT compared to a discrete transformer approach on this data set. Both classes of models were trained on the data set and tested on the task of lead reconstruction. The model is provided with the ground truth for a subset of leads (here I, II, V2, V6) and tasked with autoregressively infilling the masked leads (the 8 remaining leads). The multivariateGPT model performs substantially better across all lead reconstruction tasks compared with the discrete models (Table 4). We find mixed performance when comparing the discrete model with 50 bins to the model with 100 bins, suggesting tradeoffs between precision and the ability to learn representations for a larger vocabulary. In general, performance on limb lead reconstruction is better than on precordial leads, which is often seen due to the depolarization vectors that each lead represents.

3.2.3 Physionet ICU Data

The Physionet ICU data set contains measurements collected in a sparse manner across 36 categorical and numeric classes. This poses a unique challenge, as a generative model must predict which measurements and what values will occur at any given measurement time. We evaluated the performance of multivariateGPT against discrete transformer models for next-token class accuracy and next-token value MSE (conditioned on correct class predictions). The multivariateGPT model performed significantly better than the discrete model with percentile-based bins (Table 5).

Table 3: Empirical coverage fractions for predicted 95% confidence intervals. Values closer to 0.95 correspond to better calibration. Values are coverage fraction ± standard deviation.
Method | MAP | Heart Rate
multivariateGPT | 0.951 ± 0.009 | 0.957 ± 0.005
TFM-ODE | 0.932 ± 0.066 | 0.966 ± 0.022
Discrete (n=10) | 1.000 ± 0.000* | 1.000 ± 0.000*
Discrete (n=50) | 0.9983 ± 0.0004 | 0.9872 ± 0.0130
*Due to the bin sizes, all values are positioned within the central 95% of the distribution.

Table 4: Error in reconstructing full-duration trajectories for each lead. Limb lead (III, aVF, aVL, aVR) values are ×10⁻⁴; precordial lead (V1, V3, V4, V5) values are ×10⁻².

Model | III | aVF | aVL | aVR
Multivariate | 1.94 ± 0.59 | 1.25 ± 0.41 | 1.86 ± 0.38 | 0.39 ± 0.09
Discrete (n=50) | 106.1 ± 32.5 | 725.7 ± 168.0 | 736.7 ± 229.4 | 227.5 ± 55.9
Discrete (n=100) | 93.9 ± 23.5 | 115.6 ± 67.0 | 322.7 ± 92.9 | 196.0 ± 51.5

Model | V1 | V3 | V4 | V5
Multivariate | 2.06 ± 0.41 | 4.21 ± 0.19 | 5.09 ± 1.44 | 31.0 ± 1.55
Discrete (n=50) | 22.2 ± 9.77 | 38.4 ± 28.85 | 26.0 ± 15.75 | 32.0 ± 21.4
Discrete (n=100) | 34.6 ± 22.3 | 42.3 ± 18.1 | 22.1 ± 13.14 | 5.53 ± 8.27

This difference appears to be driven by the discrete model erroneously predicting tokens representing extreme values. This highlights the robustness to noise and outliers that numeric embedding provides.

Table 5: Performance comparison of models across classification and regression metrics for Physionet ICU data. Values are ± SEM.

Model | Class Accuracy | Value MSE
MultivariateGPT | 0.862 ± 0.002 | 0.385 ± 0.018
Discrete (n=10) | 0.863 ± 0.002 | 27.6 ± 3.6
Discrete (n=50) | 0.863 ± 0.002 | 10.1 ± 0.4

4 Related Work

Multiple neural network-based approaches have been developed to model timeseries data to capture the complexities inherent in real-world data, including mixed categorical and numeric variables, irregular or informative
sampling, and stochasticity. Here we review related work for token-based and continuous function-based approaches.

Token-based methods: A naive discrete token-based approach is not optimal for numeric values Spathis and Kawsar [2024]. Careful tokenization of strings containing digits can improve performance on tasks Born and Manica [2023], and single-digit tokenization has been found to be more efficient than tokenization into longer groups Zhou et al. [2024], Schmidt et al. [2024]. The LLAMA series of models adopted single-digit-based tokenization Touvron et al. [2023], and an adaptation of these models for time series modeling enforced digit-based tokenization through the addition of delimiting characters Gruver et al. [2023]. Right-to-left digit tokenization was shown to improve arithmetic performance over naive byte pair encoding strategies across a range of arithmetic tasks Singh and Strouse [2024]. Additional efforts to transform numeric quantities into discrete tokens include discretization into bins Gorishniy et al. [2022], Stein et al. [2024], Rajkomar et al. [2018], Renc et al. [2024], Zhu et al. [2024], Huang et al. and a similarity-driven quantization approach for time series data Zhicheng et al. [2024]. This is the main method employed in current models of medical data Rajkomar et al. [2018], Theodorou et al. [2023], Pang et al. [2024], Renc et al. [2024]. Within image and audio processing, multiple methods have been developed for transforming signals into discrete tokens, including LFQ, FSQ, and BSQ, which utilize vector quantization via discrete codebooks Yu et al. [2023], Jia et al. [2025], Mentzer et al. [2023], Zhao et al. [2024]. Embedding approaches aligned with the continuous nature of numeric values have also been developed Wang et al. [2025], Golkar et al. [2024]. Once tokens are created, standard cross-entropy loss is not well suited for numeric decoding, and alternate loss functions including regression-like loss Zausinger et al.
[2024], Han et al. [2022a], have been developed. A remaining limitation of this existing work is the lack of a numeric representation for diverse classes of data with a single likelihood-based loss, which we provide here. Continuous models: Continuous-time trajectory models are an alternative approach to modeling irregularly sampled data. Models such as the latent-trajectory-based Neural ODEs Rubanova et al. [2019], Chen et al. [2018] and Neural SDEs Kidger et al. [2021], Oh et al. [2024] have outperformed LSTM- and RNN-based methods Che et al. [2018], Cao et al. [2018]. In turn, trajectory flow matching (TFM) outperformed these models in clinical time series modeling with improved computational efficiency through its simulation-free training and the additional benefit of estimating uncertainty Zhang et al. [2024]. Additional methods to capture uncertainty include Brouwer et al. [2022]. These methods handle irregularly spaced inputs, but they do not currently derive information from the irregular spacing and do not handle categorical data well.

5 Conclusion

We present multivariateGPT, a unified architecture for transformer-based modeling of discrete and numeric data. We show how multivariate time series data can be decomposed into an autoregressive prediction task and provide an embedding method and likelihood-based loss function for training a transformer model. Experiments on simple physical systems highlight that this approach enables generalization that is not apparent in
discrete methods and is more sample efficient, reaching high precision in trajectory reconstruction in fewer iterations than discrete models. Furthermore, vocabularies are smaller because no discretization is necessary. Experiments on real-world clinical data show improved performance over state-of-the-art models of clinical time series data, with higher accuracy and better calibration. Additionally, multivariateGPT enables new types of predictions, namely predicting both the timing and value of observations. This is critical in irregularly and informatively sampled data sets where the passage of time itself is informative. This approach has broad potential in clinical settings, where it could be used to train a foundation model on entire Electronic Health Record (EHR) databases. This could have broad impact, with improved prediction across a range of clinical tasks to improve decision making and resource utilization, either through zero-shot prediction from Monte Carlo sampling of future trajectories or through fine-tuning for specific tasks. The sequence representations could be used for patient search, matching, or phenotyping. This work also has impact outside of medicine, as it provides a framework for autoregressively modeling any database composed of categorical and numeric data that can be converted into class-value tuples.

Limitations: Limitations of this method include the current choice of Gaussian parameterization for value estimation. Although predictions are well calibrated using this parameterization on current data sets, an important direction for future work will be integrating different distributions that may better capture time, count, or ordinal data. This model also inherits limitations of transformer-based autoregressive modeling, including limited interpretability of predictions and memory scaling with longer context windows. As this approach leverages the same architecture used in language models, we expect these limitations to improve with advancements in the field.
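The Gaussian parameterization for value estimation mentioned in the limitations can be illustrated with a minimal sketch. The function below is a hypothetical stand-in, not the paper's released implementation: it computes the negative log-likelihood of an observed numeric value under a predicted mean and standard deviation, the kind of likelihood-based loss that such a numeric head would minimize during training.

```python
import math

def gaussian_nll(mu: float, sigma: float, value: float) -> float:
    """Negative log-likelihood of `value` under N(mu, sigma^2).

    Illustrative only: a Gaussian-parameterized numeric head predicts
    (mu, sigma) per value token and minimizes this quantity.
    """
    var = sigma ** 2
    return 0.5 * math.log(2.0 * math.pi * var) + (value - mu) ** 2 / (2.0 * var)

# The loss is smallest when the predicted mean matches the observation.
loss_exact = gaussian_nll(mu=0.0, sigma=1.0, value=0.0)
loss_off = gaussian_nll(mu=0.0, sigma=1.0, value=2.0)
assert loss_off > loss_exact
```

Distributions other than the Gaussian (e.g., for count or ordinal data, as the limitations note) would swap in a different log-density while keeping the same likelihood-based training objective.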
Future work: Autoregressive image and language models show scaling of performance with model and dataset size, which we hope to assess for this model. Not all numeric values are well represented by a continuous value estimated from a normal distribution, no matter how well the mean and variance are conditioned, such as non-negative, count, or ordinal data. We are working to incorporate likelihood-based loss functions that specifically address these data types. We envision a mapping between common database data types (character, floating point, integer) and the embedding/loss functions proposed here. In this work we focus on low-dimensional numeric data (scalars across many categories), but we envision future work incorporating additional data modalities such as speech and image data through established methods.

6 Broader Impact

Our work extends efforts to create token-based autoregressive models to mixed categorical and numeric data streams using a single model architecture and likelihood-based loss. This method is demonstrated to enable numeric generalization that was not seen in discrete token approaches and has state-of-the-art predictive performance. Time series prediction has numerous benefits in clinical applications, from identifying high-risk patients before critical events to optimizing resource allocation. In addition, sequence representations created by these models could be used for improved patient search or matching, enabling better clinical trial studies or recruitment. These benefits come with potential risks, including inaccurate predictions and propagation of biases in training data. If
used improperly, this could lead to over- or under-treatment due to false positive or negative predictions. While this work focuses on clinical applications, our method is flexible and can be used to model any data source that can be flattened into sequences of class-value tuples (for example, by wide-to-long tabular data conversion). Future work will include application of this method to time series data from other domains with similar risks of predictive errors in high-stakes use cases such as credit card fraud detection, sales forecasting, and job scheduling in information systems.

Acknowledgments and Disclosure of Funding

AJL receives funding through CTSA Grant UL1 TR001863 from the National Center for Advancing Translational Science (NCATS), a component of the National Institutes of Health (NIH). This publication's contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

References

M. Alberts, G. Gabrieli, and I. E. Morales. Interleaving text and number embeddings to solve mathematics problems. arXiv preprint arXiv:2410.19353, 2024.
J. Born and M. Manica. Regression transformer enables concurrent sequence regression and generation for molecular language modelling. Nature Machine Intelligence, 5(4):432–444, 2023.
E. D. Brouwer, J. Gonzalez, and S. Hyland. Predicting the impact of treatments over time with uncertainty aware neural differential equations. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 151 of Proceedings of Machine Learning Research, pages 4705–4722. PMLR, 2022. URL https://proceedings.mlr.press/v151/de-brouwer22a.html.
W. Cao, D. Wang, J. Li, H. Zhou, L. Li, and Y. Li. BRITS: Bidirectional recurrent imputation for time series. Advances in Neural Information Processing Systems, 31, 2018.
Z. Che, S. Purushotham, K. Cho, D. Sontag, and Y. Liu.
Recurrent neural networks for multivariate time series with missing values. Scientific Reports, 8(1), 2018. doi: 10.1038/s41598-018-24271-9.
R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 31, 2018.
E. Getzen, A. L. Tan, G. Brat, G. S. Omenn, Z. Strasser, Q. Long, J. H. Holmes, D. Mowery, Consortium for Clinical Characterization of COVID-19 by EHR (4CE), et al. Leveraging informative missing data to learn about acute respiratory distress syndrome and mortality in long-term hospitalized COVID-19 patients throughout the years of the pandemic. medRxiv, 2023.
S. Golkar, M. Pettee, M. Eickenberg, A. Bietti, M. Cranmer, G. Krawezik, F. Lanusse, M. McCabe, R. Ohana, L. Parker, B. Régaldo-Saint Blancard, T. Tesileanu, K. Cho, and S. Ho. xVal: A continuous numerical tokenization for scientific language models. arXiv preprint arXiv:2310.02989, 2024. URL https://arxiv.org/abs/2310.02989.
Y. Gorishniy, I. Rubachev, and A. Babenko. On embeddings for numerical features in tabular deep learning. Advances in Neural Information Processing Systems, 35:24991–25004, 2022.
B. Gow, T. Pollard, L. A. Nathanson, A. Johnson, B. Moody, C. Fernandes, N. Greenbaum, J. W. Waks, P. Eslami, T. Carbonati, A. Chaudhari, E. Herbst, D. Moukheiber, S. Berkowitz, R. Mark, and S. Horng. MIMIC-IV-ECG: Diagnostic electrocardiogram matched subset (version 1.0). https://doi.org/10.13026/4nqg-sb35, 2023. PhysioNet.
N. Gruver, M. Finzi,
S. Qiu, and A. G. Wilson. Large language models are zero-shot time series forecasters. In Advances in Neural Information Processing Systems, volume 36, pages 19622–19635, 2023.
H. Han, J. Xu, M. Zhou, Y. Shao, S. Han, and D. Zhang. Luna: Language understanding with number augmentations on transformers via number plugins and pre-training. arXiv preprint arXiv:2212.02691, 2022a.
K. Han, Y. Wang, H. Chen, X. Chen, J. Guo, Z. Liu, Y. Tang, A. Xiao, C. Xu, Y. Xu, et al. A survey on vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):87–110, 2022b.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Z. Huang, K. Srinivas, H. Samulowitz, N. S. D'Souza, C. C. Aggarwal, P.-Y. Chen, and J. Gao. Language models are good tabular learners. Transactions on Machine Learning Research.
J. Jia, J. Gao, B. Xue, J. Wang, Q. Cai, Q. Chen, X. Zhao, P. Jiang, and K. Gai. From principles to applications: A comprehensive survey of discrete tokenizers in generation, comprehension, recommendation, and information retrieval. arXiv preprint arXiv:2502.12448, 2025.
A. E. W. Johnson, L. Bulgarelli, L. Shen, et al. MIMIC-IV, a freely accessible electronic health record dataset. Scientific Data, 10(1), 2023. doi: 10.1038/s41597-022-01899-x. URL https://doi.org/10.1038/s41597-022-01899-x.
A. Karpathy. NanoGPT. https://github.com/karpathy/nanoGPT, 2022.
P. Kidger, J. Foster, X. Li, and T. J. Lyons. Neural SDEs as infinite-dimensional GANs. In International Conference on Machine Learning, pages 5453–5463. PMLR, 2021.
F. Mentzer, D. Minnen, E. Agustsson, and M. Tschannen. Finite scalar quantization: VQ-VAE made simple. arXiv preprint arXiv:2309.15505, 2023.
M. Moor, B. Rieck, M. Horn, C. R. Jutzeler, and K. Borgwardt. Early prediction of sepsis in the ICU using machine learning: A systematic review. Frontiers in Medicine, 8:607952, 2021.
Y. Oh, D. Lim, and S. Kim.
Stable neural stochastic differential equations in analyzing irregular time series data. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=4VIgNuQ1pY.
C. Pang, X. Jiang, N. P. Pavinkurve, K. S. Kalluri, E. L. Minto, J. Patterson, L. Zhang, G. Hripcsak, G. Gürsoy, N. Elhadad, et al. CEHR-GPT: Generating electronic health records with chronological patient timelines. arXiv preprint arXiv:2402.04400, 2024.
T. J. Pollard, A. E. W. Johnson, J. D. Raffa, L. A. Celi, R. G. Mark, and O. Badawi. The eICU collaborative research database, a freely available multi-center database for critical care research. Scientific Data, 5(1):1–13, 2018.
A. Rajkomar, E. Oren, K. Chen, A. M. Dai, N. Hajaj, M. Hardt, P. J. Liu, X. Liu, J. Marcus, M. Sun, et al. Scalable and accurate deep learning with electronic health records. NPJ Digital Medicine, 1(1):18, 2018.
P. Renc, Y. Jia, A. E. Samir, J. Was, Q. Li, D. W. Bates, and A. Sitek. Zero shot health trajectory prediction using transformer. NPJ Digital Medicine, 7(1):256, 2024.
Y. Rubanova, R. T. Chen, and D. K. Duvenaud. Latent ordinary differential equations for irregularly-sampled time series. Advances in Neural Information Processing Systems, 32, 2019.
C. W. Schmidt, V.
Reddy, H. Zhang, A. Alameddine, O. Uzan, Y. Pinter, and C. Tanner. Tokenization is more than compression. arXiv preprint arXiv:2402.18376, 2024.
S. N. Shukla and B. M. Marlin. A survey on principles, models and methods for learning from irregularly sampled time series. arXiv preprint arXiv:2012.00168, 2020.
A. K. Singh and D. J. Strouse. Tokenization counts: The impact of tokenization on arithmetic in frontier LLMs. arXiv preprint arXiv:2402.14903, 2024.
D. Spathis and F. Kawsar. The first step is the hardest: Pitfalls of representing and tokenizing temporal data for large language models. Journal of the American Medical Informatics Association, 31(9):2151–2158, Sept. 2024. doi: 10.1093/jamia/ocae090.
A. Stein, S. Sharpe, D. Bergman, S. Kumar, C. B. Bruss, J. Dickerson, T. Goldstein, and M. Goldblum. A simple baseline for predicting events with auto-regressive tabular transformers. arXiv preprint arXiv:2410.10648, 2024.
Q. Sun, Q. Yu, Y. Cui, F. Zhang, X. Zhang, Y. Wang, H. Gao, J. Liu, T. Huang, and X. Wang. Emu: Generative pretraining in multimodality. In International Conference on Learning Representations (ICLR), 2023.
H. Tang, D. Liu, and C. Shen. Data-efficient multi-scale fusion vision transformer. Pattern Recognition, 161:111305, 2025.
G. Team, T. Mesnard, C. Hardin, R. Dadashi, S. Bhupatiraju, S. Pathak, L. Sifre, M. Rivière, M. S. Kale, J. Love, et al. Gemma: Open models based on Gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
B. Theodorou, C. Xiao, and J. Sun. Synthesize high-dimensional longitudinal electronic health records via hierarchical autoregressive language model. Nature Communications, 14(1):5305, 2023.
H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin.
Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
Y. Wang, Z. Lin, Y. Teng, Y. Zhu, S. Ren, J. Feng, and X. Liu. Bridging continuous and discrete tokens for autoregressive visual generation. arXiv preprint arXiv:2503.16430, 2025.
L. Yu, J. Lezama, N. B. Gundavarapu, L. Versari, K. Sohn, D. Minnen, Y. Cheng, V. Birodkar, A. Gupta, X. Gu, et al. Language model beats diffusion – tokenizer is key to visual generation. arXiv preprint arXiv:2310.05737, 2023.
J. Zausinger, L. Pennig, K. Chlodny, V. Limbach, A. Ketteler, T. Prein, V. M. Singh, M. M. Danziger, and J. Born. Regress, don't guess – a regression-like loss on number tokens for language models. arXiv preprint arXiv:2411.02083, 2024.
X. N. Zhang, Y. Pu, Y. Kawamura, A. Loza, Y. Bengio, D. Shung, and A. Tong. Trajectory flow matching with applications to clinical time series modelling. In Advances in Neural Information Processing Systems, volume 37, pages 107198–107224, 2024.
Y. Zhao, Y. Xiong, and P. Krähenbühl. Image and video tokenization with binary spherical quantization. arXiv preprint arXiv:2406.07548, 2024.
C. Zhicheng, F. Shibo, Z. Zhang, X. Xiao, X. Gao, and P. Zhao. SDformer: Similarity-driven discrete transformer for time series generation. In
The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
Z. Zhou, J. Wang, D. Lin, and K. Chen. Scaling behavior for large language models regarding numeral systems: An example using Pythia. arXiv preprint arXiv:2409.17391, 2024.
Y. Zhu, Z. Wang, J. Gao, Y. Tong, J. An, W. Liao, E. M. Harrison, L. Ma, and C. Pan. Prompting large language models for zero-shot clinical prediction with structured longitudinal electronic health record data. arXiv preprint arXiv:2402.01713, 2024.

A Appendix: Data Set and Experimental Details

All code is available in the supplementary material and in the anonymized repository here: https://anonymous.4open.science/r/multivariateGPT_anon-4ED4/README.md. For all discrete models, bins are evenly spaced quantiles with resolution specified by the number of bins.

A.1 Oscillator Data

Ablation studies for learned variance and sampling, as well as additional training durations for discrete models, are shown in Figure 4.

Figure 4: Additional simple harmonic oscillator experiments. Rows show task and columns show model type (multivariate, fixed variance, max likelihood; multivariate, fixed variance, sampled; discrete, N=100, trained 10x). The task in row 1 is reconstruction of training trajectories from seed points. The task in row 2 is generalizing to a trajectory that was not part of the training data.

The multivariate model where variance is fixed and sampling is done by selecting the (c, v) tuple with maximum likelihood shows similar performance on trajectories in the training data, but generalization fails (column 1). The multivariate model where variance is fixed and sampling is performed instead of selecting the value with maximum likelihood does not have enough numeric precision to fit trajectories (default variance is 1).
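The two decoding strategies compared in this ablation can be sketched as follows. This is illustrative code under assumed values, not the released implementation: with a fixed standard deviation of 1, max-likelihood decoding emits the predicted mean directly, while sampled decoding draws from N(mu, 1), injecting noise far larger than the precision a smooth trajectory requires.

```python
import random

rng = random.Random(0)

def decode_max_likelihood(mu):
    """Max-likelihood decoding: the mode of N(mu, sigma^2) is mu itself."""
    return mu

def decode_sampled(mu, sigma=1.0):
    """Sampled decoding with fixed variance: each draw carries O(sigma)
    noise, which at the default sigma = 1 swamps fine trajectory detail."""
    return rng.gauss(mu, sigma)

mu = 0.25  # hypothetical predicted oscillator position
exact = decode_max_likelihood(mu)          # exactly 0.25
noisy = [decode_sampled(mu) for _ in range(1000)]
spread = max(noisy) - min(noisy)           # on the order of several sigma
```

This makes the ablation's outcome concrete: sampled decoding with the default variance cannot reproduce values to better than roughly one unit, consistent with the failed trajectory fits in column 2 of Figure 4.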
The discrete model with 100 bins trained for 10 times longer than the multivariate model fits the training data but fails to generalize (column 3).

A.2 MIMIC-IV Electrocardiogram

Data Set: The MIMIC-IV electrocardiogram data set consisted of 89,335 electrocardiogram records with patient age, gender, and voltage time series for each of the 12 standard electrocardiogram leads (I, II, III, aVF, aVR, aVL, V1, V2, V3, V4, V5, V6). The entire data set was divided into train, validation, and test splits with 78,141, 3,185, and 8,009 records, respectively. The raw data was collected at 500 Hz and down-sampled to 100 Hz. A 4th-order Butterworth bandpass filter with a frequency range of 0.5 to 20 Hz was applied to each time series to reduce drift and high-frequency noise.

Model and Training Details: Model specifications are shown below in Table 6. Models were trained for 20,000 iterations or until early stopping criteria were met. Warm-up and cosine decay for the learning rate were used, with the AdamW optimizer.

Model           Vocab  Param   n_embd  n_head  n_layer  LR       Batch  Context  Steps
Discrete n=50   466    32.8M   512     64      10       5×10^-4  65k    1600     11200
Discrete n=100  880    33.2M   512     64      10       5×10^-4  65k    1600     11200
Numeric         12     31.6M   512     64      10       5×10^-4  65k    1600     16500

Table 6: Model configurations. Variables are as follows. n_embd: embedding dimension; n_head: number of self-attention heads; LR: max learning rate; Batch: full batch size in tokens; Context: size of context window in tokens; Steps: number of training steps until early stopping criteria met
or max training steps met.

An example of lead reconstruction is shown below in Fig. 5.

Figure 5: Example lead reconstructions of a limb lead (III) and a precordial lead (V2).

A.3 eICU

Model and Training Details: The following details the different model specifications and hyperparameters for training models on the eICU data (Table 7).

Model          Vocab  Param   n_embd  n_head  n_layer  LR       Batch  Context  Steps
Discrete n=10  65     25.5M   512     8       8        1×10^-3  8192   512      5000
Discrete n=50  252    25.7M   512     8       8        1×10^-3  8192   512      5000
Numeric        14     25.4M   512     8       8        1×10^-3  8192   512      5000

Table 7: Model configurations. Variables are as follows. n_embd: embedding dimension; n_head: number of self-attention heads; LR: max learning rate; Batch: full batch size in tokens; Context: size of context window in tokens; Steps: number of training steps until early stopping criteria met or max training steps met.

Comparison Model Details: For TFM-ODE we used the implementation that was published for this data set in Zhang et al. [2024]. For the large language model baseline, we used Gemma 1.1-7B Team et al. [2024]. The following prompt was used: "You are tasked with predicting future values of heart rate and mean arterial pressure. the input data is: <data in csv format> give your best estimate to what hr and map will be at the following time: <times for estimation> and The norepi_inf at these times is: <norepi infusion rates> use csv output with the following columns: time, hr, map."

Evaluation Details: To effectively combine MAP and heart rate for evaluation, we applied MinMaxScaler to normalize them to a consistent range. We fitted and transformed the ground truth values for MAP and HR separately and then applied the same transformation to the predicted values. MAP and HR were treated as a 2D array, and the MSE was calculated between the predictions and the ground truth.
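The evaluation procedure described above can be sketched as follows. This numpy version mirrors the behavior of scikit-learn's MinMaxScaler fitted on ground truth only, with the identical transform then applied to predictions; the MAP and heart-rate values are assumed toy inputs, not data from the study.

```python
import numpy as np

def minmax_fit_transform_eval(truth, pred):
    """Scale each column of `truth` to [0, 1] (fit on ground truth only),
    apply the identical transform to `pred`, and return the MSE between
    the scaled arrays."""
    truth = np.asarray(truth, dtype=float)
    pred = np.asarray(pred, dtype=float)
    lo = truth.min(axis=0)
    span = truth.max(axis=0) - lo
    truth_scaled = (truth - lo) / span
    pred_scaled = (pred - lo) / span   # same transform, fitted on truth
    return float(np.mean((truth_scaled - pred_scaled) ** 2))

# Toy example: columns are MAP and heart rate (hypothetical values).
truth = [[60.0, 70.0], [80.0, 90.0], [100.0, 110.0]]
perfect = minmax_fit_transform_eval(truth, truth)   # 0.0 for exact predictions
```

Fitting the scaler on the ground truth alone keeps the error metric anchored to the observed physiological range, so a constant offset in the predictions is penalized proportionally to that range rather than to the predictions' own spread.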
To quantify performance variation associated with model training stability, we trained each model 3 times from different random initializations and quantified the variation in MSE (Table 8).

Model            3h               6h               9h               12h
Discrete (n=10)  0.1160 ± 0.0265  0.1084 ± 0.0223  0.1063 ± 0.0216  0.1017 ± 0.0191
Discrete (n=50)  0.0317 ± 0.0087  0.0296 ± 0.0100  0.0291 ± 0.0101  0.0274 ± 0.0111
Multivariate GPT 0.0125 ± 0.0006  0.0113 ± 0.0007  0.0106 ± 0.0011  0.0097 ± 0.0008

Table 8: Mean squared error (± standard deviation) for MAP and heart rate from various seed lengths for each model. Error intervals are derived from multiple training runs.

A.4 Physionet ICU Data

The Physionet ICU data consisted of 65,155 unique records for patients admitted to the ICU. Data include demographics, vital signs, and lab values totaling 36 unique categorical and numeric measurement classes Moor et al. [2021]. Numeric classes were z-score normalized except for SpO2, which used a logistic scaling due to a ceiling effect: values cannot be greater than 100% and are commonly at or near this value. Model specifications are shown below in Table 9. Models were trained using a warm-up period of approximately 5% of the data followed by cosine learning rate decay. The AdamW optimizer was used.

Model Vocab Param n_embd n_head n_layer LR Batch Context
LLMPR: A Novel LLM-Driven Transfer Learning based Petition Ranking Model

Avijit Gayen2, Somyajit Chakraborty3, Mainak Sen2, Soham Paul2, Angshuman Jana1*

1* Indian Institute of Information Technology Guwahati, Bongora, Guwahati, 781015, Assam, India.
2 Techno India University, West Bengal, Salt Lake, Kolkata, 700091, West Bengal, India.
3 University College Cork, College Rd., Cork, T12 K8AF, Cork, Ireland.

*Corresponding author(s). E-mail(s): angshuman@iiitg.ac.in; Contributing authors: avijit.g@technoindiaeducation.com; somyajitchakraborty@ucc.ie; mainaksen.1988@gmail.com; soham211001001105@technoindiaeducation.com

Abstract

The persistent accumulation of unresolved legal cases, especially within the Indian judiciary, significantly hampers the timely delivery of justice. Manual methods of prioritizing petitions are often prone to inefficiencies and subjective biases, further exacerbating delays. To address this issue, we propose LLMPR (Large Language Model-based Petition Ranking), an automated framework that utilizes transfer learning and machine learning to assign priority rankings to legal petitions based on their contextual urgency. Leveraging the ILDC dataset comprising 7,593 annotated petitions, we process unstructured legal text and extract features through various embedding techniques, including DistilBERT, LegalBERT, and MiniLM. These textual embeddings are combined with quantitative indicators such as gap days, rank scores, and word counts to train multiple machine learning models, including Random Forest, Decision Tree, XGBoost, LightGBM, and CatBoost. Our experiments demonstrate that Random Forest and Decision Tree models yield superior performance, with accuracy exceeding 99% and a Spearman rank correlation of 0.99. Notably, models using only numerical features achieve nearly optimal ranking results (R^2 = 0.988, ρ = 0.998), while LLM-based embeddings offer only marginal gains.
These findings suggest that automated petition ranking can effectively streamline judicial workflows, reduce case backlog, and improve fairness in legal prioritization.

arXiv:2505.21689v1 [cs.CL] 27 May 2025

Keywords: Petition Ranking, Legal Dataset, Transfer Learning, Large Language Model (LLM), Judiciary System

1 Introduction

The judiciary system has been a part of any nation since ancient times. Historically, it was controlled and served by the monarch of a kingdom or an empire. Since then, the judiciary has become a separate, independent part of any nation, irrespective of the nature of the government. It not only serves as a protector of the constitution of the nation but also ensures the fundamental rights of the citizens of the country. The function of the judiciary is to provide timely justice to its citizens, irrespective of their caste, colour, and financial status. The smooth functioning of the judiciary ensures the prosperity of the nation. The timely availability of legitimate justice enhances the morale of society; hence, it reduces corruption as well as improves social systems. On the other hand, a delayed judicial process deprives the common citizen of their deserved justice [1]. [2] examines factors, such as legal infrastructure and out-of-date laws, which influence delays in judicial decisions. The backlog in the judiciary system has been observed as one of the major issues. It is an open truth that delayed justice is often compromised by a lack of evidence, which degrades over time. In the Indian judiciary system, currently around 30 million cases are pending. According to statistics, approximately 73,000 cases are pending per judge [3]. The existing complicated paperwork and rigid
https://arxiv.org/abs/2505.21689v1
rules of assessing justice make it a more time-consuming process. Thus, the accessibility of justice for the common, marginalized people of society is far off. On the other hand, this delayed process of justice is capitalized on by the rich for their own self-interest [4]. This situation develops partiality in the judicial system. Moreover, political and rich people disrupt the independence of the judiciary [5]. Also, the lack of coherent data across the different courts and the absence of fixed deadlines for the completion of cases make this situation worse. Along with the large set of adjourned cases, the increasing crime rate over the years and the increasing count of new Public Interest Litigations (PILs)^1 overload the judiciary system day by day [7]. Finally, grey areas of the constitution majorly hinder the fast judgment process in cases dealing with the fundamental rights of citizens. There are fast-track courts and separate courts, tribunals, etc., that have been set up to prioritize and quickly settle cases by expert judges. However, the introduction of technology has become indispensable to handle backlogs, data coherency, etc., in the judiciary system. According to the Indian Law Commission [8], it has been suggested to reduce the oral argument time in court proceedings through the introduction of technology-based proceedings to improve the backlogs. In this context, legal judgment prediction has become a point of interest for researchers in the last few years in the domain of machine learning and artificial intelligence. In [9], the authors mainly focused on predicting judgment in the context of various cases.

^1 PIL is a type of legal action where an individual or group initiates a case to protect the rights and interests of the general public or a marginalized group, even if they haven't been directly harmed. Essentially, it is using the legal system to address issues of public concern and social change [6].
The ever-increasing number of new petitions overloads the judiciary system and is the main cause of its backlog. This backlog not only overloads the entire judiciary system but also delays important cases behind a long list of comparatively trivial ones. The manual ranking of petitions often faces biases, which hinder the urgency and fast decision of petitions on sensitive and current issues. Fast-track courts exist to address high-priority cases; however, a quick and automated ranking system for petitions can negate manual biases. The introduction of such a system not only negates manual biases but also identifies the importance of petitions in order to rank them, which in turn accelerates the entire judiciary system and reduces the existing backlogs. There are very few related works [10–12], and we could not find any that directly addresses this issue. Thus, there is a strong requirement for the development of such a petition ranking system, which will reduce the backlogs by prioritising the petitions, helping them to get a quick final decision. In this work, we predict the rank of petitions which have been initially
accepted for further legal proceedings. Specifically, we predict the rank of accepted petitions based on the statement of the petition framed by the legal practitioner. This ranking system would not only work as an automated system that identifies importance based on the contextual sensitivity and fundamental judicial priority of the petitions filed, but would also help to automatically reduce the pressure of trivial cases and cases with the least legal basis. Hence, precious court time is saved, allowing discussion of more important cases relevant to social improvement. The major challenges of this work are:

• Petitions are unstructured, i.e., they differ from one case to another.
• The languages could be different.
• There exists a large corpus of data.

Therefore, predicting the rank of initially accepted petitions is challenging as well as indispensable for developing a decision support system in this context. This system would not only reduce the backlog of the judiciary system but also provide a fairer and less biased judiciary. To predict the ranking of accepted petitions, we propose the following methodology, which leverages a structured framework using machine learning techniques. In our work, we have used the ILDC dataset [13], containing 7,593 annotated petition records, as the foundational data source. Initial preprocessing of the textual data focuses on handling unstructured text by eliminating noise such as stop words, special characters, and non-alphanumeric tokens. Techniques like stemming and lemmatisation ensure the data is standardized and simplified for downstream analysis. Further, we employ various advanced text embedding methods, including DistilBERT, LegalBERT, and MiniLM, to transform the text into meaningful numerical representations. Alongside these embeddings, we also use numerical features: gap days, i.e., the time gap between acceptance of the petition and its first proceeding, and word counts.
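Combining the text embeddings with the numerical features just described amounts to stacking them column-wise into one matrix per petition set. The sketch below uses random vectors as stand-ins for the real embeddings and toy values for gap days and word counts; the dimensions and numbers are assumptions for illustration only.

```python
import numpy as np

# Hypothetical shapes: 4 petitions, 8-dim text embeddings (real
# DistilBERT/LegalBERT/MiniLM sentence embeddings are 384-768 dimensional).
rng = np.random.default_rng(0)
text_embeddings = rng.normal(size=(4, 8))

# Numerical features per petition (toy values): gap days and word count.
gap_days = np.array([12.0, 3.0, 45.0, 7.0])
word_count = np.array([850.0, 1200.0, 430.0, 990.0])

# Unified feature matrix: embedding columns followed by numeric columns.
features = np.hstack([text_embeddings, gap_days[:, None], word_count[:, None]])
assert features.shape == (4, 10)
```

A matrix of this form is what tree-based rankers such as Random Forest or XGBoost would then be trained on, with the petition rank as the regression target.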
These features are combined into a unified feature matrix to effectively train the machine learning models. We further incorporate a wide range of machine learning models, including Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, ElasticNet, and Linear Regression. These models are selected to capture complex relationships within the data. The ranking system uses a combination of numerical and textual features, where models predict the petition rank based on semantic relevance and procedural urgency. Evaluation metrics such as accuracy, precision, recall, F1-score, and Spearman rank correlation ensure robust assessment of model performance. Cross-validation techniques, including K-Fold and Monte Carlo, validate the generalizability of the models. Random Forest and Decision Tree models demonstrate superior performance, achieving high accuracy and rank correlation values, making them the most effective for petition ranking tasks. This systematic approach not only identifies high-priority cases but also contributes to reducing the judicial backlog by streamlining the ranking of petitions. The novelty of the work is to predict the rank of the "accepted" petitions using machine learning techniques from unstructured legal petition data. The contributions of the work can be listed as follows:

• We have employed various LLM-based state-of-the-art text embedding techniques, including a domain-specific text embedding method, i.e., LegalBERT, to convert the text into a feature vector.
• We have also
https://arxiv.org/abs/2505.21689v1
included various numerical features, i.e., gap days, rank score, word count, and sentence count, in our model.
•We have measured the performance of several machine learning algorithms on our dataset to identify the most suitable model in this context.
•Finally, we have validated our model prediction by comparing the actual labels of each petition against the ranking obtained by our proposed method using Spearman rank correlation. We found that Random Forest gave the best result.
Table 1 presents all the abbreviations and terminologies that we shall use in the course of this study. In the next section, we highlight the major related works. In Section 3, we discuss the proposed work for the petition ranking framework, and the results are shown in Section 4. We finally conclude with the future direction of this work in Section 5.

2 Related Work

In this section, we present a comprehensive survey of work related to the proposed approach. Initially, we highlight works related to judiciary systems and their issues. Further, we discuss some recent relevant works on machine learning techniques that have been used to mitigate the litigation load of the judiciary system. We also outline some recent works on petition decision support systems. Finally, we highlight works related to AI-enabled judiciary frameworks in recent times. Together, these works reveal the research gap in current trends in the context of AI in the judiciary system.

Table 1 : List of Abbreviations

Abbreviation  Definition
LLM           Large Language Model
LLMPR         Large Language Model-based Petition Ranking
PIL           Public Interest Litigation
BERT          Bidirectional Encoder Representations from Transformers
TF-IDF        Term Frequency-Inverse Document Frequency
RF            Random Forest
XGBoost       Extreme Gradient Boosting
LGBM          Light Gradient Boosting Machine
CatBoost      Categorical Boosting
ELNet         ElasticNet Regression
LR            Linear Regression
NN            Neural Network
ILDC          Indian Legal Documents Corpus
MSE           Mean Squared Error
MAE           Mean Absolute Error
Sp. Corr.     Spearman Rank Correlation
Exp. Var.     Explained Variance Score

2.1 Judiciary System & Litigation of the System

The judiciary system has been a part of the social system since the ancient age of civilization. Based on its beliefs and social and religious values, the judiciary system of any nation pertinently incorporates these into its legal justice [14, 15]. Though the legal proceedings of any nation develop based on social and religious values, they have a strong impact and influence across nations due to the exchange of culture, education, trade, etc. Though the smooth functioning of the judiciary system ensures timely justice and hence upholds the legal rights of citizens, judiciary systems face many potential issues irrespective of the country [16–18]. The delay in delivering justice has been a major issue of the judiciary system [1, 3]. In [3], Singh et al. observed that the average period of delivery of justice is over multiple decades in the Indian judiciary system. According to the authors, approximately 30 million cases are currently pending in the Indian judiciary system, with around 73,000 cases pending per judge. In some other works [19, 20], the authors describe the hindrance to accessibility of justice by common marginal
citizens and how corruption has influenced the judicial system. These works also discuss the suppression of legitimate justice under strong political influence. In some works [21], the authors highlighted the lack of data coherency across different courts. In another work [22], Barno et al. observed that the absence of a proper schedule and deadlines for the completion of cases makes the system inefficient. In some work [7], the author pointed out that the increasing crime rate over the years, as well as the increasing count of new Public Interest Litigations (PILs), overloads the judiciary system day by day. Along with all the above issues, the lack of interpretation of the constitutional fundamental rights of citizens majorly delays the judgment process [23].

2.2 Machine Learning in the Judiciary System

While fast-track courts and specialized tribunals have been established to expedite case resolution through expert judges, the integration of technology has become essential in addressing judicial backlogs and ensuring data coherence in the legal system. In a recent work [24], the authors observed that strong enactment of consumer protection law would improve the growth of e-commerce business. As per the recommendations of the Indian Law Commission, it has been proposed to decrease the duration of oral arguments in court proceedings [25] and to incorporate technology-based proceedings as a means to alleviate backlog issues [26, 27]. In this context, the prediction of legal judgments has garnered attention from researchers in recent years within the field of machine learning and artificial intelligence [28]. These works [9] mainly focus on predicting judgment in the context of various cases. Some relevant works [29, 30] discussed the introduction of e-courts to enhance the judiciary system. The authors of [31] explored the use of LLMs (Large Language Models) for summarizing Italian legal news.
They showed that LLMs outperform older models such as BART and T5 in terms of grammatical accuracy. The authors in [32] experimented on the CAIL2018 dataset to show superior accuracy and robustness; in that work, the authors combined semantic matching with causal relationship learning.

2.3 Petition Decision Support System

The continuous influx of new petitions overwhelms the judicial system, emerging as the primary cause of its backlog. Consequently, there is a pressing need for a rapid and automated decision support system to handle the initial determination of petitions before proceeding to court discussions. There are limited studies [33] tackling this issue. In that work, the author discusses a method of predicting the initial decision of a petition filed through a web portal. Though the work deals with a structured dataset, as the data is collected from a web portal, there is a significant demand for a decision support system that decreases the volume of petitions at the initial stage. This, in turn, would contribute to the reduction of the backlog in the judicial system. The authors in [34] propose a hybrid deep learning model based on CNN and Bi-LSTM to automatically extract features from the title and body of a petition. The authors of [35] have proposed a BERT-based fine-tuned multi-label classifier on a dataset from the
Taiwanese Joint Platform. In [36], a decision support system was built using CNN+BiLSTM to predict the court decision based on past data. Zekun et al. in [37] proposed an explainable convolutional neural network model to enhance the e-petition tagging system on the Message Board for Leaders (MBL) in China. This system uses layerwise relevance propagation (LRP), an understandable and explainable method, to provide interpretability compared to several baseline models.

2.4 AI-enabled Judiciary Framework

LawGPT-zh is a Chinese language model based on ChatGLM-6B with 16-bit LoRA instruction tuning; it is trained on legal question-and-answer datasets. LaWGPT [38] is a series of models pre-trained on large-scale Chinese legal text databases and fine-tuned on legal dialogue question-and-answer datasets. Lawyer LLaMA [39], a Chinese legal LLM, provides and generates legal advice. The authors of [40] showed that lightweight LSTM language models achieved better results on short legal text classification with reduced computational overhead compared to larger models. Recently, AI and law have become a pivotal point of interest for researchers, producing a plethora of works that help to accelerate the slow judiciary system. In spite of the ample contributions of recent work in this domain, we do not find any work that addresses a ranking framework for the “Accepted” petitions, which could not only help reduce existing backlogs but also mitigate the biases of manual intervention. Thus, there is a strong need to develop such a petition ranking framework to accelerate the slow judiciary system.

3 Proposed Work

This section describes the proposed model for the ranking of petitions. We initially describe the details of the legal labeled corpus used in this work. We further describe the methodology adopted to rank the “Accepted” petitions. We also outline the features used in the proposed model.
It includes the data pre-processing method used to clean the dataset and the various text embedding techniques used in our work. We further describe the various machine learning models adopted in our proposed method.

3.1 Dataset

In this work, we use the “ILDC” (Indian Legal Documents Corpus) dataset [13]. It is a large corpus of over 35,000 Indian Supreme Court cases that has been annotated with the original court decisions. A subset of this large corpus, containing 7,593 data points, is used as our dataset. The dataset has 4 columns: a) ‘text’, b) ‘label’, c) ‘split’, d) ‘name’. The description of the columns is as follows:
•Text: The ‘text’ column consists of the annotated cases in plain text. The data type of this column is ‘object’. There are 7,593 text recordings, as there are no ‘Null’ values in this column. Each data point, i.e., petition/case recording, has an average length of over 20,000 words.
•Label: The ‘label’ column holds the initial decision of the corresponding petition/case recording. It has values of 1 and 0, which signify what decision was granted – whether the petition was accepted (1) or rejected (0). This column has a data type of ‘int64’ and also
has 7,593 values, without any empty/null recordings. There are 3,194 cases which were accepted (decision – 1) and 4,399 cases which were rejected (decision – 0).
•Split: The ‘split’ column again has a data type of ‘object’ and basically helps us identify the splitting of the data. This column has 3 types of values – Train, Test, and Development. The number of petitions belonging to these 3 categories is as follows: Training has 5,082 petitions assigned, Testing has 1,517 petitions assigned, and Development has 994 assigned to it. This signifies that 5,082 petitions (texts) are used for training the model, 1,517 petitions for testing, and 994 petitions for development.
•Name: The last column also has ‘object’ as its data type. This column tells us the name of the ‘.txt’ file from which the case recording/petition was taken. The filename generally has the format ‘year caseno.txt’: first the year of the filing, then the case number, for example, ‘2008 1460.txt’.
To avoid data leakage, we respected the predefined split column in the ILDC dataset. Train and test partitions are non-overlapping. We further confirmed this by computing pairwise TF-IDF cosine similarity of bigrams across splits, which yielded a maximum similarity of 0.765, well below the 0.80 threshold typically used to flag near-duplicates. To generate a continuous urgency ranking for each petition, we computed the number of days between the petition filing date and its first listed court hearing (denoted as gap_days). This value was then transformed using inverse square scaling and log normalization to obtain the final rank score, where smaller delays result in higher urgency. This method is inspired by prior judicial delay studies that treat early court attention as a proxy for systemic priority. A snippet of the dataset is shown in Table 2.
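The gap-day transformations just described can be sketched in a few lines; this is a minimal pure-Python illustration with toy gap values (the paper's actual extraction of filing and hearing dates is not reproduced here):

```python
import math

def rank_scores(gap_days):
    """Two urgency transformations of the filing-to-hearing gap:
    log scaling and inverse-square scaling (smaller delay -> higher urgency)."""
    log_scores = [math.log(1 + g) for g in gap_days]
    inv_sq_scores = [1.0 / (g * g) for g in gap_days]
    return log_scores, inv_sq_scores

# toy gaps of 1, 10, and 100 days
logs, invs = rank_scores([1, 10, 100])
print(invs)  # [1.0, 0.01, 0.0001]
```

The inverse-square score decays sharply with delay, while the log score compresses large delays, which matches the stated goal of damping extreme values.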
Table 2 : Format of ILDC dataset

Text                                                Label  Split  Name
F. NARIMAN, J. Leave granted. In 2008, the Pu...    1      train  2019 890.txt
S. THAKUR, J. Leave granted. These appeals ar...    0      train  2014 170.txt
Markandey Katju, J. Leave granted. Heard lear...    1      train  2010 721.txt
civil appellate jurisdiction civil appeal numb...   0      dev    1989 75.txt
original jurisdiction writ petitions number. 8...   0      dev    1985 233.txt
civil appellate jurisdiction civil appeal numb...   1      test   1986 397.txt
criminal appellate jurisdiction criminal appea...   0      test   1993 98.txt

3.2 Methodology

In this section, we describe the detailed methodology adopted in our proposed model. The methodology adopted in this study integrates robust preprocessing techniques, the most relevant text embedding techniques, state-of-the-art machine learning algorithms, and evaluation metrics to develop a predictive framework for legal petition rankings. The framework combines numerical features extracted from the petitions using an LLM-based technique with textual embeddings extracted from the large petition texts. In this section, we outline the theoretical foundations, mathematical formulations, and practical implementations of the proposed model. The methodology involves the following key steps:
1. Data Preprocessing: As the petitions
are largely unstructured in nature, we initially cleaned and preprocessed them to remove noise and irrelevant information. This included steps such as removing stop words, tokenization, stemming, and lemmatization.
2. Text Embedding: We further use various LLM-based text embedding techniques to convert the textual data into numerical features suitable for machine learning algorithms.
3. Numerical Feature Integration: We next incorporate derived numerical features extracted from the text of each petition, in addition to the text-embedding features, with the help of OpenAI’s GPT-4o prompts [41]. The integration of these features improves the rank predictability of our model.
4. Ground Truth Preparation: We further prepare the rank of the petitions from data extracted from the petition texts. The ranking, based on the numerical score extracted from the text of each petition, is used as the ground truth of the proposed model for validation and testing.
5. Model Selection & Training: We use the seven most relevant ML models, i.e., Random Forest, Linear Regression, ElasticNet, Decision Tree, XGBoost, LightGBM, and CatBoost, for this petition ranking model. These models were chosen for their balance between interpretability and performance in this context.
6. Model Evaluation: At the end of the work, the model’s performance was evaluated using metrics such as accuracy, precision, recall, F1-score, and AUC. The comparison based on the above metrics reveals the model’s ability to correctly rank the “Accepted” legal petitions.
We also include the schematic diagram in Fig. 1, which describes the detailed flow of the work adopted in the proposed technique to predict the desired rank of the “Accepted” petitions for further judicial processing after their initial decision.
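The cross-split near-duplicate check mentioned in Section 3.1 can be sketched with scikit-learn; the exact vectorizer settings used in the study are not specified beyond bigram TF-IDF, so the configuration and texts below are illustrative assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def max_cross_split_similarity(train_texts, test_texts):
    """Maximum pairwise TF-IDF bigram cosine similarity between splits.
    Values near 1.0 (e.g., above ~0.80) would flag near-duplicate leakage."""
    vec = TfidfVectorizer(ngram_range=(2, 2))          # word bigrams
    X = vec.fit_transform(train_texts + test_texts)
    n = len(train_texts)
    sims = cosine_similarity(X[:n], X[n:])             # train rows vs test rows
    return sims.max()

# toy petition snippets with no shared bigrams
train = ["leave granted the appeal is allowed",
         "writ petition dismissed with costs"]
test = ["the civil appeal stands disposed of"]
score = max_cross_split_similarity(train, test)
print(score)  # 0.0 (disjoint bigram vocabularies)
```

A real check would run this on the full ILDC train/test texts and report the maximum (0.765 in the paper).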
The raw petition dataset (ILDC dataset) is taken, and with the help of Natural Language Processing (NLP), each petition is converted into tokens (tokenization), i.e., the words are represented as tokens for better use by the model. Next, the tokenized texts are cleansed again with the help of NLP, where all stop words (words which are not of much value, like ‘a’, ‘an’, ‘the’, ‘he’, etc.), along with punctuation marks, non-alphanumeric characters, and special characters, are removed. Once cleaned, the filtered tokens are passed through ‘Stemming’ and ‘Lemmatisation’, which reduce words to their root forms or dictionary forms based upon context, for better understanding and simplification. From these fully preprocessed tokens, word embeddings are generated with the help of LLMs. GPT-based LLMs are used so that the word embeddings are specific, meaningful, and contextual. GPT-based LLMs also help with the feature engineering part, from which we extract the ‘Date of Filing of the petition/case’ and the ‘Date of the first hearing/proceeding’ for each petition. Once we have these two dates, we calculate the gap in days between them for each petition. Finally, we use metrics like the inverse score and log score for ranking these petitions according to the urgency of their dates. The results are produced by several models, and hence the results from all
the models are evaluated by checking the correlation between the results produced by each of them. This is achieved by Spearman rank correlation, which finally tells us which model’s ranking is preferred.

Fig. 1 : Schematic diagram of the Petition Ranking Model using Machine Learning.

3.3 Features Used in the Learning Model

In this section, we describe the features used for the classification and the pre-processing method applied before performing LLM-based text embedding vectorization of the text field of the dataset to deduce the feature vectors. We further describe the method of numerical feature engineering that is integrated into our model.

3.3.1 Pre-processing of Large Text Petitions

We initially perform rigorous text preprocessing of the large petitions (average number of words per petition ≈ 30,000). At the very first step, we filter from the dataset the raw petitions which are labeled as “Accepted” after the initial decision. To handle the raw unstructured text data, we perform tokenization on the text field to create a structured input for further analysis. Next, we eliminate stop words that do not contribute much to the meaning of the text. Further, we remove non-alphanumeric characters, symbols, and punctuation marks from the tokenized text. This step helps to focus on the essential words and maintain consistency. We finally apply stemming and lemmatization techniques to reduce words to their base or root forms. We also identified and handled rare words, either by removing them or replacing them with a common token. This technique has been adopted to prevent the model from overfitting on infrequently occurring terms. The major challenges in preprocessing the textual data of legal petitions are its voluminous size and unstructured format. This preprocessing not only standardizes the data to ensure uniformity but also eliminates noise to enhance the quality of downstream analysis.
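The cleaning steps above can be sketched as a tiny pipeline; this is a deliberately simplified stand-in (a toy stop-word set and a naive suffix stripper instead of full stemming/lemmatisation), not the study's actual implementation:

```python
import re

# tiny illustrative stop-word set; real pipelines use a full list
STOP_WORDS = {"a", "an", "the", "he", "she", "of", "in", "is", "are", "to"}

def preprocess(petition_text):
    """Simplified pipeline: lowercase, strip non-alphanumeric characters,
    tokenize on whitespace, drop stop words, then crudely strip suffixes."""
    text = petition_text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # remove symbols/punctuation
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    # naive suffix stripping as a stand-in for stemming/lemmatisation
    return [re.sub(r"(ing|ed|s)$", "", t) if len(t) > 4 else t for t in tokens]

print(preprocess("The petitions were filed in 2008!"))
# ['petition', 'were', 'fil', '2008']
```

The over-aggressive "fil" output illustrates why real pipelines use dictionary-aware stemmers or lemmatisers rather than bare suffix rules.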
The preprocessing pipeline begins with converting all text to lowercase, which eliminates discrepancies due to case sensitivity. Non-alphanumeric characters, such as special symbols and punctuation, are removed to retain only meaningful tokens.

3.3.2 Large Language Model Based Embedding

In this section, we discuss the detailed mathematical models underlying the various text embedding techniques employed in our proposed model. We employed a suite of state-of-the-art Large Language Models (LLMs), each tailored to capture unique aspects of semantic understanding and contextual relationships in legal texts.

DistilBERT

This embedding technique is a distilled version of the BERT embedding method, utilized for its efficiency and scalability. DistilBERT [42] retains 97% of BERT’s language understanding capabilities with only 60% of the parameters, making it suitable for resource-constrained environments. The embedding extraction process involves the transformer architecture’s attention mechanism, as introduced by Vaswani et al. [43]. The mathematical model used in DistilBERT can be expressed as follows:

H = \mathrm{softmax}\!\left( \frac{Q K^{\top}}{\sqrt{d_k}} \right) V, \quad (1)

where Q, K, and V are the query, key, and value matrices. The embeddings generated by DistilBERT ensure a balance between computational efficiency and semantic richness.

MiniLM

This text embedding technique is designed for lightweight applications. It employs deep self-attention
distillation to achieve compact yet high-quality embeddings [44]. Its mechanism focuses on preserving alignment between teacher and student model attention outputs, with the objective:

\mathcal{L}_{\mathrm{distill}} = \frac{1}{T} \sum_{t=1}^{T} \left\| A^{(t)}_{\mathrm{teacher}} - A^{(t)}_{\mathrm{student}} \right\|^2, \quad (2)

where A represents the attention matrix. MiniLM demonstrated strong performance on recall tasks while maintaining computational efficiency.

Flan-T5

This embedding technique leverages pre-training objectives, including span corruption and multitask fine-tuning. It generalizes across diverse tasks and models input-output pairs effectively through its encoder-decoder structure [45]. Given a sequence X, it predicts the output Y by maximizing:

P(Y \mid X) = \prod_{i=1}^{m} P(y_i \mid X, y_1, \ldots, y_{i-1}), \quad (3)

where y_i represents each token in the output sequence. Flan-T5 proved particularly adept at handling complex relationships in legal texts.

LegalBERT

This text embedding method is a domain-specific variant of BERT. It was pre-trained on legal texts to capture the unique linguistic patterns and domain-specific jargon [46]. Its embeddings were fine-tuned to identify semantic and procedural nuances critical for legal analysis. The architecture follows the standard transformer setup, ensuring domain-specific relevance.

E5

This text embedding method is designed for general-purpose search and retrieval tasks and emphasizes efficiency in semantic matching [47]. The embedding process leverages contrastive learning objectives to ensure high-quality representations for similarity tasks. The formal representation of the technique can be stated as follows:

\mathcal{L}_{\mathrm{contrastive}} = -\log \frac{\exp(\mathrm{sim}(e_i, e_j))}{\sum_k \exp(\mathrm{sim}(e_i, e_k))}, \quad (4)

where sim(e_i, e_j) represents the cosine similarity.

3.3.3 Numerical Feature Engineering

To improve the ranking predictability of the proposed model, we integrate a few numerical features.
This feature integration complements the textual embeddings by capturing temporal and structural aspects of the legal proceedings. We include the following key features in our model: a) gap_days, i.e., the number of days between petition acceptance and the first proceeding, b) rank_score, c) word_count, and d) sentence_count. The gap_days feature is computed as:

\mathrm{gap\_days} = \left| \mathrm{date}_{\mathrm{proceeding}} - \mathrm{date}_{\mathrm{acceptance}} \right|, \quad (5)

which provides a direct measure of procedural delays in the judiciary system. We use two different scalings of the rank score, i.e., a) rank_score_log and b) rank_score_inverse_square, to capture different aspects of this score. The mathematical models of these features can be stated as:

\mathrm{rank\_score\_log} = \log(1 + \mathrm{gap\_days}), \quad (6)

\mathrm{rank\_score\_inverse\_square} = \frac{1}{\mathrm{gap\_days}^2}. \quad (7)

These transformations mitigate the influence of extreme values, improving interpretability and model robustness.

3.3.4 Feature Integration

Further, we integrate the two different types of features: a) textual embedding features and b) numerical features. These features are combined into a unified feature matrix for the machine learning models. Let F_text be the textual embedding features with d dimensions and F_num the numerical features with p dimensions; the combined feature matrix F can be represented as follows:

F = \left[ F_{\mathrm{num}} \;\; F_{\mathrm{text}} \right], \quad (8)

where F ∈ R^{n×(p+d)} for n samples. As shown in Figures 2, 3, and 4, we employed a three-stage zero-shot prompting [48] pipeline using GPT-4o [41] to enrich the dataset.

Stage 1: Extract Petition Acceptance and Proceeding Dates
Prompt (User): Analyze this legal dataset. Provide full dataset info. Then
extract the petition acceptance date and first proceeding date. Compute the number of days' gap between them. [attached raw ILDC dataset]
Response (GPT-4o): The dataset contains 7,593 entries and 4 columns: text, label, split, and name. Dates in various formats are parsed using regular expressions. Keywords like “filed”, “admitted” (for acceptance) and “hearing”, “scheduled” (for proceedings) are used. The gap in days is calculated, and the reciprocal is stored as a rank score. [dataset generated]
Fig. 2 : Prompt-response interaction for extracting and computing temporal gap features.

Stage 2: Compute Gap Days and Rank Score
Prompt (User): Use the extracted dates to calculate the gap days (difference between petition acceptance and first proceeding). Add the gap to the dataset. Compute the reciprocal of the gap as the rank score.
Response (GPT-4o): A new column gap_days is added to the dataset, capturing the time difference. The rank score is calculated as rank_score = 1/gap_days². This will serve as a priority metric for ranking accepted petitions. [dataset updated]
Fig. 3 : Prompt-driven calculation of temporal features and ranking metric.

This three-stage pipeline enriches the dataset systematically. The first stage extracts critical temporal markers from petition texts. The second stage computes gap-based urgency scores. Finally, the third stage adds essential text-level statistics to support regression modelling.

3.4 Embedding Pre-processing and Representation

Petition texts were used as-is, without lowercasing, lemmatization, or stopword removal, to preserve the natural input format expected by pretrained language models. For transformer-based models (RoBERTa, LegalBERT, DistilBERT, FLAN-T5, etc.), we used the HuggingFace AutoTokenizer and AutoModel interfaces with a maximum token length of 128 and truncation enabled.
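The mean-pooling step applied to transformer hidden states can be illustrated with numpy; dummy tensors stand in for real model outputs, and the shapes and values here are illustrative assumptions rather than actual model activations:

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    """Average token embeddings, ignoring padding positions.
    hidden_states: (seq_len, dim); attention_mask: (seq_len,) of 0/1."""
    mask = attention_mask[:, None].astype(float)  # (seq_len, 1)
    summed = (hidden_states * mask).sum(axis=0)   # sum over real tokens only
    counts = mask.sum()                           # number of real tokens
    return summed / counts                        # (dim,) fixed-size embedding

# toy example: 4 tokens (the last one is padding), 3-dimensional states
h = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [9.0, 9.0, 9.0]])  # padding row; must not affect the result
mask = np.array([1, 1, 1, 0])
pooled = mean_pool(h, mask)
print(pooled)  # [0.3333... 0.3333... 0.3333...]
```

With a real model, `hidden_states` would come from the encoder's last layer and `attention_mask` from the tokenizer; masking before averaging is what keeps padding tokens from skewing the embedding.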
For each text, we extracted the final hidden states and applied mean pooling across tokens to obtain fixed-size vector embeddings. Sentence-level embedding models like MiniLM and Instructor-XL were accessed via the .encode() method of SentenceTransformers, which internally applies optimized pooling and normalization. No models were fine-tuned; all were used in inference mode. The resulting embedding vectors were concatenated with the numeric features before regression.

Stage 3: Augment Dataset with Text Statistics
Prompt (User): On the existing dataset, compute basic text statistics for each petition: word count, sentence count, and average word length. Add these as new columns.
Response (GPT-4o): The dataset is updated with three new fields: word_count, sentence_count, and avg_word_length, computed using standard NLP preprocessing. These features complement the rank score and support downstream regression modeling. [dataset generated]
Fig. 4 : Prompt-guided augmentation of linguistic features for each petition.

3.5 Machine Learning Models

To predict the rank scores of legal petitions, this study utilized a diverse set of machine learning models. These models were selected for their ability to handle complex data relationships, including non-linearity, high dimensionality, and feature interactions. The selected models include Random Forest, XGBoost, LightGBM, CatBoost, and ElasticNet. Each of these models brings unique characteristics to the task, enabling the study to comprehensively evaluate and identify the best-performing techniques for this domain. The following subsections provide a detailed theoretical explanation of each model.

3.5.1 Random Forest
Random Forest is an ensemble learning technique that combines the outputs of multiple decision trees to produce a robust prediction. By training each tree on a random subset of the data and features, Random Forest reduces the risk of overfitting, which is a common issue in single decision tree models. The ensemble approach aggregates predictions from all trees, either by averaging (for regression) or voting (for classification). Mathematically, for a dataset with n samples and T trees, the final prediction ŷ is given by:

\hat{y} = \frac{1}{T} \sum_{t=1}^{T} h_t(x), \quad (9)

where h_t(x) represents the prediction from the t-th tree. The strength of Random Forest lies in its robustness to noise and its ability to capture non-linear relationships between features and the target variable. Its parallelizable nature makes it computationally efficient for large datasets. In this study, Random Forest demonstrated its effectiveness in identifying complex patterns in the combined numerical and textual embeddings, achieving high predictive accuracy and rank correlation.

3.5.2 XGBoost and LightGBM

XGBoost (Extreme Gradient Boosting) and LightGBM (Light Gradient Boosting Machine) are advanced implementations of gradient boosting, a powerful ensemble learning technique. Gradient boosting sequentially builds weak learners, typically decision trees, and optimizes their combined performance by minimizing a loss function. For a dataset with n samples, the objective function can be expressed as:

\mathcal{L}(\theta) = \sum_{i=1}^{n} \ell(y_i, \hat{y}_i) + \Omega(\theta), \quad (10)

where ℓ is the loss function measuring the error between actual and predicted values, and Ω(θ) is a regularization term penalizing model complexity. XGBoost employs second-order derivatives of the loss function, allowing precise updates during optimization. It also incorporates features like tree pruning and column sampling to improve generalization and computational efficiency.
In contrast, LightGBM uses a histogram-based technique to divide feature values into discrete bins, reducing memory usage and training time significantly. Both models are well-suited for handling large, high-dimensional datasets.

3.5.3 Decision Tree

A Decision Tree is a non-parametric supervised learning algorithm that splits the dataset into subsets based on the value of a feature. A Decision Tree resembles a flowchart, where internal nodes represent feature tests, branches correspond to outcomes, and leaf nodes represent predictions. At each split, the algorithm seeks to maximize the reduction in impurity, measured using metrics such as Gini impurity or entropy. For regression tasks, the Decision Tree minimizes the variance within each split. If T is the tree, N is the number of samples in a node, and ŷ_i is the predicted value, the mean squared error (MSE) at a node is:

\mathrm{MSE}_{\mathrm{node}} = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2. \quad (11)

The tree grows by recursively splitting nodes until a stopping criterion is met, such as a minimum number of samples or a maximum depth. The Decision Tree offers simplicity and interpretability, making it a popular choice for regression problems. In this study, it demonstrated strong predictive performance, achieving a high Spearman rank correlation of 0.980 and a competitive accuracy of 99.072%.

3.5.4 ElasticNet

ElasticNet is a regularized linear regression model that combines L1 (Lasso) and L2 (Ridge) penalties to overcome the limitations of each. The loss function for ElasticNet is defined as:

\mathcal{L}(\beta) = \frac{1}{2N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 + \alpha \lambda \|\beta\|_1 + \frac{(1-\alpha)}{2} \lambda \|\beta\|_2^2, \quad (12)

where ‖β‖_1 is the L1 norm promoting sparsity, ‖β‖_2² is the L2 norm penalizing large coefficients, α ∈ [0, 1] controls the trade-off between L1 and L2 regularization, and λ is the regularization strength. ElasticNet is particularly useful when features are highly correlated or when there are more features than samples. It combines the feature selection benefits of Lasso with the stability of Ridge regression. However, in this study, ElasticNet underperformed compared to non-linear models, with a low Spearman rank correlation of -0.338 and an MSE comparable to other methods. This result highlights its limitations in capturing the non-linear relationships inherent in the dataset.

3.5.5 CatBoost

CatBoost (Categorical Boosting) is a gradient boosting framework explicitly designed to handle categorical data efficiently. Unlike other boosting methods, CatBoost employs an ordered boosting mechanism, which reduces the risk of overfitting. This mechanism uses permutations of the dataset to ensure that the target values in the training process are not leaked into the model. The loss function optimized in CatBoost is given by:

\mathcal{L}(\theta) = \sum_{i=1}^{n} \ell(y_i, \hat{y}_i), \quad (13)

where ℓ is typically the mean squared error for regression tasks. CatBoost also integrates feature combinations and automatic handling of categorical variables, reducing the need for extensive preprocessing. In this study, CatBoost was particularly useful for its ability to handle complex feature interactions in legal data, such as the interaction between procedural timelines and semantic embeddings. Its strong regularization mechanisms ensured stable predictions despite the variability in data distribution.

3.6 Embedding Generation

We extracted textual embeddings for each petition using multiple pretrained transformer models, including RoBERTa, DistilBERT, LegalBERT, MiniLM, FLAN-T5, and Instructor-XL, all sourced via HuggingFace Transformers [49].
For each model, petition texts were tokenized using the model's native tokenizer and passed through the encoder to obtain contextualized hidden states, which were then mean-pooled across all tokens to produce fixed-size vector embeddings. These embeddings were concatenated with four numeric features (gap days, rank score log, word count, and sentence count) before being fed into the regression model. No prompting or generation was performed; the language models were used solely for feature extraction.

4 Results & Discussions

In this section, we first highlight the major performance metrics used to assess our proposed model. We then compare the performance of several supervised machine learning models in the context of five LLM-based text embedding techniques. Finally, we summarize the results of the two cross-validation techniques employed in our work to validate the model. All evaluation artefacts, including test-set predictions, confidence intervals, and metric summary tables across models, are publicly available in our data repository [50].

4.1 Performance Metrics

In our work, we use two types of metrics to measure the performance of this work: a) model evaluation metrics and b) validation techniques.

4.1.1 Evaluation Metrics

To assess model performance comprehensively, multiple metrics were employed, including Mean Squared Error (MSE), Spearman Rank Correlation, and Accuracy.

Mean Squared Error (MSE): This metric measures the average squared difference between predicted and actual values:

$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2. \quad (14)$

Lower MSE values indicate better model performance, with minimal prediction errors.

Spearman Rank Correlation: This metric
evaluates the monotonic relationship between predicted and actual rank scores:

$\rho = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}, \quad (15)$

where $d_i$ is the difference between the ranks of $y_i$ and $\hat{y}_i$. A high $\rho$ value suggests a strong alignment between predicted and actual rankings.

Accuracy: For regression tasks, accuracy was defined as the percentage of predictions within a specified tolerance:

$\mathrm{Accuracy} = \frac{\mathrm{Count}(|y_i - \hat{y}_i| \le \epsilon)}{n} \times 100, \quad (16)$

where $\epsilon$ represents the tolerance threshold. The "Tol-10% Acc." metric counts a prediction as correct if its absolute error is no more than 10% of the true rank score. This relative tolerance accounts for the small magnitude and skewed distribution of the target variable.

4.1.2 Validation Techniques

Monte Carlo Cross-Validation (MCCV): This technique involves randomly splitting the dataset into training and testing sets multiple times. For each iteration, the model is trained on the training set and evaluated on the test set. The average performance across iterations is computed as:

$\bar{\mathcal{L}} = \frac{1}{k}\sum_{i=1}^{k} \mathcal{L}(f_i, D_i^{\mathrm{test}}), \quad (17)$

where $k$ is the number of iterations, $\mathcal{L}$ is the loss function, and $f_i$ is the model trained in the $i$-th iteration.

K-Fold Cross-Validation (KFCV): We conduct k-fold cross-validation to ensure that the model's performance is consistent across different subsets of the data. This helps verify the generalizability of the model. The cross-validation results confirm that the model maintains high accuracy and robustness across multiple folds.

4.2 Comparison of Models' Performance

The comprehensive comparison of model performance is summarized in Tables 4 and 5, which provide a clear view of each model's strengths and areas for improvement. Table 4 reports the major regression metrics: mean squared error, R², Spearman Rank Correlation, and explained variance.
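As a concrete illustration of the evaluation metrics in Eqs. (14)–(16), a plain-Python sketch follows. The function names are ours, and the Spearman formula below assumes no tied ranks, as in Eq. (15).

```python
def mse(y, yhat):
    # Eq. (14): mean squared error.
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def spearman(y, yhat):
    # Eq. (15), assuming no ties in either ranking.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    n = len(y)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(y), ranks(yhat)))
    return 1 - 6 * d2 / (n * (n * n - 1))

def tol_accuracy(y, yhat, eps):
    # Eq. (16): percentage of predictions within tolerance eps.
    hits = sum(1 for a, b in zip(y, yhat) if abs(a - b) <= eps)
    return 100.0 * hits / len(y)

y_true = [0.10, 0.20, 0.30, 0.40]
y_pred = [0.12, 0.19, 0.33, 0.41]
print(spearman(y_true, y_pred))  # identical orderings give rho = 1.0
```

For the relative "Tol-10% Acc." variant, `eps` would be set per-sample to 10% of the true rank score rather than to a single global threshold.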
On the other hand, Table 5 shows the various classification metrics, i.e., test accuracy, K-fold cross-validation MSE, K-fold cross-validation accuracy, and Monte Carlo cross-validation accuracy. This comparative analysis underscores the importance of selecting models based on the specific requirements of the task, such as efficiency or precision.

Table 3: Test-set performance (LightGBM, predefined court split). Numeric-only features already explain nearly all variance; transformer embeddings offer marginal improvements. Note: "Tol-10% Acc." reports the percentage of predictions within 10% relative error. Confidence intervals reflect 1,000 bootstrap resamples.

Features | MSE | MAE | R² | ρ | Expl. Var. | Tol-10% Acc.
Numeric only | 0.00004012 | 0.00004012 | 0.988 | 0.998 | 0.988 | 66.67 ± 9.23
RoBERTa + numeric | 0.00004358 | 0.00004358 | 0.987 | 0.996 | 0.988 | 58.33 ± 9.02
DistilBERT + numeric | 0.00012485 | 0.00012485 | 0.815 | 0.944 | 0.830 | 66.67 ± 12.50
LegalBERT + numeric | 0.00005671 | 0.00005671 | 0.964 | 0.775 | 0.965 | 66.67 ± 12.50
MiniLM + numeric | 0.00011689 | 0.00011689 | 0.905 | 0.825 | 0.907 | 25.00 ± 8.33
Instructor XL + numeric | 0.00020276 | 0.00020276 | 0.640 | 0.956 | 0.704 | 41.67 ± 8.33
FLAN-T5 + numeric | 0.00022341 | 0.00022341 | 0.255 | 0.970 | 0.320 | 16.67 ± 8.33

Table 4: Regression Metrics for Machine Learning Models Across Embeddings

Emb. | Model | MSE | R² | Sp. Corr. | Exp. Var.
DistilBERT | Random Forest | 0.002 | -0.072 | 0.968 | -0.071
DistilBERT | Linear Regression | 0.002 | -0.027 | 0.005 | -0.025
DistilBERT | ElasticNet | 0.002 | -0.001 | -0.338 | 0.000
DistilBERT | Decision Tree | 0.002 | -0.225 | 0.980 | -0.225
DistilBERT | XGBoost | 0.002 | -0.219 | 0.872 | -0.218
DistilBERT | LightGBM | 0.002 | 0.001 | 0.762 | 0.002
DistilBERT | CatBoost | 0.002 | -0.004 | 0.820 | -0.002
LegalBERT | Random Forest | 0.002 | -0.002 | 0.991 | -0.000
LegalBERT | Linear Regression | 0.002 | -0.001 | -0.402 | 0.000
LegalBERT | ElasticNet | 0.002 | -0.001 | -0.338 | 0.000
LegalBERT | Decision Tree | 0.002 | -0.002 | 0.992 | -0.001
LegalBERT | XGBoost | 0.002 | -0.002 | 0.829 | -0.000
LegalBERT | LightGBM | 0.002 | 0.006 | 0.692 | 0.008
LegalBERT | CatBoost | 0.002 | -0.003 | 0.901 | -0.001
MiniLM | Random Forest | 0.002 | -0.002 | 0.991 | -0.000
MiniLM | Linear Regression | 0.002 | -0.001 | -0.380 | 0.000
MiniLM | ElasticNet | 0.002 | -0.001 | -0.338 | 0.000
MiniLM | Decision Tree | 0.002 | -0.002 | 0.992 | -0.001
MiniLM | XGBoost | 0.002 | -0.005 | 0.810 | -0.003
MiniLM | LightGBM | 0.002 | 0.006 | 0.714 | 0.008
MiniLM | CatBoost | 0.002 | -0.003 | 0.904 | -0.001
Flan-T5 | Random Forest | 0.002 | -0.002 | 0.991 | -0.000
Flan-T5 | Linear Regression | 0.002 | -0.001 | -0.295 | 0.000
Flan-T5 | ElasticNet | 0.002 | -0.001 | -0.338 | 0.000
Flan-T5 | Decision Tree | 0.002 | -0.002 | 0.992 | -0.001
Flan-T5 | XGBoost | 0.002 | -0.002 | 0.806 | -0.001
Flan-T5 | LightGBM | 0.002 | 0.007 | 0.671 | 0.008
Flan-T5 | CatBoost | 0.002 | -0.003 | 0.871 | -0.001
E5 | Random Forest | 0.002 | -0.002 | 0.991 | -0.000
E5 | Linear Regression | 0.002 | -0.001 | -0.243 | 0.001
E5 | ElasticNet | 0.002 | -0.001 | -0.338 | 0.000
E5 | Decision Tree | 0.002 | -0.002 | 0.992 | -0.001
E5 | XGBoost | 0.002 | -0.002 | 0.854 | -0.001
E5 | LightGBM | 0.002 | 0.007 | 0.626 | 0.009
E5 | CatBoost | 0.002 | -0.002 | 0.942 | -0.000

Table 5: Classification Metrics for ML Models Across Embeddings

Emb. | Model | Test Accu. | KFCV's MSE | KFCV's Accu. | MCCV's Accu.
DistilBERT | Random Forest | 98.887 | 0.000 | 96.399 | 94.174
DistilBERT | Linear Regression | 56.030 | - | - | -
DistilBERT | ElasticNet | 99.629 | - | - | -
DistilBERT | Decision Tree | 99.072 | 0.001 | 97.809 | 98.534
DistilBERT | XGBoost | 99.258 | 0.001 | 95.470 | 95.288
DistilBERT | LightGBM | 98.701 | - | - | -
DistilBERT | CatBoost | 98.516 | 0.000 | 66.730 | 77.180
LegalBERT | Random Forest | 99.629 | 0.000 | 98.627 | 98.831
LegalBERT | Linear Regression | 99.629 | - | - | -
LegalBERT | ElasticNet | 99.629 | - | - | -
LegalBERT | Decision Tree | 99.629 | 0.000 | 99.406 | 99.499
LegalBERT | XGBoost | 99.629 | 0.001 | 98.998 | 99.091
LegalBERT | LightGBM | 98.701 | - | - | -
LegalBERT | CatBoost | 99.443 | 0.000 | 84.998 | 78.794
MiniLM | Random Forest | 99.629 | 0.000 | 98.886 | 98.980
MiniLM | Linear Regression | 99.629 | - | - | -
MiniLM | ElasticNet | 99.629 | - | - | -
MiniLM | Decision Tree | 99.629 | 0.000 | 99.369 | 99.425
MiniLM | XGBoost | 99.443 | 0.001 | 96.324 | 95.733
MiniLM | LightGBM | 97.588 | - | - | -
MiniLM | CatBoost | 98.887 | 0.000 | 85.333 | 87.087
Flan-T5 | Random Forest | 99.629 | 0.000 | 97.884 | 97.978
Flan-T5 | Linear Regression | 99.629 | - | - | -
Flan-T5 | ElasticNet | 99.629 | - | - | -
Flan-T5 | Decision Tree | 99.629 | 0.001 | 98.923 | 98.961
Flan-T5 | XGBoost | 99.629 | 0.001 | 97.029 | 97.087
Flan-T5 | LightGBM | 98.330 | - | - | -
Flan-T5 | CatBoost | 99.258 | 0.000 | 82.623 | 80.779
E5 | Random Forest | 99.629 | 0.000 | 97.735 | 97.403
E5 | Linear Regression | 99.629 | - | - | -
E5 | ElasticNet | 99.629 | - | - | -
E5 | Decision Tree | 99.629 | 0.000 | 99.295 | 99.295
E5 | XGBoost | 99.258 | 0.001 | 96.472 | 96.160
E5 | LightGBM | 97.403 | - | - | -
E5 | CatBoost | 99.814 | 0.000 | 85.221 | 84.991

Table 4 shows that Random Forest and Decision Tree models achieve the highest Spearman Rank Correlation (≈0.99) across multiple embeddings, indicating their superior performance in petition ranking, while Linear Regression and ElasticNet perform poorly due to their inability to capture non-linear relationships. In Table 5, missing values arise because this is a classification task, making regression-based metrics like MSE and R² inapplicable, as the focus is on ranking discrete petition decisions rather than predicting continuous values. Ablation reveals that numeric features alone (e.g., date gaps and word count) explain 99% of the variance (R² = 0.988) and yield near-perfect ranking (ρ = 0.998).
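The Monte Carlo cross-validation averages reported in the MCCV columns above follow the scheme of Eq. (17). A generic sketch is shown below; the `fit`/`loss` callables and the toy data are placeholders, not the paper's models.

```python
import random

def mccv(fit, loss, data, k=5, test_frac=0.2, seed=0):
    # Eq. (17): average test loss over k random train/test splits.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(k):
        shuffled = data[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * (1 - test_frac))
        model = fit(shuffled[:cut])           # train on the random split
        total += loss(model, shuffled[cut:])  # evaluate on the held-out part
    return total / k

# Toy example: the "model" is just the training mean, the loss is MSE.
data = [1.0, 2.0, 3.0, 4.0, 5.0]
fit = lambda train: sum(train) / len(train)
loss = lambda m, test: sum((x - m) ** 2 for x in test) / len(test)
print(mccv(fit, loss, data))
```

Unlike k-fold cross-validation, the random splits here may overlap across iterations, which is why the two procedures can report different accuracies for the same model.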
Adding transformer-based embeddings produced at most marginal gains (≤0.002 in R²), suggesting that urgency in petitions is largely encoded in structural attributes rather than semantic content.

4.2.1 Model Insights

Our observations reveal that Random Forest and Decision Tree consistently outperformed the other models, especially with embeddings such as LegalBERT and MiniLM. Random Forest achieved a Spearman correlation of up to
0.991 and a test accuracy of 98.887% with DistilBERT embeddings, while Decision Tree exhibited near-perfect performance, with a Spearman correlation of 0.992 and an accuracy of 99.629% for embeddings such as MiniLM and E5. These results underscore the robustness of these models in capturing non-linear interactions and leveraging the latent semantic and procedural information embedded in the dataset. Figure 5 presents a comparative study of performance based on Spearman Rank Correlation and test accuracy of the several machine learning models in the context of five different embedding techniques. As shown in Figure 6, both the K-fold cross-validation and Monte Carlo cross-validation accuracies remain consistently high across different models, underscoring the effectiveness of the hybrid approach in capturing complex semantic and procedural nuances inherent in unstructured legal texts. We observe that LightGBM performed well in certain cases, such as achieving a test accuracy of 98.701% with Flan-T5 embeddings, although its Spearman correlation values were generally lower than those of Random Forest and Decision Tree. A likely cause of this model's comparatively low performance is its histogram-based optimization, which may not have been as effective in capturing the nuances of the dataset's feature space. CatBoost, on the other hand, though generally robust in handling categorical features, exhibited inconsistent performance: it achieved a Spearman correlation of 0.901 with LegalBERT embeddings but struggled during K-fold cross-validation (66.730%). These results suggest that CatBoost's sensitivity to hyperparameters or specific data distributions might have limited its effectiveness in this particular task. The analysis of model performance underlines the importance of employing advanced, non-linear models for tasks involving complex datasets.
Tree-based models, particularly Random Forest and Decision Tree, clearly demonstrated their ability to capture the non-linear relationships and semantic patterns critical for accurate predictions. The relatively strong performance of XGBoost further highlights the effectiveness of gradient-boosting algorithms in handling structured and unstructured data. These models can be particularly valuable in scenarios requiring nuanced feature interactions or dynamic adjustments to data distributions. Our experiments show that temporal and length-based numeric features alone achieve near-perfect ranking performance (ρ = 0.998), while LLM-based embeddings contribute only marginal improvements.

Fig. 5: Comparative performance metrics of the evaluated machine learning models. Figure (a) presents the Spearman Rank Correlation, (b) shows the Test Accuracy for various LLM-based embeddings.

Fig. 6: Comparative performance metrics of the evaluated machine learning models. Figure (a) illustrates the K-Fold Cross-Validation Accuracy, and (b) displays the Monte Carlo Cross-Validation Accuracy for various LLM-based embeddings.

5 Conclusions and Future Works

In this paper, we proposed LLMPR, a novel petition ranking model that leverages large language models and machine learning techniques to improve the prioritization of legal petitions. The increasing backlog of cases in judicial systems, particularly in India, delays justice and burdens the legal framework. Our automated ranking system processes unstructured legal petitions and assigns priority rankings based on textual and numerical features. Using the ILDC
dataset, we applied advanced text embeddings such as DistilBERT, LegalBERT, and MiniLM, combined with numerical features like gap days, rank scores, and word count, to enhance ranking accuracy. Our evaluation showed that Random Forest and Decision Tree models outperformed the others, achieving high accuracy (99%) and Spearman rank correlation (0.99). The results demonstrate that AI-driven petition ranking can help optimize judicial workflows, reduce delays, and ensure that urgent cases receive prompt attention. However, our study has certain limitations. First, the model was trained on a single-language dataset (English), which restricts its applicability in multilingual legal systems. Second, our dataset is limited to Indian legal petitions, making it necessary to evaluate its effectiveness on other legal frameworks. Additionally, while our model ranks petitions effectively, it does not provide explanations for its decisions, which may impact transparency and trust among legal professionals. Finally, contextual variations in legal language could affect ranking performance, requiring further refinement. This study also illustrates that procedural metadata such as filing gaps and document length are strong proxies for urgency in legal petitions. While LLM-based embeddings may offer semantic nuance, their contribution to prioritization is minimal when robust temporal features are available. To overcome these limitations, future research will focus on expanding the model to multilingual datasets for broader applicability across judicial systems. Furthermore, enhancing explainability with interpretable AI techniques will improve transparency. Additionally, exploring deep learning architectures such as LSTMs and GRUs will refine the model's ability to process complex legal texts.

Declaration

Author contributions. Avijit Gayen: Conceptualization, Methodology, Experiment, Writing. Somyajit Chakraborty: Writing, Data collection, Experiment.
Mainak Sen: Conceptualization, Methodology, Writing. Soham Paul: Data collection, Experiment. Angshuman Jana: Supervision, Conceptualization, Writing - review & editing.

Data availability. The dataset used is publicly available.

Conflict of interest. The authors declare that they have no conflict of interest.

References

[1] A. Melcarne, G.B. Ramello, et al., Is justice delayed justice denied? An empirical approach. International Review of Law and Economics 65, 105953 (2021)
[2] K.A. Joshi, P. Mathur, R. Koranga, L. Singh, in Proceedings of the 5th International Conference on Information Management & Machine Intelligence (2023), pp. 1–7
[3] M. Singh, in 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN) (IEEE, 2018), pp. 128–131
[4] R. Baruah, R. Arora, Judicial accountability and judicial independence: The touchstone of Indian democracy. Available at SSRN 2011755 (2012)
[5] A. Kumar, A. Singh, The impact of political influence and power on the Indian judiciary. IJLS 9, 1 (2023)
[6] S.L. Cummings, D.L. Rhode, Public interest litigation: Insights from theory and practice. Fordham Urb. LJ 36, 603 (2009)
[7] D. Hazra, What does (and does not) affect crime in India? International Journal of Social Economics 47(4), 503–521 (2020)
[8] Law Commission of India. https://lawcommissionofindia.nic.in/. [Accessed: 2024-02-21]
[9] R.C. Lawlor, What computers can do: Analysis and prediction of judicial decisions. American Bar Association Journal pp. 337–344 (1963)
[10] L. Vercosa, V. Silva, J. Cruz, C. Bastos-Filho, B.L. Bezerra, Investigation of lawsuit process duration using machine learning and process mining. Discover Analytics 2(1), 9 (2024)
[11] B.A. Sokhansanj,
G.L. Rosen, Predicting institution outcomes for inter partes review (IPR) proceedings at the United States Patent Trial & Appeal Board by deep learning of patent owner preliminary response briefs. Applied Sciences 12(7), 3656 (2022)
[12] M.A.F. Faccioni, M. da Silva Lisboa, M.L. Rocha, D.N. Prata, G.V. Barbosa, in 2023 Fifth International Conference on Transdisciplinary AI (TransAI) (IEEE, 2023), pp. 110–113
[13] V. Malik, R. Sanjay, S.K. Nigam, K. Ghosh, S.K. Guha, A. Bhattacharya, A. Modi, in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (Association for Computational Linguistics, Online, 2021), pp. 4046–4062. https://doi.org/10.18653/v1/2021.acl-long.313. URL https://aclanthology.org/2021.acl-long.313
[14] A. Abdillah, M. Din, M.Y.A. Kadir, Indonesian judicial system on probation. International Journal of Multicultural and Multireligious Understanding 10(1), 458–468 (2023)
[15] A. Kholiq, I. Halimatusa'diyah, Does gender blindness improve gender equality? Female judges and the glass ceiling effect in the Islamic judicial system in Indonesia. Social & Legal Studies 32(1), 139–158 (2023)
[16] R. Sil, Alpana, A. Roy, A review on applications of artificial intelligence over Indian legal system. IETE Journal of Research 69(9), 6029–6038 (2023)
[17] E. Ash, S. Asher, A. Bhowmick, S. Bhupatiraju, D.L. Chen, T. Devi, C. Goessmann, P. Novosad, B. Siddiqi, Measuring gender and religious bias in the Indian judiciary (2022)
[18] R. Ippoliti, G. Tria, Efficiency of judicial systems: model definition and output estimation. Journal of Applied Economics 23(1), 385–408 (2020)
[19] E. Sundari, A. Retnowati, The weakness of the control system for fighting corruption in the judicial process: The case of Indonesia. International Journal of Social, Policy and Law 2(1), 93–102 (2021)
[20] S.
Susanto, E-court as the prevention efforts against the Indonesia judicial corruption. Yustisia Jurnal Hukum 9(1), 116–138 (2020)
[21] M. Čehulić, et al., Perspectives of legal culture: A systematic literature review. Revija za sociologiju 51(2), 257–282 (2021)
[22] M. Barno, D.N. Martínez, K.R. Williams, Exploring alternatives to cash bail: An evaluation of Orange County's pretrial assessment and release supervision (PARS) program. American Journal of Criminal Justice 45, 363–378 (2020)
[23] R.C. Farrell, An excess of methods: Identifying implied fundamental rights in the Supreme Court. St. Louis U. Pub. L. Rev. 26, 203 (2007)
[24] N. Chawla, B. Kumar, E-commerce and consumer protection in India: the emerging trend. Journal of Business Ethics 180(2), 581–604 (2022)
[25] T. Sourdin, B. Li, D.M. McNamara, Court innovations and access to justice in times of crisis. Health Policy and Technology 9(4), 447–453 (2020)
[26] Q.S. Rasheed, A.K. Sharma, An alternative proposal of justice: Muslim women activists and socio-legal realities in India. Journal of International Women's Studies 22(1), 270–292 (2021)
[27] M. Smith, Integrating technology in contemporary legal education. The Law Teacher 54(2), 209–221 (2020)
[28] H. Zhong, C. Xiao, C. Tu, T. Zhang, Z. Liu, M. Sun, How does NLP benefit legal system: A summary of legal artificial intelligence. arXiv preprint arXiv:2004.12158 (2020)
[29] C. Shi, T. Sourdin, B. Li, in IJCA, vol. 12 (HeinOnline, 2021), p. 1
[30] D. Putra, A modern
judicial system in Indonesia: legal breakthrough of e-court and e-legal proceeding. Jurnal Hukum dan Peradilan 9(2), 275–297 (2020)
[31] I. Benedetto, L. Cagliero, M. Ferro, F. Tarasconi, C. Bernini, G. Giacalone, Leveraging large language models for abstractive summarization of Italian legal news. Artificial Intelligence and Law pp. 1–21 (2025)
[32] Z. Wang, Y. Ding, C. Wu, Y. Guo, W. Zhou, Causality-inspired legal provision selection with large language model-based explanation. Artificial Intelligence and Law pp. 1–25 (2024)
[33] R. Li, A. Züfle, L. Zhao, G. Lamprianidis, in Proceedings of the 1st ACM SIGSPATIAL Workshop on Analytics for Local Events and News (2017), pp. 1–4
[34] F. Sun, Y. Zuo, Autonomous classification and decision-making support of citizen e-petitions based on Bi-LSTM-CNN. Mathematical Problems in Engineering 2022(1), 9451108 (2022)
[35] D. Buryakov, M. Kovacs, U. Serdült, V. Kryssanov, in Proceedings of the 25th Annual International Conference on Digital Government Research (2024), pp. 156–164
[36] S. Ahmad, M.Z. Asghar, F.M. Alotaibi, Y.D. Al-Otaibi, A hybrid CNN+BiLSTM deep learning-based DSS for efficient prediction of judicial case decisions. Expert Systems with Applications 209, 118318 (2022)
[37] Z. Yang, J. Feng, Explainable multi-task convolutional neural network framework for electronic petition tag recommendation. Electronic Commerce Research and Applications 59, 101263 (2023)
[38] H.T. Nguyen, A brief report on LawGPT 1.0: A virtual legal assistant based on GPT-3. arXiv preprint arXiv:2302.05729 (2023)
[39] Q. Huang, M. Tao, C. Zhang, Z. An, C. Jiang, Z. Chen, Z. Wu, Y. Feng, Lawyer LLaMA technical report. arXiv preprint arXiv:2305.15062 (2023)
[40] B. Clavié, A. Gheewala, P. Briton, M. Alphonsus, R. Laabiyad, F. Piccoli, LegalMFiT: Efficient short legal text classification with LSTM language model pre-training. arXiv preprint arXiv:2109.00993 (2021)
[41] A. Hurst, A. Lerer, A.P. Goucher, A. Perelman, A. Ramesh, A.
Clark, A. Ostrow, A. Welihinda, A. Hayes, A. Radford, et al., GPT-4o system card. arXiv preprint arXiv:2410.21276 (2024)
[42] V. Sanh, L. Debut, J. Chaumond, T. Wolf, DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR abs/1910.01108 (2019). URL http://arxiv.org/abs/1910.01108
[43] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
[44] W. Wang, F. Wei, L. Dong, H. Bao, N. Yang, M. Zhou, MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. CoRR abs/2002.10957 (2020). URL https://arxiv.org/abs/2002.10957
[45] H.W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, Y. Li, X. Wang, M. Dehghani, S. Brahma, A. Webson, S.S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, A. Castro-Ros, M. Pellat, K. Robinson, D. Valter, S. Narang, G. Mishra, A. Yu, V. Zhao, Y. Huang, A. Dai, H. Yu, S. Petrov, E.H. Chi, J. Dean, J. Devlin, A. Roberts, D. Zhou, Q.V. Le, J. Wei, Scaling instruction-finetuned language models (2022). URL https://arxiv.org/abs/2210.11416
[46] I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras, I. Androutsopoulos, LEGAL-BERT: The muppets straight out of law school. CoRR abs/2010.02559 (2020). URL https://arxiv.org/abs/2010.02559
[47] L. Wang, N. Yang, X. Huang, B. Jiao,
L. Yang, D. Jiang, R. Majumder, F. Wei, Text embeddings by weakly-supervised contrastive pre-training (2024). URL https://arxiv.org/abs/2212.03533
[48] F. Pourpanah, M. Abdar, Y. Luo, X. Zhou, R. Wang, C.P. Lim, X.Z. Wang, Q.J. Wu, A review of generalized zero-shot learning methods. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(4), 4051–4070 (2022)
[49] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, et al., HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771 (2019)
[50] S. Chakraborty, A. Gayen, M. Sen, A. Jana, Evaluation artefacts for llm@pr: A LLM-driven petition ranking framework (2025). https://doi.org/10.5281/zenodo.15496402
arXiv:2505.21699v1 [eess.IV] 27 May 2025

STA-Risk: A Deep Dive of Spatio-Temporal Asymmetries for Breast Cancer Risk Prediction

Zhengbo Zhou⋆, Dooman Arefan†, Margarita Zuley†, Jules Sumkin†, Shandong Wu⋆†§

⋆Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA, USA
†Department of Radiology, University of Pittsburgh, Pittsburgh, PA, USA
§Department of Biomedical Informatics and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA

Abstract. Predicting the risk of developing breast cancer is an important clinical tool to guide early intervention and tailor personalized screening strategies. Early risk models have limited performance, and recently machine learning-based analysis of mammogram images has shown encouraging risk prediction effects. These models, however, are limited to the use of a single exam or tend to overlook nuanced breast tissue evolvement in spatial and temporal details of longitudinal imaging exams that are indicative of breast cancer risk. In this paper, we propose STA-Risk (Spatial and Temporal Asymmetry-based Risk Prediction), a novel Transformer-based model that captures fine-grained mammographic imaging evolution simultaneously from bilateral and longitudinal asymmetries for breast cancer risk prediction. STA-Risk is innovative in its side encoding and temporal encoding to learn spatial-temporal asymmetries, regulated by a customized asymmetry loss. We performed extensive experiments with two independent mammogram datasets and achieved superior performance over four representative SOTA models for 1- to 5-year future risk prediction. Source codes will be released upon publishing of the paper.

Keywords: Breast cancer · Risk prediction · Mammography · Asymmetry · Deep learning.

1 Introduction

Breast cancer remains the most common cancer among women worldwide [14]. Early detection of breast cancer can significantly improve treatment outcomes and patient survival.
In the setting of mammographic screening, assessing an individual's risk of developing breast cancer is vital for early intervention and tailoring personalized screening strategies (e.g., risk-stratified screening frequency and/or supplemental screening). Early risk models such as the Gail model [11], Claus model [3], and Tyrer-Cuzick model [4] only use basic personal and clinical data, showing limited performance. Machine learning-based analysis of screening mammogram images, such as the well-known Mirai model [17], has shown superior performance over those early models. The Mirai model is, however, limited to the use of a single time-point mammogram exam as input, while most of the women in screening have multiple longitudinal exams.

Taking advantage of longitudinal screening mammograms for breast cancer risk prediction has significant clinical value, as seen in several recent risk models. The PRIME+ [9] model used prior mammograms with a Transformer decoder, outperforming several methods that use only a single time-point exam. The LRP-NET [1] followed developing asymmetry to integrate information from longitudinal mammogram exams and also showed increased performance over using a single mammogram exam. Karaman et al. [6] extended the Mirai model [17] to process multi-year mammogram exams, demonstrating that incorporating temporal trends can yield higher prediction performance. While the ideas of using multiple longitudinal exams aim to capture breast tissue evolvement over time, effective methods that can capture fine-grained nuances of breast tissue variations in both the spatial and temporal dimensions to assess patient-specific breast cancer risk remain under-developed.

Bilateral asymmetry in mammogram images has been clinically recognized as a biomarker of risk for developing breast cancer, as subtle left-right breast
https://arxiv.org/abs/2505.21699v1
differences often appear before overt lesions emerge [12]. Early work by Zheng et al. [19] [20] illustrated that simple measures of contralateral breast differences, such as mammographic density or pixel-level fluctuation, are associated with short-term breast cancer risk. More recently, CNN- and RNN-based approaches have been developed for refined asymmetry modeling, sometimes integrating multi-year exams [2] [1]. RADIFUSION [18] extended similar ideas via specialized attention blocks and multi-view gating to capture longitudinal signals. BilAD [13] used paired breast tomosynthesis images from both breasts to detect location-specific tissue differences. However, previous work on quantifying bilateral asymmetry is limited to oversimplified left-right breast comparisons, such as arithmetic subtraction or left-right feature pooling, which tend to overlook or preclude the potentially rich and predictive information in spatial and temporal imaging details. Moreover, these studies are mostly only applicable to a single time-point exam or an ordinary aggregation of multiple exams. In evaluating longitudinal mammogram exams, subtle but clinically relevant tissue asymmetry patterns may emerge and evolve gradually, which requires a systematic mechanism capable of tracking in images both how each breast's tissue changes on its own and how these changes constitute asymmetry with respect to the contralateral breast and over multiple sequential exams. In this paper, we propose STA-Risk (Spatial and Temporal Asymmetry-based Risk Prediction), a novel Transformer-based model that simultaneously captures both spatial and temporal asymmetries for breast cancer risk prediction. We aim to harness the bilateral (deviation between two breasts) and longitudinal (variations over time) cues to capture a more expressive and fine-grained representation of mammographic imaging evolution, and associate them with the risk of developing breast cancer.
The main contributions of this work are summarized as:

– 1) We proposed a novel model structure, STA-Risk, that can systematically capture a more expressive and fine-grained representation of mammographic imaging evolution along the bilateral and longitudinal dimensions for breast cancer risk prediction.
– 2) We contributed a unified method that integrates side encoding and temporal encoding to learn spatial-temporal asymmetries, regulated by a customized asymmetry loss.
– 3) We performed extensive experiments with two independent mammogram datasets and achieved superior performance over four representative SOTA models for 1- to 5-year future risk prediction.

Fig. 1. The proposed STA-Risk architecture that aims to capture spatial and temporal asymmetry from longitudinal screening mammogram exams for predicting breast cancer risk. The key components include side-aware spatial encoding, temporal attention, and a customized asymmetry loss.

2 Method

The STA-Risk model (Figure 1) is characterized by three key components:

Side encoding (left vs. right breast). A learnable side-identifying vector is embedded into each patch token, preserving side-specific features and allowing the model to attend to side-preserved asymmetric cues.

Temporal encoding. Over multiple longitudinal exams, temporal encoding is designed to capture the evolving tissue patterns in each breast, taking into account when and how changes occur.

Asymmetry loss. A novel loss function is customized to selectively regularize left-right differences, preventing the model from discarding potentially valuable shared representations while guiding it to focus on clinically relevant asymmetries.

2.1 Overall Network Architecture

In our setting each patient has longitudinal mammogram exams imaged at T
consecutive time-points (years), with each exam consisting of four standard mammographic images/views: left cranio-caudal (LCC), left mediolateral oblique (LMLO), right cranio-caudal (RCC), and right mediolateral oblique (RMLO). As illustrated in Figure 1, the STA-Risk pipeline comprises a Spatial Encoder and a Temporal Encoder. At each time point, a side encoding is used to differentiate the left from the right breast. These inputs are then processed by the Spatial Encoder, which employs attention mechanisms to extract relevant features and capture cross-side asymmetry. The resulting outputs are aggregated into a single-time-point feature vector that encapsulates the imaging patterns of each breast for an individual exam. Next, the multiple single-time-point feature vectors are fed into the Temporal Encoder, which relies on temporal encoding and side encoding to maintain the chronological imaging information and retain the breast-side distinction for consistent cross-breast comparisons. By learning how local imaging structures of breast tissue evolve over time and across the left and right breasts, the model detects progressive spatiotemporal changes and lateral discrepancies that are potentially related to breast cancer risk. The final output is then passed to a Risk Prediction Module, supervised by two losses: a cross-entropy loss that refines cancer-vs-normal status discrimination and a customized asymmetry loss that drives the network to exploit left-right and temporal breast tissue morphological differences. This prediction module utilizes an additive hazard [17] [6] and integrates the complete history embedding, denoted as h, to estimate future risk of developing breast cancer. The cumulative risk over k years (where k ∈ {1, 2, ..., 5}, as we target 1- to 5-year risk prediction) is computed by summing the baseline risk β0 with the annual hazard terms β1, β2, ..., βk, where each βj corresponds to the incremental risk for the j-th future year.
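The additive-hazard readout described above can be sketched as follows. The numeric hazard values are made up for illustration, and whether a squashing nonlinearity is applied on top of the sum is a modeling choice left open here; typically the hazard terms would also be constrained non-negative so that cumulative risk never decreases with k.

```python
def cumulative_risk(beta0, hazards, k):
    # Cumulative k-year risk = baseline beta_0 plus the first k
    # annual hazard terms beta_1..beta_k (additive-hazard sketch).
    assert 1 <= k <= len(hazards)
    return beta0 + sum(hazards[:k])

# Hypothetical baseline and annual hazard terms for years 1..5.
b0 = 0.01
betas = [0.02, 0.015, 0.012, 0.010, 0.008]
print([round(cumulative_risk(b0, betas, k), 3) for k in range(1, 6)])
# -> [0.03, 0.045, 0.057, 0.067, 0.075]
```

With non-negative hazards, the five outputs form a non-decreasing risk curve, matching the intuition that the chance of developing cancer within k years cannot shrink as k grows.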
2.2 Side Encoding

A key aspect of our method is to preserve the breast-side information so as to enable side-consistent bilateral analysis between images of the left and right breasts. A deliberately different design choice here is to not use direct subtraction of pixel intensities to measure left-right differences, as doing so would discard from the outset some of the side-specific imaging details that reflect subtle changes yet matter for risk assessment. Instead, we introduce learnable side embeddings, denoted by $v_{\text{left}}$ and $v_{\text{right}}$, which are added directly to the feature embeddings. Specifically, at each imaging time-point t, each view mammogram $X^{(t)}_{\text{view}}$ is processed by the tiny Swin Transformer [10] feature extractor (denoted Swin-T), and a side embedding $v_{\text{side}}$ is incorporated into the extracted features, expressed as:

$n^{(t)}_{\text{view}} = \text{Swin-T}(X^{(t)}_{\text{view}}) + v_{\text{side}}.$ (1)

We incorporate side encoding at two key places in the pipeline. First, prior to the cross-attention step in the Spatial Encoder, we attach a side embedding to the feature embedding to preserve left-right breast identity during spatial processing. Second, before entering the Temporal Encoder, each exam-specific feature vector is again augmented with a side embedding, enabling the model to monitor how each breast evolves over time. This two-stage approach empowers the network to detect subtle asymmetries in both the single-exam (spatial) structure and the longitudinal (temporal) progression of breast tissue. Overall, our approach offers two main advantages. 1) Preserving side-specific features. By encoding side identity with a separate embedding, our
model retains a fine-grained feature representation of left- or right-specific imaging traits and analyzes them in a nonlinear manner. 2) Contextual modeling of asymmetry. The Transformer learns nuanced relationships, where a small local change in one breast may be highly predictive of abnormality, while bilaterally symmetric changes may actually be less alarming. Notably, this level of contextual nuance cannot be captured by simple arithmetic subtraction of the left and right breasts.

2.3 Temporal Encoding

Sequential screening mammogram exams may occur at irregular intervals (e.g., 0.5, 1, 2, or even 3 years) due to irregular hospital visits, making purely index-based encodings suboptimal. Inspired by [15] and [5], to capture the irregular intervals between exams we encode temporal information based on the time difference relative to the "present" exam at a reference point (denoted by $y_p$). In our formulation, exam dates are converted from years to months. For the t-th exam, let $y_t$ denote its year; the relative time is calculated as $\tau_t = 12 \times (y_t - y_p)$. This way, for the present exam $\tau_t = 0$, and for prior exams $\tau_t$ is negative, representing the number of months prior to the current exam. We then derive a d-dimensional temporal embedding, denoted TEmb($\tau_t$), by encoding each temporal input using alternating sine and cosine functions over the embedding dimensions, where d is the total dimensionality of the temporal embedding vector:

$\text{TEmb}(\tau_t)_{2i} = \sin\!\left(\frac{\tau_t}{10000^{2i/d}}\right), \quad \text{TEmb}(\tau_t)_{2i+1} = \cos\!\left(\frac{\tau_t}{10000^{2i/d}}\right), \quad i = 0, 1, \ldots, \frac{d}{2} - 1.$ (2)

This formulation ensures that exams that are temporally closer to the present exam have similar embeddings, while those further apart yield more distinct representations. The resulting temporal embedding is then added to the spatial and side encodings to form a unified representation for each exam.
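The sinusoidal temporal embedding of Eq. (2), together with the additive side embedding of Eq. (1), can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the embedding dimensionality d is hypothetical, and random vectors stand in for the learned side embeddings and Swin-T features:

```python
import numpy as np

def temporal_embedding(tau_t, d):
    """Sinusoidal temporal embedding of Eq. (2): for relative time tau_t
    (in months; 0 for the present exam, negative for priors), even dimensions
    use sin and odd dimensions use cos at frequencies 1 / 10000^(2i/d)."""
    i = np.arange(d // 2)
    angles = tau_t / np.power(10000.0, 2.0 * i / d)
    emb = np.empty(d)
    emb[0::2] = np.sin(angles)
    emb[1::2] = np.cos(angles)
    return emb

d = 8  # embedding dimensionality (hypothetical; the paper does not fix d here)
present = temporal_embedding(0, d)      # tau = 0 for the "present" exam
prior_1y = temporal_embedding(-12, d)   # an exam 12 months earlier

# Side embeddings (Eq. (1)) are learnable vectors added to the extracted
# features; random vectors stand in for the learned quantities here.
rng = np.random.default_rng(0)
v_left, v_right = rng.normal(size=d), rng.normal(size=d)
swin_features = rng.normal(size=d)               # stand-in for Swin-T(X) output
left_token = swin_features + v_left + present    # unified per-exam representation
```

Consistent with the property stated after Eq. (2), the embedding of an exam 1 month before the present lies closer to TEmb(0) than that of an exam 12 months before.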
6 2.4 Asymmetry Loss Following the Spatial Encoder, each mammogram exam is reduced to two fea- ture embeddings—one for the left and the other for the right breast—yielding {z(t) left,z(t) right}for each of the Ttime points t∈ {1,2,3, ...T}, following cross- attention between the corresponding n(t) CCandn(t) MLOviews. These embeddings are fed into a Temporal Encoder. The final output at each time point encapsu- lates both local spatial information and initial left–right differences. To quantify cross-breast asymmetry, we measure the Euclidean distance between the left and right embeddings at each time point t, Dt=∥z(t) left−z(t) right∥, (3) and compute the average DoverTtime points. In addition, we incorporate lon- gitudinal (temporal) asymmetry by assessing how each breast side’s embedding changes between consecutive exams, for instance ∆(left) t =∥z(t) left−z(t+1) left∥(and similarly for the right side ).(4) For cancer cases ( y= 1), we expect both larger cross-breast discrepancies and greater exam-to-exam changes, whereas normal cases ( y= 0) should exhibit smaller cross-breast differences and less significant temporal variation. We com- pute the average ∆overTtime points. We employ a margin-based hinge loss to provide target-specific guidance on these distances: Lasym =( max 0, m1−D + max 0, m′ 1−∆ ,ify= 1 max 0,D−m2 + max 0,∆−m′ 2 ,ify= 0,(5) where y∈ {0,1}indicates the status of cancer vs. normal, and m1, m2, m′ 1, m′ 2 are
user-defined margins. Letting $\mathcal{L}_{\text{primary}}$ denote the main objective, we integrate the two to customize a total loss:

$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{primary}} + \lambda \mathcal{L}_{\text{asym}},$ (6)

where $\mathcal{L}_{\text{asym}}$ combines both the cross-breast and longitudinal terms, and λ balances the strength of the asymmetry constraints. This joint formulation enables the STA-Risk model to leverage bilateral discrepancies and progressive temporal changes in predicting risk.

3 Experiments

3.1 Study Cohorts and Datasets

Our experiments used two independent patient cohorts and imaging datasets. The first is the Karolinska Case-Control (CSAW-CC) Dataset [16], which is part of the Cohort of Screen-Aged Women (CSAW). The CSAW-CC dataset was specifically curated for developing breast-imaging-based AI tools. It includes women aged 40–74 years who underwent mammographic screening between 2008 and 2016 using Hologic imaging systems. To mitigate potential bias in the risk prediction due to early cancer signs or early-detectable cancers, patients diagnosed with breast cancer within six months following the "present" screening exam were excluded. Our analysis included subjects who had at least two sequential screening exams. The final CSAW-CC cohort consisted of 406 breast cancer cases (all biopsy-proven) and 6,053 normal controls, with inter-exam intervals ranging from 12 to 36 months. The second dataset (denoted the Independent Dataset) is a retrospectively collected case-control cohort from a different hospital, covering individuals who participated in routine breast cancer screening from 2007 to 2014, also using Hologic systems. We have a data use agreement for this not-publicly-available dataset. This cohort comprises 293 breast cancer cases (all biopsy-proven) and 297 normal controls (with at least 1-year follow-up to ensure normal status). Each subject had at least two sequential screening mammogram exams, with inter-exam intervals ranging from 12 to 24 months.
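Returning to the asymmetry loss of Section 2.4, the computation of Eqs. (3)-(5) can be sketched as below. One assumption is made explicit: the averaged $\bar{\Delta}$ is taken to pool the consecutive-exam changes of both sides, which the text leaves implicit. Margins default to 1, the values used in Section 3.2:

```python
import numpy as np

def asymmetry_loss(z_left, z_right, y, m1=1.0, m2=1.0, m1p=1.0, m2p=1.0):
    """Margin-based hinge loss of Eqs. (3)-(5).
    z_left, z_right: (T, d) arrays of per-exam embeddings, T >= 2.
    y: 1 for cancer, 0 for normal. Margins default to 1 as in Section 3.2."""
    # Cross-breast distances D_t, averaged over the T exams (Eq. (3)).
    D_bar = np.mean(np.linalg.norm(z_left - z_right, axis=1))
    # Consecutive-exam changes per side (Eq. (4)); we assume the averaged
    # Delta pools both sides' changes (an assumption, see lead-in).
    delta_left = np.linalg.norm(z_left[:-1] - z_left[1:], axis=1)
    delta_right = np.linalg.norm(z_right[:-1] - z_right[1:], axis=1)
    Delta_bar = np.mean(np.concatenate([delta_left, delta_right]))
    if y == 1:  # cancer: penalize distances that fall below the margins
        return max(0.0, m1 - D_bar) + max(0.0, m1p - Delta_bar)
    # normal: penalize distances that exceed the margins
    return max(0.0, D_bar - m2) + max(0.0, Delta_bar - m2p)
```

Per Eq. (6), this term would then be added to the primary objective as $\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{primary}} + \lambda \mathcal{L}_{\text{asym}}$, with λ = 0.01 in the experiments.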
3.2 Implementation Details

STA-Risk models were trained to predict 1- to 5-year breast cancer risk using sequential screening mammograms. Each mammogram exam in a patient's data is treated as a reference time-point (Prior 0), from which we traced backward up to a maximum of three prior exams (Prior 1, Prior 2, Prior 3), with irregular intervals of 12–36 months between consecutive exams. Patient outcomes (i.e., cancer vs. normal status) were determined based on the next follow-up exam occurring k years after Prior 0, where k corresponds to the prediction horizon (1–5 years). All dataset splits were rigorously performed at the patient level to prevent data leakage.

We employed patient-wise 5-fold cross-validation to evaluate the performance of the proposed STA-Risk model. In each fold, the data is split into training and testing sets in an 80-20 ratio. STA-Risk was benchmarked against several related and representative risk models, including LRP-NET [1], Prime+ [9], and LoMaR [6], which all work on longitudinal exams. We also compared to the Mirai [17] model; although it works only on a single time-point exam, it is a well-known model and serves here as a baseline to show the benefits of using longitudinal data. To focus model learning on breast regions, as a preprocessing step we used Libra [8][7] to segment the breasts first. To mitigate class imbalance, we adopted a reweighted cross-entropy loss function. The model was trained for 30 epochs with a batch size of 32, and the best checkpoint was selected via
a grid search over learning rates of 5e-5 and 1e-5. All experiments were conducted on an NVIDIA Tesla A100 GPU, courtesy of our institution's computing resources. The parameter λ for the asymmetry loss was set to 0.01, and the margins m1, m2, m′1, and m′2 were all set to 1, as determined empirically through experiments. Model performance was evaluated using the C-index and the Area Under the ROC Curve (AUC), with mean AUCs and standard deviations computed over the 5-fold cross-validation for predicting the 1- to 5-year risk.

Table 1. Prediction performance comparisons of the proposed STA-Risk model to other models on the two datasets.

CSAW-CC Dataset
Model         C-index      1-year AUC   2-year AUC   3-year AUC   4-year AUC   5-year AUC
Mirai [17]    0.687±0.02   0.705±0.03   0.680±0.02   0.664±0.03   0.664±0.02   0.630±0.01
LRP-NET [1]   0.670±0.02   0.692±0.03   0.688±0.02   0.654±0.01   0.654±0.01   0.636±0.02
LoMaR [6]     0.696±0.01   0.709±0.02   0.708±0.01   0.689±0.02   0.686±0.03   0.675±0.03
Prime+ [9]    0.683±0.02   0.698±0.01   0.697±0.01   0.685±0.01   0.681±0.01   0.672±0.01
STA-Risk      0.722±0.01   0.749±0.02   0.744±0.01   0.706±0.02   0.704±0.03   0.694±0.02

Independent Dataset
Model         C-index      1-year AUC   2-year AUC   3-year AUC   4-year AUC   5-year AUC
Mirai [17]    0.685±0.02   0.685±0.03   0.660±0.03   0.670±0.02   0.655±0.03   0.653±0.02
LRP-NET [1]   0.676±0.02   0.683±0.03   0.665±0.03   0.645±0.03   0.640±0.02   0.630±0.03
LoMaR [6]     0.706±0.02   0.717±0.02   0.691±0.02   0.661±0.03   0.634±0.02   0.620±0.02
Prime+ [9]    0.699±0.02   0.702±0.01   0.679±0.03   0.644±0.03   0.618±0.03   0.597±0.02
STA-Risk      0.732±0.02   0.744±0.02   0.717±0.03   0.684±0.03   0.662±0.03   0.654±0.03

Table 2. Ablation study results on the STA-Risk model components (Side – side encoding; Tmp – temporal encoding; Asy – asymmetry loss).

CSAW-CC Dataset
Side  Tmp  Asy   C-index      1-year AUC   2-year AUC   3-year AUC   4-year AUC   5-year AUC
✗     ✗    ✗     0.693±0.01   0.706±0.01   0.707±0.01   0.684±0.01   0.691±0.01   0.671±0.02
✓     ✗    ✗     0.699±0.01   0.714±0.01   0.715±0.02   0.696±0.01   0.691±0.02   0.679±0.02
✗     ✗    ✓     0.695±0.01   0.712±0.01   0.705±0.01   0.684±0.01   0.680±0.01   0.675±0.01
✓     ✗    ✓     0.704±0.02   0.718±0.01   0.715±0.01   0.697±0.01   0.693±0.01   0.683±0.01
✓     ✓    ✗     0.705±0.01   0.726±0.01   0.722±0.01   0.684±0.01   0.686±0.02   0.674±0.01
✓     ✓    ✓     0.722±0.01   0.749±0.01   0.744±0.01   0.706±0.02   0.704±0.02   0.694±0.02

Independent Dataset
Side  Tmp  Asy   C-index      1-year AUC   2-year AUC   3-year AUC   4-year AUC   5-year AUC
✗     ✗    ✗     0.707±0.02   0.713±0.02   0.688±0.02   0.653±0.03   0.628±0.02   0.614±0.02
✓     ✗    ✗     0.717±0.02   0.729±0.02   0.704±0.02   0.665±0.03   0.639±0.03   0.622±0.03
✗     ✗    ✓     0.725±0.02   0.730±0.02   0.706±0.04   0.675±0.04   0.650±0.03   0.632±0.03
✓     ✗    ✓     0.728±0.02   0.739±0.03   0.711±0.04   0.680±0.04   0.655±0.04   0.643±0.04
✓     ✓    ✗     0.725±0.02   0.730±0.02   0.706±0.04   0.675±0.04   0.650±0.03   0.632±0.03
✓     ✓    ✓     0.732±0.02   0.744±0.02   0.717±0.03   0.684±0.03   0.662±0.03   0.654±0.03

3.3 Results

Table 1 presents the performance comparisons of the proposed STA-Risk model on the two datasets. As can be seen, our model consistently outperformed the compared models in predicting breast cancer risk across multiple time horizons. On the CSAW-CC dataset, STA-Risk achieves the highest C-index (0.722±0.01), surpassing Mirai (0.687±0.02), LRP-NET (0.670±0.02), Prime+ (0.683±0.02), and LoMaR (0.696±0.01). In terms of AUC, STA-Risk shows higher performance for all future risks predicted from 1 to 5 years. A similar outperforming pattern of our STA-Risk model is also observed on the Independent dataset. These results collectively verify the robustness and generalizability of our model across the two different datasets.

Fig. 2. (Left) Visualization of the differences when using STA-Risk vs. without side encoding and asymmetry loss. (Right) Illustrative visualization of the effects of the STA-Risk model.

Table 2 shows the ablation effects of the key components of our model. As can be seen, the combination of the three components yielded the highest performance. Furthermore, excluding temporal attention significantly reduces longer-term predictability, highlighting the importance of capturing breast tissue's longitudinal evolvement and feature dependencies. Similarly, excluding the asymmetry loss or side encoding also lowers performance, suggesting the relevance of bilateral asymmetry for risk prediction. Figure 2 visualizes Grad-CAM heatmaps of several subjects in the test set: in the left panel, STA-Risk-produced