be represented using factor graphs, which are bipartite graphs connecting variable nodes to factor nodes. Each factor f_j maps a subset X_j of variables to a non-negative value. The joint distribution is then P(x) = (1/Z) \prod_{j=1}^{m} f_j(x_j). While both representations are equivalent, factor graphs make inference structure explicit ...
https://arxiv.org/abs/2505.21671v1
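The factorization above can be made concrete in a few lines; the following is a minimal sketch with hypothetical pairwise factors over three binary variables (the factor tables are invented for illustration, not taken from the paper):

```python
from itertools import product

# Hypothetical pairwise factor graph over three binary variables x1, x2, x3.
# Each factor maps its scope to a non-negative value.
factors = [
    (("x1", "x2"), lambda a: 2.0 if a["x1"] == a["x2"] else 0.5),  # f1(x1, x2)
    (("x2", "x3"), lambda a: 1.5 if a["x2"] == a["x3"] else 1.0),  # f2(x2, x3)
]
variables = ["x1", "x2", "x3"]

def unnormalized(assignment):
    # Product of factor values f_j(x_j) for a full assignment.
    p = 1.0
    for _, f in factors:
        p *= f(assignment)
    return p

# Partition function Z: sum of the factor product over all assignments.
Z = sum(unnormalized(dict(zip(variables, vals)))
        for vals in product([0, 1], repeat=len(variables)))

def joint(assignment):
    # P(x) = (1/Z) * prod_j f_j(x_j)
    return unnormalized(assignment) / Z
```

Summing `joint` over all eight assignments recovers 1, which is exactly what the normalizer Z guarantees.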
with MRFs feasible in practice (see Section 4). Influence maximization. Another well-studied sequential decision problem on graphs is influence maximization, where the goal is to select seed nodes to maximize influence spread under stochastic propagation models such as the independent cascade or linear threshold [K...
Figure 2: Reduction to a branching bandit on 8 nodes with root X1. After acting on {X1, X3}, the frontier is {X2, X4, X6}. Note that we have P(x2 | x1, x3) = P(x2 | x1) by the Markov property.
To define the Gittins index, we assume the reward r(X, v) for revealing label v ∈ Σ at node X is bounded...
to the root while recalling that piecewise functions on a fixed domain are closed under addition, multiplication, differentiation, and integration. This later allows us to bound the running time for computing our Gittins policy in Theorem 6, as the number of pieces in the function changes additively as we combine ...
the function ϕ_{X,b} depends on all ϕ_{Z,v} functions, for all descendants Z of X and labels v ∈ Σ. However, our next result shows that we can in fact obtain a polynomial run time that is independent of the maximum depth d of the induced rooted trees; this is why the any-tree restriction works on Line 5. Theorem 6. Given graph G = (X, E) and...
horizon settings. We evaluated the policies against a family of randomly generated synthetic trees on n ∈ {10, 50, 100} nodes across various discount factors β ∈ {0.5, 0.7, 0.9}. We only run Optimal for small n = 10 instances, where the plots for Optimal and Gittins exactly overlap as expected. For n = 10, we exactly compute the ...
To better reflect practical scenarios where timely detection is important across the entire population, we use β = 0.99 in our experiments and report results under both discounted and undiscounted objectives. In Appendix C, we explain in further detail how we pre-process the dataset, produce joint distributions P for eac...
scope within which our results apply. Each of these extensions below presents a well-motivated and technically rich research challenge building on the foundation we establish. Table 1: Summary statistics of subsampled real-world sexual interaction graphs from [MR11]. CC stands for connected component, Max. depth ref...
Furthermore, our Bayesian formulation allows dynamic reweighting or calibration as more data is revealed, enabling adaptive policies that balance efficiency and equity. Exploring how to systematically integrate such fairness interventions into frontier-constrained graph exploration is a promising direction for future w...
Workshop on Representation Learning on Graphs and Manifolds, 2019. [GGW11] John Gittins, Kevin Glazebrook, and Richard Weber. Multi-armed Bandit Allocation Indices. John Wiley & Sons, 2011. [Git74] John Gittins. A dynamic allocation index for the sequential design of experiments. Progress in Statistics, pages 241–26...
Chin Hui Han, Christian Kroer, and Garud Iyengar. How Deep Is Your Defense-in-Depth? Hardening Cybersecurity Network Control Against Adaptive Attackers. AAAI 2025 Workshop on Artificial Intelligence for Cyber Security (AICS) , 2025. [MGDGO25] Heather Mattie, Ravi Goyal, Victor De Gruttola, and Jukka-Pekka Onnela. A Rev...
Now: AIDS at a Crossroads, 2024. Available at https://www.unaids.org/en/resources/documents/2024/global-aids-update-2024. [UNA25] UNAIDS. Impact of US funding cuts on the global AIDS response – 28 March 2025 update, 2025. Accessed: 28 April 2025. [WD92] Christopher J.C.H. Watkins and Peter Dayan. Q-Learning. Machin...
n individuals' infection statuses using a pairwise MRF defined over the interaction graph G = (X, E), where each node X_i represents an individual with a binary latent variable X_i ∈ {0, 1} indicating its HIV status, and each edge {X_i, X_j} ∈ E indicates a reported sexual interaction. Each individual also has associated covariates...
in Appendix A.1. We assume access to a historical dataset in which both the covariates and true HIV statuses are known. Classical approaches to MRF parameter learning such as [AKN06] typically assume access to multiple independent samples drawn from a fixed graphical model. Unfortunately, in our case, we only have acce...
also provide an error analysis in Appendix B.3 to bound the loss of the attainable discounted accumulated reward of an optimal policy computed on P_{θ̂1,θ̂2} while being executed on P.
B Deferred proof details
B.1 Gittins proofs
Lemma 4. For any node X ∈ X and label b ∈ Σ, ϕ_{X,b}(m) is a non-decreasing piecewise linear function ov...
index, let us first define two recursive functions ϕ and Φ, as per [KO03]. For any non-root node X ∈ X, label b ∈ Σ, and value 0 ≤ m ≤ r̄/(1−β),

ϕ_{X,b}(m) = max{ m, Σ_{v∈Σ} P(X = v | Pa(X) = b) · [ r(X, v) + β · Φ_{Ch(X),v}(m) ] }   (1)

If X is the root, we define ϕ_{X,∅}(m) = max{ m, Σ_{v∈Σ} P(X = v) · [ r(X, v) + β · Φ_{Ch(X),v}(m) ] }. For any subset of nodes S ⊆ X, lab...
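A minimal numeric sketch of the recursion in Eq. (1) for a single node follows; it assumes, hypothetically (the full definition of Φ is truncated here), that the continuation function for a childless node is the identity in m:

```python
def phi(m, p, r, beta, Phi=lambda m, v: m):
    # phi_{X,b}(m) = max(m, sum_v P(X=v | Pa(X)=b) * (r(X,v) + beta * Phi_{Ch(X),v}(m)))
    # Phi defaults to the identity in m, a stand-in for a leaf node.
    ev = sum(p[v] * (r[v] + beta * Phi(m, v)) for v in p)
    return max(m, ev)

# Toy binary-label node: P(X=1 | parent label b) = 0.3, reward r(X, v) = v.
p = {0: 0.7, 1: 0.3}
r = {0: 0.0, 1: 1.0}
# Evaluate phi on a grid over [0, r_bar/(1-beta)] = [0, 10] with beta = 0.9.
values = [phi(i / 10, p, r, 0.9) for i in range(101)]
```

On this grid ϕ is non-decreasing and lower-bounded by m, matching the monotone piecewise-linear behavior stated in Lemma 4; the exact algorithm manipulates the piecewise-linear pieces in closed form rather than on a grid.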
differentiate each ϕ_{Y,b} function, and then multiply them together. Integrating this resultant function and subtracting it from the constant r̄/(1−β) function requires only an additional O(1) operations on piecewise linear functions. Thus, computing all functions {Φ_{Ch(X),b}}_{b∈Σ} costs O(|Ch(X)| · |Σ|) operations on piecewise li...
label x_i ∈ {0, 1}, covariates c_i ∈ R^d, and neighborhood N(X_i). Define feature maps f1(x_i, c_i) and f2(x_i, x_j, c_i, c_j) as per Eq. (4) and Eq. (5) respectively, shared parameters θ1 ∈ R^{2+2d} and θ2 ∈ R^{4+5d}, and the joint probability as per Eq. (6). Then, the log-pseudolikelihood gradients are: ∂ log P̃_{θ1,θ2}(x)/∂θ1 = Σ_{i=1}^{n} α_i · [ f1(1, c^{(i)}) − f1(0, ...
the frontier and receives a label-dependent reward. We consider two MDPs defined over the same state space S, action mapping A : S → 2^X that restricts valid actions to untested frontier nodes, and discount factor β, but differing in the distribution used to infer infection status: • The learned MDP M = (S, A, P̂, R̂, β) is defin...
of use, interested researchers may independently access it via https://www.icpsr.umich.edu/web/ICPSR/studies/22140. To facilitate reproducibility, our experimental scripts and the code used for data preprocessing and parameter estimation are available at https://github.com/cxjdavin/adaptive-frontier-exploration-on-g...
β = 0.9. See Fig. 6 for the full 3×3 plot. Figure 6: Full experimental results for synthetic tree experiments. C.3 Full results for Section 4.2 As described in Section 4.2, we progressively add random non-tree edges to the synthetic trees to observe the change in relative performance of our policies. Across all experim...
and Hepatitis (1732 nodes). Throughout all experiments, we consistently see that Gittins outperforms or is competitive with the other baselines, both in terms of expected accumulated (un)discounted rewards and at any fixed timestep; e.g., the vertical dashed line in each plot indicates performance when only half the in...
arXiv:2505.21674v1 [cs.AI] 27 May 2025
Make Planning Research Rigorous Again!
Michael Katz, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, michael.katz1@ibm.com
Harsha Kokel, IBM Almaden Research Center, San Jose, CA 95120, harsha.kokel@ibm.com
Christian Muise, Queen's University, Kingston, Canada, christian.muise@...
https://arxiv.org/abs/2505.21674v1
with methodologies developed over the years through trial and error. This brings us to our primary position: “Make Planning Research Rigorous Again!” We believe that this rigor is the cornerstone of thoughtful and reproducible research that can be built upon. Therefore, our belief is that insights, methodologies, tools...
observable non-deterministic planning (FOND) [80, 85, 86] - where the action dynamics are non-deterministic.
• Temporal Planning [32, 37, 75] - where actions have durations and temporal constraints.
• Numeric Planning [48, 90, 104] - where actions can affect numeric state variables.
• Hierarchical Task Network (HTN) plan...
particularly the PDDL representation, has gained special attention in the era of LLMs. The existing work can be partitioned into learning the domain model [41, 88, 38, 107] and learning the problem instance representation, assuming the domain model exists [73, 119]. References to Some Tutorials: Readers can use the follow...
can solve any BlocksWorld instance - simply unstack all blocks and put them on the table in the first stage, and incrementally build the requested goal state in the second stage. Such policies are called generalized policies and are dealt with in the generalized planning subfield. Generalized policies solve planning pro...
search in the problem state space, guided by a heuristic function automatically extracted from the problem. Over the years, a large variety of search algorithms and heuristic functions were introduced, some guaranteeing the obtained solutions to be optimal. One such famous algorithm is A∗, a best-first search algorithm that...
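For concreteness, here is a compact sketch of A∗ on a toy 4-connected grid with the admissible Manhattan-distance heuristic (the grid size and unit costs are illustrative, not from the paper):

```python
import heapq

def a_star(start, goal, neighbors, h):
    # Best-first search ordered by f(n) = g(n) + h(n); with an admissible
    # (never over-estimating) heuristic h, the returned cost is optimal.
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable

# Toy 5x5 grid with unit-cost moves and Manhattan distance to (4, 4).
def neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)
```

Because Manhattan distance never over-estimates the remaining cost on this grid, the first time the goal is popped its g-value is the optimal plan cost.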
or simply unreachable from the initial state. In both cases, one should take great care with such states.
4 The Data
The majority of datasets used for evaluation of LLM-based planners fall into one of the following categories: (a) existing games repurposed for multi-step planning/search problems, (b) natural language p...
the solutions are accessible through planning resources (cf. Muise [83]). This issue is most aggravated when the benchmark is generated by scraping data from the internet. For instance, the “Game of 24” and the “Mini crosswords” datasets [114] were generated by scraping https://4nums.com and https://www.goobix.com/...
To overcome this issue, some prior works partition the test and train split based on the number of objects or the plan length. For example, the train set might include problems with 3–7 objects whereas the test set might include problems with 7–20 objects. This approach ensures that the test instances are structurally differe...
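A sketch of such a size-based split follows (the object counts and instance list are invented for illustration; the test range is shifted to start at 8 so the two sets stay disjoint):

```python
# Hypothetical pool of generated planning instances, each with an object count.
instances = [{"id": i, "num_objects": n}
             for i, n in enumerate([3, 4, 5, 6, 7, 8, 10, 15, 20])]

# Train on small instances, test only on structurally larger ones.
train = [p for p in instances if 3 <= p["num_objects"] <= 7]
test = [p for p in instances if p["num_objects"] > 7]

# No test instance size appears in training, so memorizing training plans
# cannot solve the test set.
assert not {p["num_objects"] for p in train} & {p["num_objects"] for p in test}
```

The same pattern applies when splitting by plan length instead of object count.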
these formalisms are ForbidIterative, K∗, and SymK. All three can be conveniently invoked from Python. The former two are offered as PyPI packages. The latter is accessible via the Unified Planning library. Among the most useful tools for data generation are PDDL problem generators, offering customizable software to gene...
with formal properties such as soundness, completeness, and, where applicable, optimality [64]. These considerations are particularly important when selecting baselines. In general, unsound methods should be avoided, or, at the very least, not compared directly to sound approaches, as they do not provide the same guara...
addition, if the proposed heuristic function is not guaranteed not to over-estimate the true goal cost, then this additional effort is essentially wasted, as the optimality of the found solution cannot be guaranteed. Comparing the search efforts across algorithms might be tricky, even if they provide the same plan quality gua...
useful experimental protocols. In addition to making this case, this paper also details basic terminology and pointers that could act as a starting point for researchers to learn more about existing work and the state of the art. In each section, we also laid out some advice for researchers less familiar with the field...
Rachid Alami, editors, Recent Advances in AI Planning. 4th European Conference on Planning (ECP 1997), volume 1348 of Lecture Notes in Artificial Intelligence, pages 130–142. Springer-Verlag, 1997. [16] Stephen A. Cook. The complexity of theorem-proving procedures. In Michael A. Harrison, Ranan B. Banerji, and Jeffre...
Conference on Artificial Intelligence (IJCAI 1999), pages 956–961. Morgan Kaufmann, 1999. [32] Maria Fox and Derek Long. PDDL2.1: An extension to PDDL for expressing temporal planning domains. Journal of Artificial Intelligence Research, 20:61–124, 2003. [33] Guillem Francés, Miquel Ramirez, and Collaborators. Tarski...
the Sixth International Conference on Artificial Intelligence Planning and Scheduling (AIPS 2002), pages 303–312. AAAI Press, 2002. [49] Malte Helmert. Complexity results for standard benchmark domains in planning. Artificial Intelligence, 143(2):219–262, 2003. [50] Malte Helmert. New complexity results for classical...
Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI 1996), pages 1194–1201. AAAI Press, 1996. [66] Emil Keyder and Héctor Geffner. Soft goals can be compiled away. Journal of Artificial Intelligence Research, 36:547–556, 2009. [67] Harsha Kokel, Michael Katz, Kavitha Srinivas, and Shi...
Neural Information Processing Systems (NeurIPS 2023), pages 59522–59542, 2023. [80] Robert Mattmüller, Manuela Ortlieb, Malte Helmert, and Pascal Bercher. Pattern database heuristics for fully observable nondeterministic planning. In Ronen Brafman, Héctor Geffner, Jörg Hoffmann, and Henry Kautz, editors, Proceedings o...
2010. [95] Jendrik Seipp, Álvaro Torralba, and Jörg Hoffmann. PDDL generators. https://doi.org/10.5281/zenodo.6382173, 2022. [96] Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Lu Wang, Ruoxi Jia, and Ming Jin. Algorithm of thoughts: Enhancing exploration of ideas in large language models. CoRR, abs/2308.10379, 2...
Minh B. Do, and Subbarao Kambhampati. Effective approaches for partial satisfaction (over-subscription) planning. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI 2004), pages 562–569. AAAI Press, 2004. [111] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F...
arXiv:2505.21677v1 [cs.LG] 27 May 2025
What happens when generative AI models train recursively on each others' generated outputs?
Hung Anh Vu, Galen Reeves, Emily Wenger∗
Department of Electrical and Computer Engineering, Duke University
Abstract. The internet is full of AI-generated content while also serving as a commo...
https://arxiv.org/abs/2505.21677v1
Quora is now AI-generated [57]. Given the increasing availability of these models for a variety of public-facing uses [2, 4, 28], AI-generated content from many different models will continue to proliferate. The standard practice of training on scraped internet data and the increasing prevalence of AI- generated conten...
research on collapse and calls for greater clarity and precision in discussing this phenomenon. Transfer learning and other model interactions. Significant prior work has studied the phenomenon of transfer learning, in which information learned by one model is passed to another, often by reusing the trained weights of ...
Another striking fact emerges from the categorization of training data in Table 1: large-scale model training datasets overlap. For example, GPT, Jamba, Llama, PaLM, and Phi are all trained on subsets of CommonCrawl [21], while GPT, Llama, and PaLM are all trained on Wikipedia and Books datasets. Several other models h...
fixed subset from the original and accumulated data at each model update. This scenario acknowledges the real-world compute limitations model trainers face. We believe that the accumulate-and-subsample paradigm best reflects reality, so we will leverage it in our work. We support this opinion with evidence from three w...
k; and D_t could be an internet scrape from after initial model training. We weight the relative impact of these data types by the ratios α, β.
• β, 0 ≤ β ≤ 1, is the relative size of the initial public data set D∗ compared to the initial private data set D̃_k. This fraction remains constant if/when initial data is reused for updates...
y, θ), where L(x, y, θ) := (y − x⊤θ)² is the squared error loss and 0 ≤ β0 ≤ 1 controls the relative weight placed on the private data. Training then proceeds for generation stages t = 1, 2, 3, . . . as follows: 1. Each entity k uses its most recent parameter estimate θ̂_{t−1} to generate new data D_{tk} = (x_{tki}, y_{tki})_{i=1}^{n_{tk}} according to...
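The generation loop above can be simulated for a one-dimensional special case (K = 1 entity, plain data accumulation instead of accumulate-and-subsample; all constants here are illustrative):

```python
import random

random.seed(0)
theta = 2.0  # true scalar parameter: y = theta * x + noise
n = 200

def sample(th):
    # Draw (x, y) pairs with labels generated by parameter th plus Gaussian noise.
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [th * x + 0.1 * random.gauss(0, 1) for x in xs]
    return xs, ys

def ols(xs, ys):
    # 1-D least squares: theta_hat = sum(x * y) / sum(x * x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

X_real, y_real = sample(theta)   # real (private) data
theta_hat = ols(X_real, y_real)  # initial fit

errs = []
for t in range(5):
    # Generation t: synthesize labels from the current estimate, pool with
    # the real data, and refit on the mixture.
    X_syn, y_syn = sample(theta_hat)
    theta_hat = ols(X_real + X_syn, y_real + y_syn)
    errs.append(abs(theta_hat - theta))
```

In this toy run the retained real data anchors the estimate, illustrating why keeping the original data in the training mixture matters for the asymptotic behavior studied in the theorems.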
If G_1, . . . , G_t are full rank, then

E[θ̂_t] = (I − Q_t · · · Q_1 (I − G_0 G_0^+)) (1_K ⊗ θ),   Cov(θ̂_t) = M_t diag(S̃^+, S_*^+) M_t^⊤ + C_t

To help interpret this result, observe that if G_0 is full rank, then each initial estimate is unbiased, and unbiasedness persists throughout every stage of training. Conversely, if G_0 is rank deficient, then at least...
ratio between the MSE of the minimum variance unbiased estimator based on all the real data (both private and public) and the asymptotic MSE obtained from Theorem 3. We use dimension 15 and rank 5.
6 Experiments with Large Language Models
To understand how our theoretical predictions bear out in practice, we perform exp...
The theoretical results (Figure 5) show that a variety of behaviors are possible. Models sometimes get slightly worse or become better at certain tasks. As α→1, the predicted error rises. These results are mirrored in the interactions of three OPT language models (Figure 4), and extend to interactions between two OPT m...
[Figure: grid of panels plotting test loss against model fitting generations (0–10) for different parameter settings. Evaluati...]
Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023. [8] Meta AI. The Future of AI: Built with Llama, 2024. https://ai.meta.com/blog/future-of-ai-built-with-llama/. [9] Meta AI. Llama 4 model card, 2025. https://github.com/meta-llama/llam...
Workshop on Theoretical Foundations of Foundation Models , 2024. [30] US GAO. Science and tech spotlight - generative ai in health care, 2024. GAO-24-107634, https://www.gao.gov/products/gao-24-107634 . [31] Matthias Gerstgrasser, Rylan Schaeffer, Apratim Dey, Rafael Rafailov, et al. Is model collapse inevitable? break...
ai-generated media in art subreddits. arXiv preprint arXiv:2410.07302, 2024. [48] Faustine Ngila. The copyright battles against OpenAI have begun. Quartz, 2023. https://qz.com/openai-lawsuit-copyright-books-chatgpt-generative-ai-1850609334. [49] Andrew J Peterson. AI and the problem of knowledge collapse. AI & SOCI...
Bookcorpus dataset, 2022. https://huggingface.co/datasets/SamuelYang/bookcorpus. [68] Hanlin Zhang, Benjamin L Edelman, Danilo Francati, Daniele Venturi, Giuseppe Ateniese, and Boaz Barak. Watermarks in the sand: Impossibility of strong watermarking for generative models. arXiv preprint arXiv:2311.04378, 2023. [69] ...
from future internet-scraped datasets? Several companies have publicly stated that they watermark AI-generated content [10,19,22], making this argument plausible. Furthermore, [25] show that using watermark detection techniques can help avoid model collapse under certain circumstances. However, reliance on watermarking...
[Figure: evaluation loss across generations for different α and β values (K = 2); panels plot test loss over model fitting generations (0–9) for Model 1 and Model 2 on Task 1 and Task 2.] Figure 6: Actual behavior over time for interactions betwee...
X_*^+ y_* and covariance

Cov(θ̂_t | D_0) = Cov(Q_t θ̂_{t−1} | D_0) + Cov(Q_t X_t^+ w) = Q_t C_{t−1} Q_t + σ² Q_t S_t^+ Q_t =: C_t.

This concludes the proof of Theorem 1.
D.2 Proof of Theorem 2
Under the assumptions of the theorem, we have that

(X̃^+ ỹ, X_*^+ y_*) ∼ N( diag(S̃ S̃^+, S_* S_*^+) (1_{K+1} ⊗ θ), σ² diag(S̃^+, S_*^+) ).   (5)

The goal for this proof is to verify that ...
arXiv:2505.21680v1 [cs.LG] 27 May 2025
multivariateGPT: a decoder-only transformer for multivariate categorical and numeric data
Andrew J. Loza1,2∗, Jun Yup Kim4, Shangzheng Song4, Yihang Liu4, Joseph J. Y. Sung4, R Andrew Taylor4, Dennis L. Shung3
1Department of Biomedical Informatics and Data Science, Yale School of Medicine
2De...
https://arxiv.org/abs/2505.21680v1
decoding methods. Additional challenges occur when numeric data modalities can have variable size, often requiring resizing or alternative methods before encoding (Han et al. [2022b], Tang et al. [2025]). A second class of models is based on neural ordinary differential equations, which use a continuous representation of...
value of τ. Each term on the right-hand side of Eq. 1 can be further decomposed:

P({X, τ}_i | S_{i−1}) = P(τ_i | S_{i−1}) ∏_{j=1}^{m} P(x_j | x_1, . . . , x_{j−1}, τ_i, S_{i−1})   (2)

where S_{i−1} is shorthand for the prior sequence of {X, τ} tuples, m is the number of data elements in the set X, and x_j are individual data elements within X. In this multiva...
a joint log-likelihood:

l_ci = − Σ_{j=1}^{C} c_j log(ĉ_j),   l_vi = − Σ_{j=1}^{C} c_j log P(v_j | μ̂_j, σ̂_j)   (7)

where l_ci is the per-token class loss, l_vi is the per-token conditional value loss, c_j is a binary indicator of the correct class index, ĉ_j are the predicted probabilities across classes, v_j is a vector with one non-zero element equal to the corre...
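Eq. (7) can be written directly; here is a sketch for a single token with three classes and hypothetical predictions, where the value term uses the standard Gaussian log-density (an assumption on the form of P(v_j | μ̂_j, σ̂_j)):

```python
import math

def token_losses(c, c_hat, v, mu_hat, sigma_hat):
    # l_ci: cross-entropy over class predictions; c is a one-hot vector over C classes.
    l_ci = -sum(cj * math.log(cjh) for cj, cjh in zip(c, c_hat))

    def log_normal_pdf(x, mu, sigma):
        # Log-density of N(mu, sigma^2) at x.
        return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

    # l_vi: negative Gaussian log-likelihood of the observed value, selected
    # by the one-hot indicator so only the correct class contributes.
    l_vi = -sum(cj * log_normal_pdf(vj, mu, s)
                for cj, vj, mu, s in zip(c, v, mu_hat, sigma_hat))
    return l_ci, l_vi
```

For a token whose correct class is predicted with probability 0.7, the class loss reduces to −log(0.7), and the value loss depends only on the Gaussian head of that class.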
a trajectory that was not present in the training data. The multivariate model generalizes to this unseen trajectory. The discrete model with 10 bins does not generalize and reproduces the orange trajectory from the training data. The discrete model with 100 bins initially approximates the orange trajectory and then fu...
models demonstrated improved performance with longer seed periods.

Table 1: Mean Squared Error (± Standard Error) for MAP and Heart Rate from various seed lengths for each model.

Model        | 3h              | 6h              | 9h              | 12h
Gemma 1.1-7b | 0.0999 ± 0.0050 | 0.1246 ± 0.0094 | 0.0461 ± 0.0018 | 0.0299 ± 0.0016
TFM_ODE      | 0.0268 ± 0.0004 | 0.0236 ± 0.0004 | 0.0166 ± 0.0...
unique challenges that are separate from the eICU data above. Data were decomposed into a sequence of tokens: [(Age, 63), (Sex, Male), (Lead I, 0.12), (Lead II, 0.15), . . . , (Lead I, −0.03), (Lead II, 0.01), . . .]. We evaluated the performance of multivariateGPT compared to a discrete transformer approach on this data se...
sampling, and stochasticity. Here we review related work for token-based and continuous function-based approaches. Token-based methods: A naive discrete token-based approach is not optimal for numeric values (Spathis and Kawsar [2024]). Careful tokenization of strings containing digits can improve performance on tasks B...
discrete methods and is more sample efficient, reaching high precision in trajectory reconstruction in fewer iterations than discrete models. Furthermore, vocabularies are smaller because no discretization is necessary. Experiments on real-world clinical data show improved performance over state-of-the-art models of ...
used improperly, this could lead to over- or under-treatment due to false positive or negative predictions. While this work focuses on clinical applications, our method is flexible and can be used to model any data source that can be flattened into sequences of class-value tuples (for example by wide-to-long tabular da...
S. Qiu, and A. G. Wilson. Large language models are zero-shot time series forecasters. In Advances in Neural Information Processing Systems, volume 36, pages 19622–19635, 2023. H. Han, J. Xu, M. Zhou, Y. Shao, S. Han, and D. Zhang. Luna: language understanding with number augmentations on transformers via number plu...
Reddy, H. Zhang, A. Alameddine, O. Uzan, Y. Pinter, and C. Tanner. Tokenization is more than compression. arXiv preprint arXiv:2402.18376, 2024. S. N. Shukla and B. M. Marlin. A survey on principles, models and methods for learning from irregularly sampled time series. arXiv preprint arXiv:2012.00168, 2020. A. K. Si...
The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Z. Zhou, J. Wang, D. Lin, and K. Chen. Scaling behavior for large language models regarding numeral systems: An example using Pythia. arXiv preprint arXiv:2409.17391, 2024. Y. Zhu, Z. Wang, J. Gao, Y. Tong, J. An, W. Liao, E. M. ...
or max training steps met. An example of lead reconstruction is shown below in Fig. 5. Figure 5: Example lead reconstructions of a limb lead (III) and precordial lead (V2). A.3 eICU Model and Training Details: The following details the different model specifications and hyperparameters for training models on the eICU...
LLMPR: A Novel LLM-Driven Transfer Learning based Petition Ranking Model
Avijit Gayen2, Somyajit Chakraborty3, Mainak Sen2, Soham Paul2, Angshuman Jana1*
1*Indian Institute of Information Technology Guwahati, Bongora, Guwahati, 781015, Assam, India.
2Techno India University, West Bengal, Salt Lake, Kolkata, 700091, W...
https://arxiv.org/abs/2505.21689v1
rules of assessing justice make it a more time-consuming process. Thus, the accessibility of justice for the common marginal people of society is far off. On the other hand, this delayed process of justice is capitalised on by the rich for their own self-interest [4]. This situation develops partiality in the ...
accepted for further legal proceedings. Specifically, we predict the rank of accepted petitions filed based on the statement of the petition framed by the legal practitioner. This ranking system would work as an automated system that identifies the importance based on contextual sensitivity and fundamental judicial pri...
included various numerical features, i.e., gap day, rank score, word count and sentence count, in our model.
• We have measured the performance of several machine learning algorithms on our dataset to identify the most suitable model in this context.
• Finally, we have validated our model predictions against the actual labels o...
citizens and how corruption has influenced the judicial system. It also discusses the suppression of legitimate justice under the strong control of political influence. In some works [21], authors highlighted the lack of data coherency across the different courts. In another work [22], Barno et al. observed that the ab...
Taiwanese Joint Platform. In [36], a decision support system was made using CNN+BiLSTM to predict the court decision based on past data. Zekun et al. in [37] proposed an explainable convolutional neural network model to enhance the e-petitions tagging system on the Message Board for Leaders (MBL) in China. This system ...
has 7593 values, without any empty/null recordings. There are correspondingly 3194 cases which were accepted (decision = 1) and 4399 cases which were rejected (decision = 0).
• Split: The 'split' column again has a data type of 'object' and helps us identify the splitting of the data. This column has 3 types ...
are majorly unstructured in nature. Initially, we cleaned and preprocessed the text to remove noise and irrelevant information. This included steps such as removing stop words, tokenization, stemming and lemmatization.
2. Text Embedding: In this work, we further use various LLM-based text embedding techniques to convert t...
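The cleaning steps listed above can be sketched as follows (the stop-word list and suffix-stripping stemmer are crude stand-ins for the NLTK-style tools typically used for this task):

```python
import re

STOP_WORDS = {"the", "is", "a", "of", "in", "and", "to"}  # illustrative subset

def preprocess(text):
    # Lowercase, tokenize on alphabetic runs, drop stop words, and apply a
    # crude suffix-stripping stemmer (a stand-in for Porter stemming).
    tokens = re.findall(r"[a-z]+", text.lower())
    tokens = [t for t in tokens if t not in STOP_WORDS]

    def stem(t):
        for suf in ("ing", "ed", "s"):
            if t.endswith(suf) and len(t) > len(suf) + 2:
                return t[: -len(suf)]
        return t

    return [stem(t) for t in tokens]
```

In a real pipeline, lemmatization would additionally map inflected forms to dictionary lemmas rather than bare suffix stripping.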
the models are evaluated by checking the correlation between the results produced from each of them. This is achieved by Spearman Rank Correlation, which finally tells us which model's ranking is preferred.
Fig. 1: Schematic diagram of Petition Ranking Model using Machine Learning.
3.3 Features used in Learning mode...
distillation to achieve compact yet high-quality embeddings [44]. Its mechanism focuses on preserving alignment between teacher and student model attention outputs, with the following objective:

L_distill = (1/T) Σ_{t=1}^{T} ‖A^{(t)}_teacher − A^{(t)}_student‖²,   (2)

where A represents...
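Eq. (2) reduces to a mean of squared differences between teacher and student attention outputs; a minimal sketch with attention outputs flattened to per-step vectors (the shapes are illustrative):

```python
def distill_loss(A_teacher, A_student):
    # L_distill = (1/T) * sum_t || A_teacher^(t) - A_student^(t) ||^2,
    # where each A^(t) is a flattened list of attention output values.
    T = len(A_teacher)
    total = 0.0
    for at, ast in zip(A_teacher, A_student):
        total += sum((x - y) ** 2 for x, y in zip(at, ast))
    return total / T
```

A student that exactly reproduces the teacher's attention outputs incurs zero loss, which is the alignment the distillation objective preserves.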
extract petition acceptance date and first proceeding date. Compute the number of days gap between them. [attached raw ILDC dataset]
Response GPT-4o: The dataset contains 7,593 entries and 4 columns: text, label, split, and name. Dates in various formats are parsed using regular expressions. Keywords like “filed...
Random Forest is an ensemble learning technique that combines the outputs of mul- tiple decision trees to produce a robust prediction. By training each tree on a random subset of the data and features, Random Forest reduces the risk of overfitting, which is a common issue in single decision tree models. The ensemble ap...
(12) where ‖β‖_1 is the L1 norm promoting sparsity, ‖β‖_2² is the L2 norm penalizing large coefficients, α ∈ [0, 1] controls the trade-off between L1 and L2 regularization, and λ is the regularization strength. ElasticNet is particularly useful when features are highly correlated or when there are more features than samples....
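The penalty described above can be computed directly; here is a sketch in the convex-combination form the text describes (whether the paper additionally halves the L2 term, as some libraries do, is not shown here):

```python
def elastic_net_penalty(beta, alpha, lam):
    # lambda * (alpha * ||beta||_1 + (1 - alpha) * ||beta||_2^2):
    # alpha = 1 recovers the Lasso penalty, alpha = 0 recovers Ridge.
    l1 = sum(abs(b) for b in beta)
    l2 = sum(b * b for b in beta)
    return lam * (alpha * l1 + (1 - alpha) * l2)
```

Intermediate α values blend the sparsity of L1 with the coefficient shrinkage of L2, which is why ElasticNet copes well with correlated features.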
evaluates the monotonic relationship between predicted and actual rank scores:

ρ = 1 − (6 Σ_{i=1}^{n} d_i²) / (n(n² − 1)),   (15)

where d_i is the difference between the ranks of y_i and ŷ_i. A high ρ value suggests a strong alignment between predicted and actual rankings. Accuracy: For regression tasks, accuracy was defined as the percentage ...
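Eq. (15) in code, assuming no ties (tied ranks would need the usual averaged-rank correction):

```python
def spearman_rho(y_true, y_pred):
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), with d_i the difference
    # between the ranks of the i-th item in the two lists.
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    r1, r2 = ranks(y_true), ranks(y_pred)
    n = len(y_true)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Perfectly concordant rankings give ρ = 1 and perfectly reversed rankings give ρ = −1, matching the interpretation in the text.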
ElasticNet 0.002 -0.001 -0.338 0.000
Decision Tree 0.002 -0.002 0.992 -0.001
XGBoost 0.002 -0.002 0.829 -0.000
LightGBM 0.002 0.006 0.692 0.008
CatBoost 0.002 -0.003 0.901 -0.001
MiniLM:
Random Forest 0.002 -0.002 0.991 -0.000
Linear Regression 0.002 -0.001 -0.380 0.000
ElasticNet 0.002 -0.001 -0.338 0.000
Decision Tree ...
0.991 and a test accuracy of 98.887% with DistilBERT embeddings, while Decision Tree exhibited near-perfect performance with a Spearman correlation of 0.992 and an accuracy of 99.629% for embeddings like MiniLM and E5. These results underscore the robustness of these models in capturing non-linear interactions and leve...
dataset, we applied advanced text embeddings such as DistilBERT, LegalBERT, and MiniLM, combined with numerical features like gap days, rank scores, and word count, to enhance ranking accuracy. Our evaluation showed that Random Forest and Decision Tree models outperformed others, achieving high accuracy (99%) and Spear...
G.L. Rosen, Predicting institution outcomes for inter partes review (IPR) proceedings at the United States Patent Trial & Appeal Board by deep learning of patent owner preliminary response briefs. Applied Sciences 12(7), 3656 (2022). [12] M.A.F. Faccioni, M. da Silva Lisboa, M.L. Rocha, D.N. Prata, G.V. Barbosa, in 2023...
judicial system in Indonesia: legal breakthrough of e-court and e-legal proceeding. Jurnal Hukum dan Peradilan 9(2), 275–297 (2020). [31] I. Benedetto, L. Cagliero, M. Ferro, F. Tarasconi, C. Bernini, G. Giacalone, Leveraging large language models for abstractive summarization of Italian legal news. Artificial Intelli...
L. Yang, D. Jiang, R. Majumder, F. Wei. Text embeddings by weakly-supervised contrastive pre-training (2024). URL https://arxiv.org/abs/2212.03533. [48] F. Pourpanah, M. Abdar, Y. Luo, X. Zhou, R. Wang, C.P. Lim, X.Z. Wang, Q.J. Wu, A review of generalized zero-shot learning methods. IEEE Transactions on Pattern Analysi...
arXiv:2505.21699v1 [eess.IV] 27 May 2025
STA-Risk: A Deep Dive of Spatio-Temporal Asymmetries for Breast Cancer Risk Prediction
Zhengbo Zhou⋆, Dooman Arefan†, Margarita Zuley†, Jules Sumkin†, Shandong Wu⋆†§
⋆Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA, USA
†Department of Radiology, University of Pitts...
https://arxiv.org/abs/2505.21699v1
differences often appear before overt lesions emerge [12]. Early work by Zheng et al. [19][20] illustrated that simple measures of contralateral breast differences, such as mammographic density or pixel-level fluctuation, are associated with short-term breast cancer risk. More recently, CNN- and RNN-based approaches ...
consecutive time-points (years), with each exam consisting of four standard mammographic images/views: left cranio-caudal (LCC), left mediolateral oblique (LMLO), right cranio-caudal (RCC), and right mediolateral oblique (RMLO). As illustrated in Figure 1, the STA-Risk pipeline comprises a Spatial Encoder and a Temporal Encoder. At ea...
model retains a fine-grained feature representation of left- or right-specific imaging traits and analyzes them in a nonlinear manner. 2) Contextual Modeling of Asymmetry. The Transformer learns nuanced relationships, where a small local change in one breast may be highly predictive of abnormality, while bilaterally sy...
user-defined margins. Let L_primary denote the main objective; we integrate them to customize a total loss as:

L_total = L_primary + λ L_asym,   (6)

where L_asym combines both cross-breast and longitudinal terms, and λ balances the strength of the asymmetry constraints. This joint formulation enables the STA-Risk model to leverage bil...
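Eq. (6) amounts to a weighted sum of loss terms; a minimal sketch, assuming (hypothetically, since the exact combination is not shown here) that L_asym adds the cross-breast and longitudinal terms:

```python
def total_loss(l_primary, l_cross_breast, l_longitudinal, lam=0.01):
    # L_total = L_primary + lambda * L_asym, where L_asym is assumed here to
    # be the sum of the cross-breast and longitudinal margin terms.
    l_asym = l_cross_breast + l_longitudinal
    return l_primary + lam * l_asym
```

With the small λ reported in the experiments (0.01), the asymmetry terms act as a soft regularizer on the primary risk objective rather than dominating it.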
a grid search over learning rates of 5e-5 and 1e-5. All experiments were conducted on an NVIDIA Tesla A100 GPU, courtesy of our institution's computing resources. The parameter λ for the asymmetric loss was set to 0.01, and the parameters m1, m2, m1′, m2′ were all set to 1, as determined empirically through experiments. Model perform...