Dataset columns (name: type, value lengths):
id: string, 64 characters
published: string, 19 to 25 characters
title: string, 7 to 262 characters
description: string, 6 to 54.4k characters
link: string, 31 to 227 characters
category: string, 6 classes
image: string, 3 to 247 characters
3b5924fb4f5c2634173199ad01818be3721fbbcd64c0d0a1264ad9f63cb8965f
2026-01-01T00:00:00-05:00
Attribution-Guided Distillation of Matryoshka Sparse Autoencoders
arXiv:2512.24975v1 Announce Type: new Abstract: Sparse autoencoders (SAEs) aim to disentangle model activations into monosemantic, human-interpretable features. In practice, learned features are often redundant and vary across training runs and sparsity levels, which makes interpretations difficult to transfer and reuse. We introduce Distilled Matryoshka Sparse Autoencoders (DMSAEs), a training pipeline that distills a compact core of consistently useful features and reuses it to train new SAEs. DMSAEs run an iterative distillation cycle: train a Matryoshka SAE with a shared core, use gradient × activation to measure each feature's contribution to next-token loss in the most nested reconstruction, and keep only the smallest subset that explains a fixed fraction of the attribution. Only the core encoder weight vectors are transferred across cycles; the core decoder and all non-core latents are reinitialized each time. On Gemma-2-2B layer 12 residual stream activations, seven cycles of distillation (500M tokens, 65k width) yielded a distilled core of 197 features that were repeatedly selected. Training using this distilled core improves several SAEBench metrics and demonstrates that consistent sets of latent features can be transferred across sparsity levels.
https://arxiv.org/abs/2512.24975
Academic Papers
svg
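The attribution-and-selection step in the DMSAE pipeline above (gradient × activation per feature, then keeping the smallest subset that explains a fixed fraction of total attribution) reduces to a few lines. A minimal sketch, assuming per-token feature activations and gradients of the next-token loss have already been extracted; `keep_fraction` and the mean aggregation over tokens are illustrative choices, not the paper's exact recipe:

```python
import numpy as np

def select_core_features(activations, grads, keep_fraction=0.9):
    """Rank SAE features by |gradient x activation| attribution and keep the
    smallest subset whose cumulative attribution reaches `keep_fraction`.

    activations: (n_tokens, n_features) feature activations
    grads:       (n_tokens, n_features) d(next-token loss)/d(activation)
    """
    # Per-feature attribution: mean |grad * act| over tokens.
    attribution = np.abs(grads * activations).mean(axis=0)
    order = np.argsort(attribution)[::-1]            # most important first
    cumulative = np.cumsum(attribution[order])
    k = int(np.searchsorted(cumulative, keep_fraction * cumulative[-1])) + 1
    return order[:k]                                 # indices of the core

# Toy run: 1000 tokens, 64 sparse features.
rng = np.random.default_rng(0)
acts = rng.exponential(1.0, (1000, 64)) * (rng.random((1000, 64)) < 0.1)
grads = rng.normal(size=(1000, 64))
print(len(select_core_features(acts, grads)), "core features kept")
```

In the actual pipeline, only the encoder weight vectors of the selected core would be carried into the next training cycle.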
e07eb8132efae4650cfd797f759ffdedb316703c5d8196aa9e48ad0e43c4a752
2026-01-01T00:00:00-05:00
A Modal Logic for Possibilistic Reasoning with Fuzzy Formal Contexts
arXiv:2512.24980v1 Announce Type: new Abstract: We introduce a two-sort weighted modal logic for possibilistic reasoning with fuzzy formal contexts. The syntax of the logic includes two types of weighted modal operators corresponding to classical necessity ($\Box$) and sufficiency ($\boxminus$) modalities, and its formulas are interpreted in fuzzy formal contexts based on possibility theory. We present its axiomatization, which is \emph{sound} with respect to the class of all fuzzy context models. In addition, both the necessity and sufficiency fragments of the logic are also individually complete with respect to the class of all fuzzy context models. We highlight the expressive power of the logic with some illustrative examples. As a formal context is the basic construct of formal concept analysis (FCA), we generalize three main notions in FCA, i.e., formal concepts, object-oriented concepts, and property-oriented concepts, to their corresponding $c$-cut concepts in fuzzy formal contexts. Then, we show that our logical language can represent all three of these generalized notions. Finally, we demonstrate the possibility of extending our logic to reasoning with multi-relational fuzzy contexts, in which Boolean combinations of different fuzzy relations are allowed.
https://arxiv.org/abs/2512.24980
Academic Papers
svg
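For orientation on the paper above, one standard possibility-theoretic reading of the two modalities over a fuzzy context relation is the following; this is the common Kleene-Dienes-style semantics from the fuzzy FCA literature, shown as an assumed illustration rather than the paper's exact weighted operators:

```latex
% Fuzzy context relation R : X x Y -> [0,1]; fuzzy set \varphi : Y -> [0,1].
% Graded necessity (\Box) and sufficiency (\boxminus) of \varphi at object x:
\[
  (\Box \varphi)(x) = \inf_{y \in Y} \max\bigl(1 - R(x,y),\, \varphi(y)\bigr),
  \qquad
  (\boxminus \varphi)(x) = \inf_{y \in Y} \max\bigl(1 - \varphi(y),\, R(x,y)\bigr).
\]
```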
7d24bdc6008be802d94f8f5fcd0cd8b05d1e9f92f038c4fb88497002dd0dad24
2026-01-01T00:00:00-05:00
DarkEQA: Benchmarking Vision-Language Models for Embodied Question Answering in Low-Light Indoor Environments
arXiv:2512.24985v1 Announce Type: new Abstract: Vision Language Models (VLMs) are increasingly adopted as central reasoning modules for embodied agents. Existing benchmarks evaluate their capabilities under ideal, well-lit conditions, yet robust 24/7 operation demands performance under a wide range of visual degradations, including low-light conditions at night or in dark environments--a core necessity that has been largely overlooked. To address this underexplored challenge, we present DarkEQA, an open-source benchmark for evaluating EQA-relevant perceptual primitives under multi-level low-light conditions. DarkEQA isolates the perception bottleneck by evaluating question answering from egocentric observations under controlled degradations, enabling attributable robustness analysis. A key design feature of DarkEQA is its physical fidelity: visual degradations are modeled in linear RAW space, simulating physics-based illumination drop and sensor noise followed by an ISP-inspired rendering pipeline. We demonstrate the utility of DarkEQA by evaluating a wide range of state-of-the-art VLMs and Low-Light Image Enhancement (LLIE) models. Our analysis systematically reveals VLMs' limitations when operating under these challenging visual conditions. Our code and benchmark dataset will be released upon acceptance.
https://arxiv.org/abs/2512.24985
Academic Papers
svg
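The degradation model described above (illumination drop and sensor noise applied in linear RAW space, followed by an ISP-inspired rendering) can be approximated in a short script. A generic physics-inspired sketch with assumed parameters (`light_level`, `read_noise`, the electron scaling), not the released DarkEQA pipeline:

```python
import numpy as np

def simulate_low_light(srgb, light_level=0.05, read_noise=0.01, gamma=2.2):
    """Degrade an sRGB image in linear space: dim, add noise, re-render.

    srgb: float array in [0, 1], shape (H, W, 3)
    light_level: multiplicative illumination drop (1.0 = original scene)
    """
    linear = srgb ** gamma                   # undo display gamma: linear-RAW proxy
    dimmed = linear * light_level            # physics-based illumination drop
    electrons = 1000.0                       # assumed photon-to-electron scaling
    shot = np.random.poisson(dimmed * electrons) / electrons       # shot noise
    noisy = shot + np.random.normal(0.0, read_noise, srgb.shape)   # read noise
    # ISP-inspired rendering: digital gain, clipping, display gamma.
    return np.clip(noisy / light_level, 0.0, 1.0) ** (1.0 / gamma)

frame = np.random.rand(64, 64, 3)            # stand-in for an egocentric frame
dark = simulate_low_light(frame, light_level=0.02)
```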
4818d305a907e9c88b6c36daef0a782fc33258f859d9cec131ab2d69a5bd37d3
2026-01-01T00:00:00-05:00
PhysTalk: Language-driven Real-time Physics in 3D Gaussian Scenes
arXiv:2512.24986v1 Announce Type: new Abstract: Realistic visual simulations are omnipresent, yet creating them requires compute time, rendering effort, and expert animation knowledge. Open-vocabulary visual effects generation from text inputs emerges as a promising solution that can unlock immense creative potential. However, current pipelines lack both physical realism and effective language interfaces, requiring slow offline optimization. In contrast, PhysTalk takes a 3D Gaussian Splatting (3DGS) scene as input and translates arbitrary user prompts into real-time, physics-based, interactive 4D animations. A large language model (LLM) generates executable code that directly modifies 3DGS parameters through lightweight proxies and particle dynamics. Notably, PhysTalk is the first framework to couple 3DGS directly with a physics simulator without relying on time-consuming mesh extraction. While remaining open-vocabulary, this design enables interactive 3D Gaussian animation via collision-aware, physics-based manipulation of arbitrary, multi-material objects. Finally, PhysTalk is training-free and computationally lightweight: this makes 4D animation broadly accessible and shifts these workflows from a "render and wait" paradigm toward an interactive dialogue with a modern, physics-informed pipeline.
https://arxiv.org/abs/2512.24986
Academic Papers
svg
42e32a87ec3a2a11fb0594cb788f7658656fac2c524fc77f3223ff82b753ab77
2026-01-01T00:00:00-05:00
Efficiently Estimating Data Efficiency for Language Model Fine-tuning
arXiv:2512.24991v1 Announce Type: new Abstract: While large language models (LLMs) demonstrate reasonable zero-shot capability across many downstream tasks, fine-tuning is a common practice to improve their performance. However, a task's data efficiency--i.e., the number of fine-tuning examples needed to achieve a desired level of performance--is often unknown, resulting in costly cycles of incremental annotation and retraining. Indeed, we demonstrate across a curated set of 30 specialized tasks that performant LLMs may struggle zero-shot but can attain stronger performance after fine-tuning. This motivates the need for methods to predict a task's data efficiency without requiring incremental annotation. After introducing a concrete metric that quantifies a task's data efficiency, we propose using the gradient cosine similarity of low-confidence examples to predict data efficiency based on a small number of labeled samples. We validate our approach on a diverse set of tasks with varying data efficiencies, attaining 8.6% error in overall data efficiency prediction and typically eliminating hundreds of unnecessary annotations on each task. Our experiment results and implementation code are available on GitHub.
https://arxiv.org/abs/2512.24991
Academic Papers
svg
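The predictor proposed above (gradient cosine similarity among low-confidence examples) is cheap once per-example gradients are available. A hedged sketch on toy vectors; how the paper selects low-confidence examples and maps similarity to a data-efficiency estimate is not reproduced here:

```python
import numpy as np

def mean_pairwise_cosine(grads):
    """Average pairwise cosine similarity of per-example gradient vectors.

    grads: (n_examples, n_params) flattened gradients. High average similarity
    suggests the examples push the model in a shared direction, so fewer
    labels should be needed (higher data efficiency).
    """
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sims = g @ g.T
    mask = ~np.eye(len(g), dtype=bool)       # drop self-similarities
    return sims[mask].mean()

rng = np.random.default_rng(1)
# 20 low-confidence examples, 512-dim gradients, with a shared component.
grads = rng.normal(size=(20, 512)) + 0.5
print(round(float(mean_pairwise_cosine(grads)), 3))
```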
312822a9a80c4ee3c587ec9abc6d60d66472fcbf3c8ee4dca5a8136b096d6c06
2026-01-01T00:00:00-05:00
Classifying long legal documents using short random chunks
arXiv:2512.24997v1 Announce Type: new Abstract: Classifying legal documents is challenging: besides their specialized vocabulary, they can be very long. Feeding full documents to Transformer-based models for classification may therefore be impossible, expensive, or slow. We present a legal document classifier based on DeBERTa V3 and an LSTM that takes as input a collection of 48 randomly selected short chunks (max 128 tokens). We also present its deployment pipeline using Temporal, a durable execution solution, which allows us to run a reliable and robust processing workflow. The best model achieved a weighted F-score of 0.898, while the pipeline running on CPU had a median processing time of 498 seconds per 100 files.
https://arxiv.org/abs/2512.24997
Academic Papers
svg
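The input construction used above (48 randomly selected chunks of at most 128 tokens each) is easy to reproduce. A minimal sketch over whitespace tokens; the actual classifier tokenizes with DeBERTa V3 and aggregates chunk representations with an LSTM:

```python
import random

def sample_chunks(tokens, n_chunks=48, chunk_len=128, seed=0):
    """Cut a token sequence into fixed-size chunks and pick n_chunks at random
    (with replacement when the document is too short)."""
    chunks = [tokens[i:i + chunk_len] for i in range(0, len(tokens), chunk_len)]
    rng = random.Random(seed)
    if len(chunks) >= n_chunks:
        return rng.sample(chunks, n_chunks)   # without replacement
    return [rng.choice(chunks) for _ in range(n_chunks)]

doc = ("whereas the party of the first part " * 2000).split()
chunks = sample_chunks(doc)
print(len(chunks), max(len(c) for c in chunks))   # 48 chunks, <= 128 tokens
```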
54b72fe0f5cdd1cef37d8abb54abc9ee08165fc850995e6e524327d7bf287bb1
2026-01-01T00:00:00-05:00
Bi-C2R: Bidirectional Continual Compatible Representation for Re-indexing Free Lifelong Person Re-identification
arXiv:2512.25000v1 Announce Type: new Abstract: Lifelong person Re-IDentification (L-ReID) exploits sequentially collected data to continuously train and update a ReID model, focusing on the overall performance across all data. Its main challenge is to avoid catastrophic forgetting of old knowledge while training on new data. Existing L-ReID methods typically re-extract features for all historical gallery images for inference after each update, known as "re-indexing". However, historical gallery data often cannot be stored directly due to data privacy concerns, and re-indexing large-scale gallery images is costly. As a result, retrieval inevitably becomes incompatible between query features extracted by the updated model and gallery features extracted before the update, greatly impairing re-identification performance. To tackle this issue, this paper focuses on a new task called Re-index Free Lifelong person Re-IDentification (RFL-ReID), which requires performing lifelong person re-identification without re-indexing historical gallery images. RFL-ReID is therefore more challenging than L-ReID: it requires continuously learning and balancing new and old knowledge in diverse streaming data while keeping the features output by the new and old models compatible with each other. To this end, we propose a Bidirectional Continual Compatible Representation (Bi-C2R) framework that continuously updates the gallery features extracted by the old model to perform efficient L-ReID in a compatible manner. We verify the proposed Bi-C2R method through theoretical analysis and extensive experiments on multiple benchmarks, which demonstrate that it achieves leading performance on both the introduced RFL-ReID task and the traditional L-ReID task.
https://arxiv.org/abs/2512.25000
Academic Papers
svg
7eeb4020d026c5a615e3e89fa941378a80365e332d9f45613d4ae57864b49c48
2026-01-01T00:00:00-05:00
FoundationSLAM: Unleashing the Power of Depth Foundation Models for End-to-End Dense Visual SLAM
arXiv:2512.25008v1 Announce Type: new Abstract: We present FoundationSLAM, a learning-based monocular dense SLAM system that addresses the absence of geometric consistency in previous flow-based approaches for accurate and robust tracking and mapping. Our core idea is to bridge flow estimation with geometric reasoning by leveraging the guidance from foundation depth models. To this end, we first develop a Hybrid Flow Network that produces geometry-aware correspondences, enabling consistent depth and pose inference across diverse keyframes. To enforce global consistency, we propose a Bi-Consistent Bundle Adjustment Layer that jointly optimizes keyframe pose and depth under multi-view constraints. Furthermore, we introduce a Reliability-Aware Refinement mechanism that dynamically adapts the flow update process by distinguishing between reliable and uncertain regions, forming a closed feedback loop between matching and optimization. Extensive experiments demonstrate that FoundationSLAM achieves superior trajectory accuracy and dense reconstruction quality across multiple challenging datasets, while running in real-time at 18 FPS, demonstrating strong generalization to various scenarios and practical applicability of our method.
https://arxiv.org/abs/2512.25008
Academic Papers
svg
da6a3257702305b0324d3287400ed3737c3b3480767523dda92ab32f8b6dad70
2026-01-01T00:00:00-05:00
At the intersection of Numerical Analysis and Spectral Geometry
arXiv:2512.25012v1 Announce Type: new Abstract: How do the geometric properties of a domain impact the spectrum of an operator defined on it? How do we compute accurate and reliable approximations of these spectra? The former question is studied in spectral geometry, and the latter is a central concern in numerical analysis. In this short expository survey we revisit the process of eigenvalue approximation, from the perspective of computational spectral geometry. Over the years a multitude of methods -- for discretizing the operator and for solving the resultant discrete system -- have been developed and analyzed in the field of numerical analysis. High-accuracy and provably convergent discretization approaches can be used to examine the interplay between the spectrum of an operator and the geometric properties of the spatial domain or manifold it is defined on. While computations have been used to guide conjectures in spectral geometry, in recent years approximation-theoretic tools and validated computations are also being used as part of proof strategies in spectral geometry. Given a particular spectral feature of interest, should we discretize the original problem, or seek a reformulation? Of the many possible approximation strategies, which should we choose? These choices are inextricably linked to the objective: on the one hand, rapid, specialized methods are often ideal for conjecture formulation (prioritizing efficiency and accuracy), whereas schemes with guaranteed, computable error bounds are needed when computation is incorporated into a proof strategy. We also review instances where the demanding requirements of spectral geometry -- the need for rigorous error control or the robust calculation of higher eigenvalues -- motivate new developments in numerical analysis.
https://arxiv.org/abs/2512.25012
Academic Papers
svg
2f802d2e644c9ea55362ee245d44e1b0f5de0d929ee2faac170568f786092eb3
2026-01-01T00:00:00-05:00
Diffusion Language Models are Provably Optimal Parallel Samplers
arXiv:2512.25014v1 Announce Type: new Abstract: Diffusion language models (DLMs) have emerged as a promising alternative to autoregressive models for faster inference via parallel token generation. We provide a rigorous foundation for this advantage by formalizing a model of parallel sampling and showing that DLMs augmented with polynomial-length chain-of-thought (CoT) can simulate any parallel sampling algorithm using an optimal number of sequential steps. Consequently, whenever a target distribution can be generated using a small number of sequential steps, a DLM can be used to generate the distribution using the same number of optimal sequential steps. However, without the ability to modify previously revealed tokens, DLMs with CoT can still incur large intermediate footprints. We prove that enabling remasking (converting unmasked tokens to masks) or revision (converting unmasked tokens to other unmasked tokens) together with CoT further allows DLMs to simulate any parallel sampling algorithm with optimal space complexity. We further justify the advantage of revision by establishing a strict expressivity gap: DLMs with revision or remasking are strictly more expressive than those without. Our results not only provide a theoretical justification for the promise of DLMs as the most efficient parallel sampler, but also advocate for enabling revision in DLMs.
https://arxiv.org/abs/2512.25014
Academic Papers
svg
3484aa3d42c69eabb03256e50c82879e4aef72ab7310430a5be56dc368bd3ee1
2026-01-01T00:00:00-05:00
MAMA-Memeia! Multi-Aspect Multi-Agent Collaboration for Depressive Symptoms Identification in Memes
arXiv:2512.25015v1 Announce Type: new Abstract: Over the past years, memes have evolved from being exclusively a medium of humorous exchange to one that allows users to express a range of emotions freely and easily. With the ever-growing use of memes to express depressive sentiments, we conduct a study on identifying depressive symptoms exhibited in memes shared by users of online social media platforms. We introduce RESTOREx as a vital resource for detecting depressive symptoms in memes on social media, built on Large Language Model (LLM)-generated and human-annotated explanations. We introduce MAMA-Memeia, a collaborative multi-agent, multi-aspect discussion framework grounded in the clinical psychology method of Cognitive Analytic Therapy (CAT) Competencies. MAMA-Memeia improves upon the current state of the art by 7.55% in macro-F1, establishing a new benchmark against over 30 methods.
https://arxiv.org/abs/2512.25015
Academic Papers
svg
607cd77647ab5f4b3a98bb4ecc0871dcb8c3d99c1a6d1289ba652ad37138c53e
2026-01-01T00:00:00-05:00
Approximations for the Weighted Reversal, Transposition, and Indel Distance Problem with Intergenic Region Information
arXiv:2512.25016v1 Announce Type: new Abstract: Genome rearrangement distances are an established method in genome comparison. Works in this area may include various rearrangement operations representing large-scale mutations, gene orientation information, the number of nucleotides in intergenic regions, and weights reflecting the expected frequency of each operation. In this article, we model genomes containing at most one copy of each gene by considering gene sequences, with orientations, and representing intergenic regions by their nucleotide lengths. We study a problem called Weighted Reversal, Transposition, and Indel Distance, which seeks the minimum-cost sequence of reversals, transpositions, and indels capable of transforming one genome into another. We leverage a structure called the Labeled Intergenic Breakpoint Graph to devise an algorithm for this problem with guaranteed approximation ratios for certain sets of operation weights.
https://arxiv.org/abs/2512.25016
Academic Papers
svg
39bfdcf80141c823007cc76bec4d5ba4fd3503030b07f09e709c3d2aaeb27327
2026-01-01T00:00:00-05:00
Convergence of the generalization error for deep gradient flow methods for PDEs
arXiv:2512.25017v1 Announce Type: new Abstract: The aim of this article is to provide a firm mathematical foundation for the application of deep gradient flow methods (DGFMs) for the solution of (high-dimensional) partial differential equations (PDEs). We decompose the generalization error of DGFMs into an approximation and a training error. We first show that the solution of PDEs that satisfy reasonable and verifiable assumptions can be approximated by neural networks, thus the approximation error tends to zero as the number of neurons tends to infinity. Then, we derive the gradient flow that the training process follows in the ``wide network limit'' and analyze the limit of this flow as the training time tends to infinity. These results combined show that the generalization error of DGFMs tends to zero as the number of neurons and the training time tend to infinity.
https://arxiv.org/abs/2512.25017
Academic Papers
svg
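The error split stated above follows the usual triangle-inequality pattern; written out with assumed notation ($u$ the PDE solution, $u_n^*$ the best $n$-neuron approximant, $u_{\theta(t)}$ the network at training time $t$):

```latex
\[
  \underbrace{\,\| u_{\theta(t)} - u \|\,}_{\text{generalization error}}
  \;\le\;
  \underbrace{\,\| u_n^{*} - u \|\,}_{\substack{\text{approximation error} \\ \to\, 0 \text{ as } n \to \infty}}
  \;+\;
  \underbrace{\,\| u_{\theta(t)} - u_n^{*} \|\,}_{\substack{\text{training error} \\ \to\, 0 \text{ as } t \to \infty}}
\]
```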
1bb390af364a438682e25baeba5a852b605bacf3f1830d2e95bd76974a20534b
2026-01-01T00:00:00-05:00
Approximation Algorithms for Fair Repetitive Scheduling
arXiv:2512.25020v1 Announce Type: new Abstract: We consider a recently introduced fair repetitive scheduling problem involving a set of clients, each asking for their associated job to be daily scheduled on a single machine across a finite planning horizon. The goal is to determine a job processing permutation for each day, aiming to minimize the maximum total completion time experienced by any client. This problem is known to be NP-hard for quite restrictive settings, with previous work offering exact solution methods for highly-structured special cases. In this paper, we focus on the design of approximation algorithms with provable performance guarantees. Our main contributions can be briefly summarized as follows: (i) When job processing times are day-dependent, we devise a polynomial-time LP-based $2$-approximation, as well as a polynomial-time approximation scheme for a constant number of days. (ii) With day-invariant processing times, we obtain a surprisingly simple $(\frac{1+\sqrt{2}}{2}+\epsilon)$-approximation in polynomial time. This setting is also shown to admit a quasi-polynomial-time approximation scheme for an arbitrary number of days. The key technical component driving our approximation schemes is a novel batching technique, where jobs are conceptually grouped into batches, subsequently leading either to a low-dimensional dynamic program or to a compact configuration LP. Concurrently, while developing our constant-factor approximations, we propose a host of lower-bounding mechanisms that may be of broader interest.
https://arxiv.org/abs/2512.25020
Academic Papers
svg
3e4f44a4725f6e847b86b6f64b1616a4fb8edaed8e29f4f095b764e270ba3845
2026-01-01T00:00:00-05:00
ResponseRank: Data-Efficient Reward Modeling through Preference Strength Learning
arXiv:2512.25023v1 Announce Type: new Abstract: Binary choices, as often used for reinforcement learning from human feedback (RLHF), convey only the direction of a preference. A person may choose apples over oranges and bananas over grapes, but which preference is stronger? Strength is crucial for decision-making under uncertainty and generalization of preference models, but hard to measure reliably. Metadata such as response times and inter-annotator agreement can serve as proxies for strength, but are often noisy and confounded. We propose ResponseRank to address the challenge of learning from noisy strength signals. Our method uses relative differences in proxy signals to rank responses to pairwise comparisons by their inferred preference strength. To control for systemic variation, we compare signals only locally within carefully constructed strata. This enables robust learning of utility differences consistent with strength-derived rankings while making minimal assumptions about the strength signal. Our contributions are threefold: (1) ResponseRank, a novel method that robustly learns preference strength by leveraging locally valid relative strength signals; (2) empirical evidence of improved sample efficiency and robustness across diverse tasks: synthetic preference learning (with simulated response times), language modeling (with annotator agreement), and RL control tasks (with simulated episode returns); and (3) the Pearson Distance Correlation (PDC), a novel metric that isolates cardinal utility learning from ordinal accuracy.
https://arxiv.org/abs/2512.25023
Academic Papers
svg
ad7e973539a1df1ba8ce589a9127479e84bb61713dbe1697e9b2aed69279ed5a
2026-01-01T00:00:00-05:00
Modeling Language as a Sequence of Thoughts
arXiv:2512.25026v1 Announce Type: new Abstract: Transformer language models can generate strikingly natural text by modeling language as a sequence of tokens. Yet, by relying primarily on surface-level co-occurrence statistics, they fail to form globally consistent latent representations of entities and events, a shortcoming that contributes to brittleness in relational direction (e.g., the reversal curse), contextualization errors, and data inefficiency. On the other hand, cognitive science shows that human comprehension involves converting the input linguistic stream into compact, event-like representations that persist in memory while verbatim form is short-lived. Motivated by this view, we introduce the Thought Gestalt (TG) model, a recurrent Transformer that models language at two levels of abstraction: tokens and sentence-level "thought" states. TG generates the tokens of one sentence at a time while cross-attending to a memory of prior sentence representations. In TG, token and sentence representations are generated using the same set of model parameters and trained with a single objective, the next-token cross-entropy: by retaining the computation graph of sentence representations written to memory, gradients from future token losses flow backward through cross-attention to optimize the parameters generating earlier sentence vectors. In scaling experiments, TG consistently improves efficiency over matched GPT-2 runs, among other baselines, with scaling fits indicating GPT-2 requires ~5-8% more data and ~33-42% more parameters to match TG's loss. TG also reduces errors on relational direction generalization on a father-son reversal curse probe.
https://arxiv.org/abs/2512.25026
Academic Papers
svg
298fb3b41e0d14979da8cb62f455e3fe553c524256184bb7a83d6952cca82eb7
2026-01-01T00:00:00-05:00
EF(X) Orientations: A Parameterized Complexity Perspective
arXiv:2512.25033v1 Announce Type: new Abstract: The concept of fair orientations in graphs was introduced by Christodoulou, Fiat, Koutsoupias, and Sgouritsa in 2023, naturally modeling fair division scenarios in which resources are only contested by neighbors. In this model, vertices represent agents and undirected edges represent goods; edges have to be oriented towards one of their endpoints, i.e., allocated to one of their adjacent agents. Although EFX orientations (envy-free up to any good) have been extensively studied in this setting, EF orientations (envy-free) remain unexplored. In this work, we initiate their study, mostly through the lens of parameterized complexity, presenting various tractable cases, hardness results, and parameterizations. Our results concern both simple graphs and multigraphs. Interestingly, many of our results transfer to EFX orientations, thus complementing and improving upon previous work; notably, we answer an open question regarding the structural parameterized complexity of the latter problem on graphs of polynomially-bounded valuations. We also show that EF orientations are tractable in cases in which EFX orientations are not, particularly for binary valuations. Lastly, we consider charity in the orientation setting, establishing algorithms for finding the minimum amount of edges that have to be removed from a graph in order for EF(X) orientations to exist.
https://arxiv.org/abs/2512.25033
Academic Papers
svg
9efdd867b9e4d18c74430c33464d3cb5fd69e3e4a4a681fa00cfcf49f1005b9d
2026-01-01T00:00:00-05:00
Generative Classifiers Avoid Shortcut Solutions
arXiv:2512.25034v1 Announce Type: new Abstract: Discriminative approaches to classification often learn shortcuts that hold in-distribution but fail even under minor distribution shift. This failure mode stems from an overreliance on features that are spuriously correlated with the label. We show that generative classifiers, which use class-conditional generative models, can avoid this issue by modeling all features, both core and spurious, instead of mainly spurious ones. These generative classifiers are simple to train, avoiding the need for specialized augmentations, strong regularization, extra hyperparameters, or knowledge of the specific spurious correlations to avoid. We find that diffusion-based and autoregressive generative classifiers achieve state-of-the-art performance on five standard image and text distribution shift benchmarks and reduce the impact of spurious correlations in realistic applications, such as medical or satellite datasets. Finally, we carefully analyze a Gaussian toy setting to understand the inductive biases of generative classifiers, as well as the data properties that determine when generative classifiers outperform discriminative ones.
https://arxiv.org/abs/2512.25034
Academic Papers
svg
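The recipe above (fit a class-conditional generative model per class, classify by Bayes' rule) has a compact classical instance, echoing the paper's Gaussian toy setting; the paper's actual classifiers use diffusion and autoregressive models as the class-conditional densities:

```python
import numpy as np
from scipy.stats import multivariate_normal

class GaussianGenerativeClassifier:
    """Fit p(x|c) per class; predict argmax_c log p(x|c) + log p(c)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.models, self.log_priors = {}, {}
        for c in self.classes:
            Xc = X[y == c]
            cov = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])   # regularized
            self.models[c] = multivariate_normal(Xc.mean(axis=0), cov)
            self.log_priors[c] = np.log(len(Xc) / len(X))
        return self

    def predict(self, X):
        scores = np.stack([self.models[c].logpdf(X) + self.log_priors[c]
                           for c in self.classes], axis=1)
        return self.classes[scores.argmax(axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print((GaussianGenerativeClassifier().fit(X, y).predict(X) == y).mean())
```

Because every feature enters the likelihood, no single spuriously correlated feature can dominate the decision, which is the intuition the abstract gives for shortcut avoidance.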
a288fc7af9051a19efa65ce0ebeae5340bbdf14bdcfa9f6f089f62c5a45f8dc8
2026-01-01T00:00:00-05:00
Thin Tree Verification is coNP-Complete
arXiv:2512.25043v1 Announce Type: new Abstract: An $\alpha$-thin tree $T$ of a graph $G$ is a spanning tree such that every cut of $G$ has at most an $\alpha$ proportion of its edges in $T$. The Thin Tree Conjecture proposes that there exists a function $f$ such that for any $\alpha > 0$, every $f(\alpha)$-edge-connected graph has an $\alpha$-thin tree. Aside from its independent interest, an algorithm which could efficiently construct an $O(1)/k$-thin tree for a given $k$-edge-connected graph would directly lead to an $O(1)$-approximation algorithm for the asymmetric travelling salesman problem (ATSP) (arXiv:0909.2849). However, it was not even known whether it is possible to efficiently verify that a given tree is $\alpha$-thin. We prove that determining the thinness of a tree is coNP-hard.
https://arxiv.org/abs/2512.25043
Academic Papers
svg
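The thinness property above is easy to check by brute force over all cuts, just not efficiently; the result says that no polynomial-time verifier exists unless standard complexity assumptions fail. A naive exponential-time sketch for small graphs:

```python
from itertools import combinations

def is_alpha_thin(n, edges, tree_edges, alpha):
    """Check whether tree_edges form an alpha-thin spanning tree: every cut
    (S, V \\ S) contains at most an alpha fraction of its edges in the tree.
    Enumerates all cuts, so only viable for tiny graphs."""
    tree = set(map(frozenset, tree_edges))
    for r in range(1, n):
        for S in map(set, combinations(range(n), r)):
            cut = [e for e in edges if (e[0] in S) != (e[1] in S)]
            in_tree = sum(1 for e in cut if frozenset(e) in tree)
            if cut and in_tree > alpha * len(cut):
                return False
    return True

# K4 with a Hamiltonian path as the spanning tree: worst cut ratio is 3/4.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_alpha_thin(4, edges, [(0, 1), (1, 2), (2, 3)], alpha=0.75))  # True
print(is_alpha_thin(4, edges, [(0, 1), (1, 2), (2, 3)], alpha=0.5))   # False
```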
80414b772cc5aa4bca63d4f9bbc92dd6e358aec3fd7b8564acf4dcc510626073
2026-01-01T00:00:00-05:00
AdaGReS: Adaptive Greedy Context Selection via Redundancy-Aware Scoring for Token-Budgeted RAG
arXiv:2512.25052v1 Announce Type: new Abstract: Retrieval-augmented generation (RAG) is highly sensitive to the quality of selected context, yet standard top-k retrieval often returns redundant or near-duplicate chunks that waste token budget and degrade downstream generation. We present AdaGReS, a redundancy-aware context selection framework for token-budgeted RAG that optimizes a set-level objective combining query-chunk relevance and intra-set redundancy penalties. AdaGReS performs greedy selection under a token-budget constraint using marginal gains derived from the objective, and introduces a closed-form, instance-adaptive calibration of the relevance-redundancy trade-off parameter to eliminate manual tuning and adapt to candidate-pool statistics and budget limits. We further provide a theoretical analysis showing that the proposed objective exhibits epsilon-approximate submodularity under practical embedding similarity conditions, yielding near-optimality guarantees for greedy selection. Experiments on open-domain question answering (Natural Questions) and a high-redundancy biomedical (drug) corpus demonstrate consistent improvements in redundancy control and context quality, translating to better end-to-end answer quality and robustness across settings.
https://arxiv.org/abs/2512.25052
Academic Papers
svg
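The selection loop above can be written directly from its description: the marginal gain is query relevance minus a redundancy penalty against already-chosen chunks, accepted greedily while the token budget holds. In this sketch the trade-off weight `lam` is hand-set, whereas AdaGReS derives a closed-form, instance-adaptive calibration for it:

```python
import numpy as np

def greedy_select(query_sim, chunk_sims, token_lens, budget, lam=0.5):
    """Greedy redundancy-aware chunk selection under a token budget.

    query_sim:  (n,) query-chunk relevance scores
    chunk_sims: (n, n) pairwise chunk-chunk similarities
    token_lens: (n,) token length of each chunk
    """
    selected, used = [], 0
    candidates = set(range(len(query_sim)))

    def gain(i):  # relevance minus worst-case overlap with the chosen set
        overlap = max((chunk_sims[i][j] for j in selected), default=0.0)
        return query_sim[i] - lam * overlap

    while candidates:
        feasible = [i for i in candidates if used + token_lens[i] <= budget]
        if not feasible:
            break
        best = max(feasible, key=gain)
        if gain(best) <= 0:
            break                    # no chunk adds positive marginal value
        selected.append(best)
        used += token_lens[best]
        candidates.discard(best)
    return selected

# Chunks 0 and 1 are near-duplicates; the budget admits two chunks.
rel = [0.9, 0.88, 0.6, 0.3]
sims = np.array([[1, .95, .2, .1], [.95, 1, .2, .1],
                 [.2, .2, 1, .1], [.1, .1, .1, 1]])
print(greedy_select(rel, sims, [100] * 4, budget=200))   # [0, 2], not [0, 1]
```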
1241f689d205780985167d921a496ee8d370b9eb9e436cef6045b12cd7f19253
2026-01-01T00:00:00-05:00
Context-aware LLM-based AI Agents for Human-centered Energy Management Systems in Smart Buildings
arXiv:2512.25055v1 Announce Type: new Abstract: This study presents a conceptual framework and a prototype assessment for Large Language Model (LLM)-based Building Energy Management System (BEMS) AI agents to facilitate context-aware energy management in smart buildings through natural language interaction. The proposed framework comprises three modules: perception (sensing), central control (brain), and action (actuation and user interaction), forming a closed feedback loop that captures, analyzes, and interprets energy data to respond intelligently to user queries and manage connected appliances. By leveraging the autonomous data analytics capabilities of LLMs, the BEMS AI agent seeks to offer context-aware insights into energy consumption, cost prediction, and device scheduling, thereby addressing limitations in existing energy management systems. The prototype's performance was evaluated using 120 user queries across four distinct real-world residential energy datasets and different evaluation metrics, including latency, functionality, capability, accuracy, and cost-effectiveness. The generalizability of the framework was demonstrated using ANOVA tests. The results revealed promising performance, measured by response accuracy in device control (86%), memory-related tasks (97%), scheduling and automation (74%), and energy analysis (77%), while more complex cost estimation tasks highlighted areas for improvement with an accuracy of 49%. This benchmarking study moves toward formalizing the assessment of LLM-based BEMS AI agents and identifying future research directions, emphasizing the trade-off between response accuracy and computational efficiency.
https://arxiv.org/abs/2512.25055
Academic Papers
svg
f1e67bcbc81a3eeff666c84970e5b7c2f9d20dde5bc4847136cd32fdae05f6d4
2026-01-01T00:00:00-05:00
Reliable and Resilient Collective Communication Library for LLM Training and Serving
arXiv:2512.25059v1 Announce Type: new Abstract: Modern ML training and inference now span tens to tens of thousands of GPUs, where network faults can waste 10--15\% of GPU hours due to slow recovery. Common network errors and link fluctuations trigger timeouts that often terminate entire jobs, forcing expensive checkpoint rollback during training and request reprocessing during inference. We present R$^2$CCL, a fault-tolerant communication library that provides lossless, low-overhead failover by exploiting multi-NIC hardware. R$^2$CCL performs rapid connection migration, bandwidth-aware load redistribution, and resilient collective algorithms to maintain progress under failures. We evaluate R$^2$CCL on two 8-GPU H100 InfiniBand servers and via large-scale ML simulators modeling hundreds of GPUs with diverse failure patterns. Experiments show that R$^2$CCL is highly robust to NIC failures, incurring less than 1\% training and less than 3\% inference overheads. R$^2$CCL outperforms baselines AdapCC and DejaVu by 12.18$\times$ and 47$\times$, respectively.
https://arxiv.org/abs/2512.25059
Academic Papers
svg
af2ce11fa5fc65e1cab13a520e50974797d9b623f5fcbed7d018ca8b336d000f
2026-01-01T00:00:00-05:00
On the geometry and topology of representations: the manifolds of modular addition
arXiv:2512.25060v1 Announce Type: new Abstract: The Clock and Pizza interpretations, associated with architectures differing in either uniform or learnable attention, were introduced to argue that different architectural designs can yield distinct circuits for modular addition. In this work, we show that this is not the case, and that both uniform attention and trainable attention architectures implement the same algorithm via topologically and geometrically equivalent representations. Our methodology goes beyond the interpretation of individual neurons and weights. Instead, we identify all of the neurons corresponding to each learned representation and then study the collective group of neurons as one entity. This method reveals that each learned representation is a manifold that we can study utilizing tools from topology. Based on this insight, we can statistically analyze the learned representations across hundreds of circuits to demonstrate the similarity between learned modular addition circuits that arise naturally from common deep learning paradigms.
https://arxiv.org/abs/2512.25060
Academic Papers
svg
b9e060c090a6d070241e91d4c2f0ffc3851ac8160e6a35db548b161f8b49922b
2026-01-01T00:00:00-05:00
Many Minds from One Model: Bayesian Transformers for Population Intelligence
arXiv:2512.25063v1 Announce Type: new Abstract: Despite their scale and success, modern transformers are almost universally trained as single-minded systems: optimization produces one deterministic set of parameters, representing a single functional hypothesis about the data. Motivated by the idea that intelligence emerges from many minds, we propose Population Bayesian Transformers (B-Trans), which transform a standard Large Language Model into a Bayesian Transformer that supports sampling diverse yet coherent model instances from a single set of pre-trained weights. B-Trans introduces a Bayesian-motivated posterior proxy by treating the bias-like offsets in normalization layers as stochastic variables with a Gaussian variational approximation, inducing a distribution over model behavior without the cost of training full Bayesian neural networks. Sampling from this proxy yields a set of model instances with diverse behaviors while maintaining general competence. To preserve coherence within each generation, we freeze the sampled noise at the sequence level, enforcing temporal consistency across tokens. B-Trans allows for population-level decision-making, where aggregating predictions across sampled individuals significantly enhances exploration. Experiments across zero-shot generation, Reinforcement Learning with Verifiable Rewards (RLVR), and RL without explicit labels demonstrate that B-Trans effectively leverages the wisdom of crowds, yielding superior semantic diversity while achieving better task performance compared to deterministic baselines.
https://arxiv.org/abs/2512.25063
Academic Papers
svg
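The core mechanism above (Gaussian bias-like offsets in normalization layers, with the sampled noise frozen for a whole sequence) fits in a small module. A PyTorch sketch under assumed shapes and names; the paper grafts this onto a pre-trained LLM's normalization layers rather than training from scratch:

```python
import torch
import torch.nn as nn

class StochasticBiasNorm(nn.Module):
    """LayerNorm whose bias offset is sampled from N(mu, sigma^2).

    Calling resample() draws one noise vector that is then reused for every
    token in the sequence, so each draw acts like one coherent 'individual'."""

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.mu = nn.Parameter(torch.zeros(dim))              # variational mean
        self.log_sigma = nn.Parameter(torch.full((dim,), -3.0))
        self.register_buffer("eps", torch.zeros(dim))

    def resample(self):
        self.eps = torch.randn_like(self.mu)                  # new individual

    def forward(self, x):                                     # x: (batch, T, dim)
        bias = self.mu + self.log_sigma.exp() * self.eps      # reparameterization
        return self.norm(x) + bias

layer = StochasticBiasNorm(16)
x = torch.randn(2, 10, 16)
layer.resample(); y1 = layer(x)      # one sampled 'mind', consistent across tokens
layer.resample(); y2 = layer(x)      # a different individual, same weights
print(torch.allclose(y1, y2))        # False (almost surely)
```

Averaging predictions over several such draws gives the population-level aggregation the abstract describes.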
d3cd3a380254cc683050ac297eaba230c0e539bfa359fb008a005fe58d3f40a2
2026-01-01T00:00:00-05:00
Vulcan: Instance-Optimal Systems Heuristics Through LLM-Driven Search
arXiv:2512.25065v1 Announce Type: new Abstract: Resource-management tasks in modern operating and distributed systems continue to rely primarily on hand-designed heuristics for tasks such as scheduling, caching, or active queue management. Designing performant heuristics is an expensive, time-consuming process that we are forced to continuously go through due to the constant flux of hardware, workloads and environments. We propose a new alternative: synthesizing instance-optimal heuristics -- specialized for the exact workloads and hardware where they will be deployed -- using code-generating large language models (LLMs). To make this synthesis tractable, Vulcan separates policy and mechanism through LLM-friendly, task-agnostic interfaces. With these interfaces, users specify the inputs and objectives of their desired policy, while Vulcan searches for performant policies via evolutionary search over LLM-generated code. This interface is expressive enough to capture a wide range of system policies, yet sufficiently constrained to allow even small, inexpensive LLMs to generate correct and executable code. We use Vulcan to synthesize performant heuristics for cache eviction and memory tiering, and find that these heuristics outperform all human-designed state-of-the-art algorithms by up to 69% and 7.9% in performance for these tasks, respectively.
https://arxiv.org/abs/2512.25065
Academic Papers
svg
a6243e8019ac57e06130ae83db868b3cc816857b1c888cb92648c174476d7d26
2026-01-01T00:00:00-05:00
From Inpainting to Editing: A Self-Bootstrapping Framework for Context-Rich Visual Dubbing
arXiv:2512.25066v1 Announce Type: new Abstract: Audio-driven visual dubbing aims to synchronize a video's lip movements with new speech, but is fundamentally challenged by the lack of ideal training data: paired videos where only a subject's lip movements differ while all other visual conditions are identical. Existing methods circumvent this with a mask-based inpainting paradigm, where an incomplete visual conditioning forces models to simultaneously hallucinate missing content and sync lips, leading to visual artifacts, identity drift, and poor synchronization. In this work, we propose a novel self-bootstrapping framework that reframes visual dubbing from an ill-posed inpainting task into a well-conditioned video-to-video editing problem. Our approach employs a Diffusion Transformer, first as a data generator, to synthesize ideal training data: a lip-altered companion video for each real sample, forming visually aligned video pairs. A DiT-based audio-driven editor is then trained on these pairs end-to-end, leveraging the complete and aligned input video frames to focus solely on precise, audio-driven lip modifications. This complete, frame-aligned input conditioning forms a rich visual context for the editor, providing it with complete identity cues, scene interactions, and continuous spatiotemporal dynamics. Leveraging this rich context fundamentally enables our method to achieve highly accurate lip sync, faithful identity preservation, and exceptional robustness against challenging in-the-wild scenarios. We further introduce a timestep-adaptive multi-phase learning strategy as a necessary component to disentangle conflicting editing objectives across diffusion timesteps, thereby facilitating stable training and yielding enhanced lip synchronization and visual fidelity. Additionally, we propose ContextDubBench, a comprehensive benchmark dataset for robust evaluation in diverse and challenging practical application scenarios.
https://arxiv.org/abs/2512.25066
Academic Papers
svg
16cb296a71957f000610de099e6e133000fe9f5c1ef4264c25aeb168159ae42a
2026-01-01T00:00:00-05:00
FineTec: Fine-Grained Action Recognition Under Temporal Corruption via Skeleton Decomposition and Sequence Completion
arXiv:2512.25067v1 Announce Type: new Abstract: Recognizing fine-grained actions from temporally corrupted skeleton sequences remains a significant challenge, particularly in real-world scenarios where online pose estimation often yields substantial missing data. Existing methods often struggle to accurately recover temporal dynamics and fine-grained spatial structures, resulting in the loss of subtle motion cues crucial for distinguishing similar actions. To address this, we propose FineTec, a unified framework for Fine-grained action recognition under Temporal Corruption. FineTec first restores a base skeleton sequence from corrupted input using context-aware completion with diverse temporal masking. Next, a skeleton-based spatial decomposition module partitions the skeleton into five semantic regions, further divides them into dynamic and static subgroups based on motion variance, and generates two augmented skeleton sequences via targeted perturbation. These, along with the base sequence, are then processed by a physics-driven estimation module, which utilizes Lagrangian dynamics to estimate joint accelerations. Finally, both the fused skeleton position sequence and the fused acceleration sequence are jointly fed into a GCN-based action recognition head. Extensive experiments on both coarse-grained (NTU-60, NTU-120) and fine-grained (Gym99, Gym288) benchmarks show that FineTec significantly outperforms previous methods under various levels of temporal corruption. Specifically, FineTec achieves top-1 accuracies of 89.1% and 78.1% on the challenging Gym99-severe and Gym288-severe settings, respectively, demonstrating its robustness and generalizability. Code and datasets can be found at https://smartdianlab.github.io/projects-FineTec/.
https://arxiv.org/abs/2512.25067
Academic Papers
svg
49d5b4d90c65ff7d4c79d6e30b8e8269c68843fe8617d6c8cafe3f277efc09b6
2026-01-01T00:00:00-05:00
Scaling Open-Ended Reasoning to Predict the Future
arXiv:2512.25070v1 Announce Type: new Abstract: High-stakes decision making involves reasoning under uncertainty about the future. In this work, we train language models to make predictions on open-ended forecasting questions. To scale up training data, we synthesize novel forecasting questions from global events reported in daily news, using a fully automated, careful curation recipe. We train the Qwen3 thinking models on our dataset, OpenForesight. To prevent leakage of future information during training and evaluation, we use an offline news corpus, both for data generation and retrieval in our forecasting system. Guided by a small validation set, we show the benefits of retrieval, and an improved reward function for reinforcement learning (RL). Once we obtain our final forecasting system, we perform held-out testing from May to August 2025. Our specialized model, OpenForecaster 8B, matches much larger proprietary models, with our training improving the accuracy, calibration, and consistency of predictions. We find calibration improvements from forecasting training generalize across popular benchmarks. We open-source all our models, code, and data to make research on language model forecasting broadly accessible.
https://arxiv.org/abs/2512.25070
Academic Papers
svg
e796d2065e19f91f659cfc241b002004c57b6facac1cf2cc007371865c7dd903
2026-01-01T00:00:00-05:00
Edit3r: Instant 3D Scene Editing from Sparse Unposed Images
arXiv:2512.25071v1 Announce Type: new Abstract: We present Edit3r, a feed-forward framework that reconstructs and edits 3D scenes in a single pass from unposed, view-inconsistent, instruction-edited images. Unlike prior methods requiring per-scene optimization, Edit3r directly predicts instruction-aligned 3D edits, enabling fast and photorealistic rendering without optimization or pose estimation. A key challenge in training such a model lies in the absence of multi-view consistent edited images for supervision. We address this with (i) a SAM2-based recoloring strategy that generates reliable, cross-view-consistent supervision, and (ii) an asymmetric input strategy that pairs a recolored reference view with raw auxiliary views, encouraging the network to fuse and align disparate observations. At inference, our model effectively handles images edited by 2D methods such as InstructPix2Pix, despite not being exposed to such edits during training. For large-scale quantitative evaluation, we introduce DL3DV-Edit-Bench, a benchmark built on the DL3DV test split, featuring 20 diverse scenes, 4 edit types and 100 edits in total. Comprehensive quantitative and qualitative results show that Edit3r achieves superior semantic alignment and enhanced 3D consistency compared to recent baselines, while operating at significantly higher inference speed, making it promising for real-time 3D editing applications.
https://arxiv.org/abs/2512.25071
Academic Papers
svg
ca5762cfa509890a6ea354f35b7926f1db6732a9cea1d22a78150301cc8a01c3
2026-01-01T00:00:00-05:00
Coordinated Humanoid Manipulation with Choice Policies
arXiv:2512.25072v1 Announce Type: new Abstract: Humanoid robots hold great promise for operating in human-centric environments, yet achieving robust whole-body coordination across the head, hands, and legs remains a major challenge. We present a system that combines a modular teleoperation interface with a scalable learning framework to address this problem. Our teleoperation design decomposes humanoid control into intuitive submodules, which include hand-eye coordination, grasp primitives, arm end-effector tracking, and locomotion. This modularity allows us to collect high-quality demonstrations efficiently. Building on this, we introduce Choice Policy, an imitation learning approach that generates multiple candidate actions and learns to score them. This architecture enables both fast inference and effective modeling of multimodal behaviors. We validate our approach on two real-world tasks: dishwasher loading and whole-body loco-manipulation for whiteboard wiping. Experiments show that Choice Policy significantly outperforms diffusion policies and standard behavior cloning. Furthermore, our results indicate that hand-eye coordination is critical for success in long-horizon tasks. Our work demonstrates a practical path toward scalable data collection and learning for coordinated humanoid manipulation in unstructured environments.
https://arxiv.org/abs/2512.25072
Academic Papers
svg
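The inference pattern above (propose several candidate actions, score them, execute the argmax) is straightforward to express. A generic PyTorch sketch with stand-in networks; the real system conditions on humanoid observations and is trained by imitation:

```python
import torch
import torch.nn as nn

class ChoicePolicy(nn.Module):
    """Generate k candidate actions, score each, and return the best one."""

    def __init__(self, obs_dim, act_dim, k=16):
        super().__init__()
        self.k = k
        self.proposer = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                      nn.Linear(128, k * act_dim))
        self.scorer = nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
                                    nn.Linear(128, 1))

    def forward(self, obs):
        b = obs.shape[0]
        cands = self.proposer(obs).view(b, self.k, -1)        # (b, k, act_dim)
        obs_rep = obs.unsqueeze(1).expand(-1, self.k, -1)
        scores = self.scorer(torch.cat([obs_rep, cands], -1)).squeeze(-1)
        return cands[torch.arange(b), scores.argmax(dim=1)]   # highest-scoring

policy = ChoicePolicy(obs_dim=32, act_dim=8)
print(policy(torch.randn(4, 32)).shape)                       # torch.Size([4, 8])
```

Scoring a fixed candidate set keeps inference fast while still representing multimodal behavior through the candidates themselves.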
7e308d17cfb8122f23fdd0583550e6a366c17b3c6993da8b174be682f74d5a98
2026-01-01T00:00:00-05:00
GaMO: Geometry-aware Multi-view Diffusion Outpainting for Sparse-View 3D Reconstruction
arXiv:2512.25073v1 Announce Type: new Abstract: Recent advances in 3D reconstruction have achieved remarkable progress in high-quality scene capture from dense multi-view imagery, yet struggle when input views are limited. Various approaches, including regularization techniques, semantic priors, and geometric constraints, have been implemented to address this challenge. Latest diffusion-based methods have demonstrated substantial improvements by generating novel views from new camera poses to augment training data, surpassing earlier regularization and prior-based techniques. Despite this progress, we identify three critical limitations in these state-of-the-art approaches: inadequate coverage beyond known view peripheries, geometric inconsistencies across generated views, and computationally expensive pipelines. We introduce GaMO (Geometry-aware Multi-view Outpainter), a framework that reformulates sparse-view reconstruction through multi-view outpainting. Instead of generating new viewpoints, GaMO expands the field of view from existing camera poses, which inherently preserves geometric consistency while providing broader scene coverage. Our approach employs multi-view conditioning and geometry-aware denoising strategies in a zero-shot manner without training. Extensive experiments on Replica and ScanNet++ demonstrate state-of-the-art reconstruction quality across 3, 6, and 9 input views, outperforming prior methods in PSNR and LPIPS, while achieving a $25\times$ speedup over SOTA diffusion-based methods with processing time under 10 minutes. Project page: https://yichuanh.github.io/GaMO/
https://arxiv.org/abs/2512.25073
Academic Papers
svg
c5b23fe82e18d6b1ed2a31764096238e709f7d9fcc4b5def69acb46e860977f5
2026-01-01T00:00:00-05:00
SpaceTimePilot: Generative Rendering of Dynamic Scenes Across Space and Time
arXiv:2512.25075v1 Announce Type: new Abstract: We present SpaceTimePilot, a video diffusion model that disentangles space and time for controllable generative rendering. Given a monocular video, SpaceTimePilot can independently alter the camera viewpoint and the motion sequence within the generative process, re-rendering the scene for continuous and arbitrary exploration across space and time. To achieve this, we introduce an effective animation time-embedding mechanism in the diffusion process, allowing explicit control of the output video's motion sequence with respect to that of the source video. As no datasets provide paired videos of the same dynamic scene with continuous temporal variations, we propose a simple yet effective temporal-warping training scheme that repurposes existing multi-view datasets to mimic temporal differences. This strategy effectively supervises the model to learn temporal control and achieve robust space-time disentanglement. To further enhance the precision of dual control, we introduce two additional components: an improved camera-conditioning mechanism that allows altering the camera from the first frame, and CamxTime, the first synthetic space-and-time full-coverage rendering dataset that provides fully free space-time video trajectories within a scene. Joint training on the temporal-warping scheme and the CamxTime dataset yields more precise temporal control. We evaluate SpaceTimePilot on both real-world and synthetic data, demonstrating clear space-time disentanglement and strong results compared to prior work. Project page: https://zheninghuang.github.io/Space-Time-Pilot/ Code: https://github.com/ZheningHuang/spacetimepilot
https://arxiv.org/abs/2512.25075
Academic Papers
svg
f52b6a35bb1ba89c4ca044b0d7321216d770975371d6436e7b863e6225dffa6a
2026-01-01T00:00:00-05:00
On Good-for-MDPs Automata
arXiv:2202.07629v4 Announce Type: cross Abstract: Nondeterministic good-for-MDPs (GFM) automata are for MDP model checking and reinforcement learning what good-for-games (GFG) automata are for reactive synthesis: a more compact alternative to deterministic automata that displays nondeterminism, but only so much that it can be resolved locally, such that a syntactic product can be analysed. GFM has recently been introduced as a property for reinforcement learning, where the simpler B\"uchi acceptance conditions it allows one to use are key. However, while there are classic and novel techniques to obtain automata that are GFM, there has not been a decision procedure for checking whether or not an automaton is GFM. We show that GFM-ness is decidable and provide an EXPTIME decision procedure as well as a PSPACE-hardness proof. We also compare the succinctness of GFM automata with other types of automata with restricted nondeterminism. The first natural comparison point are GFG automata. Deterministic automata are GFG, and GFG automata are GFM, but not vice versa. This raises the question of how these classes relate in terms of succinctness. GFG automata are known to be exponentially more succinct than deterministic automata, but the gap between GFM and GFG automata as well as the gap between ordinary nondeterministic automata and those that are GFM have been open. We establish that these gaps are exponential, and sharpen this result by showing that the latter gap remains exponential when restricting the nondeterministic automata to separating safety or unambiguous reachability automata.
https://arxiv.org/abs/2202.07629
Academic Papers
svg
98a988bf461638ea84b2de935381665eb5bfcafa30f0d2ee820f4b90cc2c61d1
2026-01-01T00:00:00-05:00
Comparative Evaluation of Embedding Representations for Financial News Sentiment Analysis
arXiv:2512.13749v1 Announce Type: cross Abstract: Financial sentiment analysis enhances market understanding; however, standard natural language processing approaches encounter significant challenges when applied to small datasets. This study provides a comparative evaluation of embedding-based methods for financial news sentiment classification in resource-constrained environments. Word2Vec, GloVe, and sentence transformer representations are evaluated in combination with gradient boosting on manually labeled headlines. Experimental results identify a substantial gap between validation and test performance, with models performing worse than trivial baselines despite strong validation metrics. The analysis demonstrates that pretrained embeddings yield diminishing returns below a critical data sufficiency threshold, and that small validation sets contribute to overfitting during model selection. Practical application is illustrated through weekly sentiment aggregation and narrative summarization for market monitoring workflows. The findings offer empirical evidence that embedding quality alone cannot address fundamental data scarcity in sentiment classification. For practitioners operating with limited resources, the results indicate the need to consider alternative approaches such as few-shot learning, data augmentation, or lexicon-enhanced hybrid methods when labeled samples are scarce.
https://arxiv.org/abs/2512.13749
Academic Papers
svg
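The strongest-embedding variant evaluated above (sentence-transformer features into gradient boosting) fits in a few lines; the checkpoint name and the tiny duplicated toy data are stand-ins, not the study's corpus:

```python
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

headlines = ["Shares surge after record quarterly earnings",
             "Regulator fines bank over compliance failures",
             "Company guidance raised on strong demand",
             "Profit warning sends stock tumbling"] * 10
labels = [1, 0, 1, 0] * 10                       # 1 = positive sentiment

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(headlines)                    # (n, 384) sentence embeddings
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

With realistically small labeled sets, the paper's warning applies: validation scores computed on a handful of examples are unreliable guides for model selection.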
3ad0d2146738f0d73b65552083fe8dc1a208494fdbee463449bbe3aad6a23aca
2026-01-01T00:00:00-05:00
q3-MuPa: Quick, Quiet, Quantitative Multi-Parametric MRI using Physics-Informed Diffusion Models
arXiv:2512.23726v1 Announce Type: cross Abstract: The 3D fast silent multi-parametric mapping sequence with zero echo time (MuPa-ZTE) is a novel quantitative MRI (qMRI) acquisition that enables nearly silent scanning by using a 3D phyllotaxis sampling scheme. MuPa-ZTE improves patient comfort and motion robustness, and generates quantitative maps of T1, T2, and proton density using the acquired weighted image series. In this work, we propose a diffusion model-based qMRI mapping method that leverages both a deep generative model and physics-based data consistency to further improve the mapping performance. Furthermore, our method enables additional acquisition acceleration, allowing high-quality qMRI mapping from a fourfold-accelerated MuPa-ZTE scan (approximately 1 minute). Specifically, we trained a denoising diffusion probabilistic model (DDPM) to map MuPa-ZTE image series to qMRI maps, and we incorporated the MuPa-ZTE forward signal model as an explicit data consistency (DC) constraint during inference. We compared our mapping method against a baseline dictionary matching approach and a purely data-driven diffusion model. The diffusion models were trained entirely on synthetic data generated from digital brain phantoms, eliminating the need for large real-scan datasets. We evaluated on synthetic data, a NIST/ISMRM phantom, healthy volunteers, and a patient with brain metastases. The results demonstrated that our method produces 3D qMRI maps with high accuracy, reduced noise and better preservation of structural details. Notably, it generalised well to real scans despite training on synthetic data alone. The combination of the MuPa-ZTE acquisition and our physics-informed diffusion model is termed q3-MuPa, a quick, quiet, and quantitative multi-parametric mapping framework, and our findings highlight its strong clinical potential.
https://arxiv.org/abs/2512.23726
Academic Papers
svg
99dc492e8e37fb08197cfd323b7fa03a8d5f8ea2aef43bf3f60a99e030ad362f
2026-01-01T00:00:00-05:00
Spike-Timing-Dependent Plasticity for Bernoulli Message Passing
arXiv:2512.23728v1 Announce Type: cross Abstract: Bayesian inference provides a principled framework for understanding brain function, while neural activity in the brain is inherently spike-based. This paper bridges these two perspectives by designing spiking neural networks that simulate Bayesian inference through message passing for Bernoulli messages. To train the networks, we employ spike-timing-dependent plasticity, a biologically plausible mechanism for synaptic plasticity which is based on the Hebbian rule. Our results demonstrate that the network's performance closely matches the true numerical solution. We further demonstrate the versatility of our approach by implementing a factor graph example from coding theory, illustrating signal transmission over an unreliable channel.
https://arxiv.org/abs/2512.23728
Academic Papers
svg
d3b3cfcce53fbac63dfe9cdeeb89505a82b110b002858d20db0b3448a57e59c6
2026-01-01T00:00:00-05:00
Leveraging Machine Learning for Early Detection of Lung Diseases
arXiv:2512.23757v1 Announce Type: cross Abstract: Combining traditional image processing methods with advanced neural networks supports a predictive and preventive healthcare paradigm. This study offers rapid, accurate, and non-invasive diagnostic solutions that can significantly impact patient outcomes, particularly in areas with limited access to radiologists and healthcare resources. In this project, deep learning methods are applied to enhance the diagnosis of respiratory diseases such as COVID-19, lung cancer, and pneumonia from chest X-rays. We trained and validated various neural network models, including CNNs, VGG16, InceptionV3, and EfficientNetB0, achieving high accuracy, precision, recall, and F1 scores that highlight the models' reliability and potential in real-world diagnostic applications.
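A minimal PyTorch sketch of the kind of transfer-learning classifier the study describes, assuming a three-class chest X-ray task; the ResNet-18 stand-in backbone, hyperparameters, and dummy batch are hypothetical choices, not the paper's configuration.

# One training step of a CNN classifier for chest X-ray categories.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3
# weights=None keeps the sketch offline (torchvision >= 0.13 API).
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

x = torch.randn(4, 3, 224, 224)                 # dummy X-ray batch
y = torch.randint(0, NUM_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
print("one training step, loss =", float(loss))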
https://arxiv.org/abs/2512.23757
Academic Papers
svg
968237e3aea93f607da4d23098bd4b031178a67101ab5c4820fdd5ac090cf2b2
2026-01-01T00:00:00-05:00
Stochastic Galerkin Method and Hierarchical Preconditioning for PDE-constrained Optimization
arXiv:2512.23804v1 Announce Type: cross Abstract: We develop efficient hierarchical preconditioners for optimal control problems governed by partial differential equations with uncertain coefficients. Adopting a discretize-then-optimize framework that integrates finite element discretization, stochastic Galerkin approximation, and advanced time-discretization schemes, the approach addresses the challenge of large-scale, ill-conditioned linear systems arising in uncertainty quantification. By exploiting the sparsity inherent in generalized polynomial chaos expansions, we derive hierarchical preconditioners based on truncated stochastic expansion that strike an effective balance between computational cost and preconditioning quality. Numerical experiments demonstrate that the proposed preconditioners significantly accelerate the convergence of iterative solvers compared to existing methods, providing robust and efficient solvers for both steady-state and time-dependent optimal control applications under uncertainty.
https://arxiv.org/abs/2512.23804
Academic Papers
svg
112bbfaa7b47ec11a2969165caa447f80cfb7b1d2e7236596fcdbfd54354d9e7
2026-01-01T00:00:00-05:00
Fitted Q Evaluation Without Bellman Completeness via Stationary Weighting
arXiv:2512.23805v1 Announce Type: cross Abstract: Fitted Q-evaluation (FQE) is a central method for off-policy evaluation in reinforcement learning, but it generally requires Bellman completeness: that the hypothesis class is closed under the evaluation Bellman operator. This requirement is challenging because enlarging the hypothesis class can worsen completeness. We show that the need for this assumption stems from a fundamental norm mismatch: the Bellman operator is $\gamma$-contractive under the stationary distribution of the target policy, whereas FQE minimizes Bellman error under the behavior distribution. We propose a simple fix: reweight each regression step using an estimate of the stationary density ratio, thereby aligning FQE with the norm in which the Bellman operator contracts. This enables strong evaluation guarantees in the absence of realizability or Bellman completeness, avoiding the geometric error blow-up of standard FQE in this setting while maintaining the practicality of regression-based evaluation.
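A minimal sketch of the proposed reweighting idea with linear function approximation, assuming the stationary density ratios are already estimated: each Bellman-target regression is solved as a weighted least-squares problem. All features, rewards, and ratios below are synthetic placeholders.

# Reweighted fitted Q-evaluation with linear features.
import numpy as np

rng = np.random.default_rng(1)
n, d, gamma = 200, 8, 0.95
phi = rng.normal(size=(n, d))        # features of (s, a)
phi_next = rng.normal(size=(n, d))   # features of (s', pi(s'))
r = rng.normal(size=n)               # rewards
w = rng.uniform(0.5, 2.0, size=n)    # estimated stationary density ratios

theta = np.zeros(d)
for _ in range(50):                  # fitted Q-evaluation iterations
    target = r + gamma * phi_next @ theta
    sw = np.sqrt(w)                  # weighted least squares via sqrt-weights
    theta, *_ = np.linalg.lstsq(sw[:, None] * phi, sw * target, rcond=None)
print("fitted parameters:", np.round(theta, 3))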
https://arxiv.org/abs/2512.23805
Academic Papers
svg
9d9e9b3ce1c5a70fee12d67594564bf52cb4fb3277dcef2ff54ef23ad59cfcd9
2026-01-01T00:00:00-05:00
Syndrome aware mitigation of logical errors
arXiv:2512.23810v1 Announce Type: cross Abstract: Broad applications of quantum computers will require error correction (EC). However, quantum hardware roadmaps indicate that physical qubit numbers will remain limited in the foreseeable future, leading to residual logical errors that limit the size and accuracy of achievable computations. Recent work suggested logical error mitigation (LEM), which applies known error mitigation (EM) methods to logical errors, eliminating their effect at the cost of a runtime overhead. Improving the efficiency of LEM is crucial for increasing the logical circuit volumes that can be executed. We introduce syndrome-aware logical error mitigation (SALEM), which makes use of the syndrome data measured during error correction, when mitigating the logical errors. The runtime overhead of SALEM is exponentially lower than that of previously proposed LEM schemes, resulting in significantly increased circuit volumes that can be executed accurately. Notably, relative to the routinely used combination of error correction and syndrome rejection (post-selection), SALEM increases the size of reliably executable computations by orders of magnitude. In this practical setting in which space and time are both resources that need to be optimized, our work reveals a surprising phenomenon: SALEM, which tightly combines EC with EM, can outperform physical EM even above the standard fault-tolerance threshold. Thus, SALEM can make use of EC in regimes of physical error rates at which EC is commonly deemed useless.
https://arxiv.org/abs/2512.23810
Academic Papers
svg
d79226c46c6e4115d9b719f177b96ca880b4ac48e41d33fbeff2a652a531c012
2026-01-01T00:00:00-05:00
Quantum Error Mitigation with Attention Graph Transformers for Burgers Equation Solvers on NISQ Hardware
arXiv:2512.23817v1 Announce Type: cross Abstract: We present a hybrid quantum-classical framework augmented with learned error mitigation for solving the viscous Burgers equation on noisy intermediate-scale quantum (NISQ) hardware. Using the Cole-Hopf transformation, the nonlinear Burgers equation is mapped to a diffusion equation, discretized on uniform grids, and encoded into a quantum state whose time evolution is approximated via Trotterized nearest-neighbor circuits implemented in Qiskit. Quantum simulations are executed on noisy Aer backends and IBM superconducting quantum devices and are benchmarked against high-accuracy classical solutions obtained using a Krylov-based solver applied to the corresponding discretized Hamiltonian. From measured quantum amplitudes, we reconstruct the velocity field and evaluate physical and numerical diagnostics, including the L2 error, shock location, and dissipation rate, both with and without zero-noise extrapolation (ZNE). To enable data-driven error mitigation, we construct a large parametric dataset by sweeping viscosity, time step, grid resolution, and boundary conditions, producing matched tuples of noisy, ZNE-corrected, hardware, and classical solutions together with detailed circuit metadata. Leveraging this dataset, we train an attention-based graph neural network that incorporates circuit structure, light-cone information, global circuit parameters, and noisy quantum outputs to predict error-mitigated solutions. Across a wide range of parameters, the learned model consistently reduces the discrepancy between quantum and classical solutions beyond what is achieved by ZNE alone. We discuss extensions of this approach to higher-dimensional Burgers systems and more general quantum partial differential equation solvers, highlighting learned error mitigation as a promising complement to physics-based noise reduction techniques on NISQ devices.
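For reference, a minimal sketch of zero-noise extrapolation (ZNE), the physics-based baseline the learned model is compared against: measure an observable at amplified noise levels and extrapolate to the zero-noise limit with a polynomial fit. The scale factors and expectation values below are synthetic placeholders.

# Richardson-style zero-noise extrapolation of an observable.
import numpy as np

scale_factors = np.array([1.0, 2.0, 3.0])       # noise amplification factors
noisy_values = np.array([0.82, 0.67, 0.55])     # measured <O> at each factor

coeffs = np.polyfit(scale_factors, noisy_values, deg=2)  # polynomial fit
zne_estimate = np.polyval(coeffs, 0.0)                   # extrapolate to zero noise
print("ZNE estimate of <O>:", round(float(zne_estimate), 4))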
https://arxiv.org/abs/2512.23817
Academic Papers
svg
235f408ca37f448e76b4672a155f8c8ca3a7d29791fb2db62a14c6444ad93c37
2026-01-01T00:00:00-05:00
Energy-Tweedie: Score meets Score, Energy meets Energy
arXiv:2512.23818v1 Announce Type: cross Abstract: Denoising and score estimation have long been known to be linked via the classical Tweedie's formula. In this work, we first extend the latter to a wider range of distributions, often called "energy models" and referred to as elliptical distributions in this work. Next, we examine an alternative view: we consider the denoising posterior $P(X|Y)$ as the optimizer of the energy score (a scoring rule) and derive a fundamental identity that connects the (path-) derivative of a (possibly) non-Euclidean energy score to the score of the noisy marginal. This identity can be seen as an analog of Tweedie's identity for the energy score, and allows for several interesting applications; for example, score estimation, noise distribution parameter estimation, as well as using energy score models in the context of "traditional" diffusion model samplers with a wider array of noising distributions.
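For reference, the classical Tweedie identity that this abstract generalizes: for Gaussian noise $Y = X + \sigma Z$ with $Z \sim \mathcal{N}(0, I)$, the denoising posterior mean satisfies
$$\mathbb{E}[X \mid Y = y] = y + \sigma^2 \nabla_y \log p_Y(y),$$
tying the optimal denoiser to the score of the noisy marginal; the paper extends identities of this type to elliptical distributions and to the energy score.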
https://arxiv.org/abs/2512.23818
Academic Papers
svg
dd132e7ff9109649c5e23991540f92413cabd132710c04ef0b13ada2bee92825
2026-01-01T00:00:00-05:00
The Flow-Limit of Reflect-Reflect-Relax: Existence, Stability, and Discrete-Time Behavior
arXiv:2512.23843v1 Announce Type: cross Abstract: We study the Reflect-Reflect-Relax (RRR) algorithm in its small-step (flow-limit) regime. In the smooth transversal setting, we show that the transverse dynamics form a hyperbolic sink, yielding exponential decay of a natural gap measure. Under uniform geometric assumptions, we construct a tubular neighborhood of the feasible manifold on which the squared gap defines a strict Lyapunov function, excluding recurrent dynamics and chaotic behavior within this basin. In the discrete setting, the induced flow is piecewise constant on W-domains and supports Filippov sliding along convergent boundaries, leading to finite-time capture into a solution domain. We prove that small-step RRR is a forward-Euler discretization of this flow, so that solution times measured in rescaled units converge to a finite limit while iteration counts diverge, explaining the emergence of iteration-optimal relaxation parameters. Finally, we introduce a heuristic mesoscopic framework based on percolation and renormalization group to organize performance deterioration near the Douglas-Rachford limit.
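A minimal numerical sketch of the RRR iteration discussed above, $x \leftarrow x + (\beta/2)(R_B(R_A(x)) - x)$ with reflections $R = 2P - I$, run in its small-step (flow-limit) regime on two toy lines in $\mathbb{R}^2$; the constraint sets and step size are illustrative.

# Reflect-Reflect-Relax on two lines through the origin.
import numpy as np

def proj_A(x):                       # projection onto the line y = 0
    return np.array([x[0], 0.0])

def proj_B(x):                       # projection onto the line y = x
    m = (x[0] + x[1]) / 2.0
    return np.array([m, m])

def reflect(proj, x):
    return 2.0 * proj(x) - x

beta = 0.05                          # small relaxation step (flow-limit regime)
x = np.array([3.0, -2.0])
for _ in range(2000):
    x = x + 0.5 * beta * (reflect(proj_B, reflect(proj_A, x)) - x)
print("near the intersection (0,0):", np.round(x, 4))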
https://arxiv.org/abs/2512.23843
Academic Papers
svg
3a7516a76bf91aea120897ed42e50524fdfeeb02beb6f2e8704fc6ecbf7319e3
2026-01-01T00:00:00-05:00
A Test of Lookahead Bias in LLM Forecasts
arXiv:2512.23847v1 Announce Type: cross Abstract: We develop a statistical test to detect lookahead bias in economic forecasts generated by large language models (LLMs). Using state-of-the-art pre-training data detection techniques, we estimate the likelihood that a given prompt appeared in an LLM's training corpus, a statistic we term Lookahead Propensity (LAP). We formally show that a positive correlation between LAP and forecast accuracy indicates the presence and magnitude of lookahead bias, and apply the test to two forecasting tasks: news headlines predicting stock returns and earnings call transcripts predicting capital expenditures. Our test provides a cost-efficient, diagnostic tool for assessing the validity and reliability of LLM-generated forecasts.
https://arxiv.org/abs/2512.23847
Academic Papers
svg
cd90eaadb33616dee96ed7ba875c3eb57bb8ccccb97cfe35e70ba6fa05471e3f
2026-01-01T00:00:00-05:00
Autoregressive long-horizon prediction of plasma edge dynamics
arXiv:2512.23884v1 Announce Type: cross Abstract: Accurate modeling of scrape-off layer (SOL) and divertor-edge dynamics is vital for designing plasma-facing components in fusion devices. High-fidelity edge fluid/neutral codes such as SOLPS-ITER capture SOL physics with high accuracy, but their computational cost limits broad parameter scans and long transient studies. We present transformer-based, autoregressive surrogates for efficient prediction of 2D, time-dependent plasma edge state fields. Trained on SOLPS-ITER spatiotemporal data, the surrogates forecast electron temperature, electron density, and radiated power over extended horizons. We evaluate model variants trained with increasing autoregressive horizons (1-100 steps) on short- and long-horizon prediction tasks. Longer-horizon training systematically improves rollout stability and mitigates error accumulation, enabling stable predictions over hundreds to thousands of steps and reproducing key dynamical features such as the motion of high-radiation regions. Measured end-to-end wall-clock times show the surrogate is orders of magnitude faster than SOLPS-ITER, enabling rapid parameter exploration. Prediction accuracy degrades when the surrogate enters physical regimes not represented in the training dataset, motivating future work on data enrichment and physics-informed constraints. Overall, this approach provides a fast, accurate surrogate for computationally intensive plasma edge simulations, supporting rapid scenario exploration, control-oriented studies, and progress toward real-time applications in fusion devices.
https://arxiv.org/abs/2512.23884
Academic Papers
svg
bb040399f5bd7e8c66abdbd6d3f052af75a0a506e8cd130f4e75393446889050
2026-01-01T00:00:00-05:00
A multimodal Transformer for InSAR-based ground deformation forecasting with cross-site generalization across Europe
arXiv:2512.23906v1 Announce Type: cross Abstract: Near-real-time regional-scale monitoring of ground deformation is increasingly required to support urban planning, critical infrastructure management, and natural hazard mitigation. While Interferometric Synthetic Aperture Radar (InSAR) and continental-scale services such as the European Ground Motion Service (EGMS) provide dense observations of past motion, predicting the next observation remains challenging due to the superposition of long-term trends, seasonal cycles, and occasional abrupt discontinuities (e.g., co-seismic steps), together with strong spatial heterogeneity. In this study we propose a multimodal patch-based Transformer for single-step, fixed-interval next-epoch nowcasting of displacement maps from EGMS time series (resampled to a 64x64 grid over 100 km x 100 km tiles). The model ingests recent displacement snapshots together with (i) static kinematic indicators (mean velocity, acceleration, seasonal amplitude) computed in a leakage-safe manner from the training window only, and (ii) harmonic day-of-year encodings. On the eastern Ireland tile (E32N34), a spatio-temporal graph convolutional network (STGCN) baseline is strongest in the displacement-only setting, whereas the multimodal Transformer clearly outperforms CNN-LSTM, CNN-LSTM+Attn, and multimodal STGCN when all models receive the same multimodal inputs, achieving RMSE = 0.90 mm and $R^2$ = 0.97 on the test set with the best threshold accuracies.
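A minimal sketch of the harmonic day-of-year encoding used as a temporal input above: sin/cos pairs make the seasonal cycle continuous across year boundaries. The 365.25-day period is the usual convention, an assumption rather than a detail taken from the paper.

# Harmonic (sin/cos) day-of-year encoding.
import numpy as np

def doy_encoding(day_of_year: np.ndarray, period: float = 365.25) -> np.ndarray:
    """Return an (N, 2) array of [sin, cos] encodings of the day of year."""
    angle = 2.0 * np.pi * day_of_year / period
    return np.stack([np.sin(angle), np.cos(angle)], axis=-1)

days = np.array([1, 91, 182, 274, 365])
print(np.round(doy_encoding(days), 3))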
https://arxiv.org/abs/2512.23906
Academic Papers
svg
e7b2b89ba6f959eebda62abb5cc86feab78ae955bf3d32b7510fbb09bd482cc8
2026-01-01T00:00:00-05:00
Tensor Computing Interface: An Application-Oriented, Lightweight Interface for Portable High-Performance Tensor Network Applications
arXiv:2512.23917v1 Announce Type: cross Abstract: Tensor networks (TNs) are a central computational tool in quantum science and artificial intelligence. However, the lack of unified software interface across tensor-computing frameworks severely limits the portability of TN applications, coupling algorithmic development to specific hardware and software back ends. To address this challenge, we introduce the Tensor Computing Interface (TCI) -- an application-oriented, lightweight application programming interface designed to enable framework-independent, high-performance TN applications. TCI provides a well-defined type system that abstracts tensor objects together with a minimal yet expressive set of core functions covering essential tensor manipulations and tensor linear-algebra operations. Through numerical demonstrations on representative tensor-network applications, we show that codes written against TCI can be migrated seamlessly across heterogeneous hardware and software platforms while achieving performance comparable to native framework implementations. We further release an open-source implementation of TCI based on \textit{Cytnx}, demonstrating its practicality and ease of integration with existing tensor-computing frameworks.
https://arxiv.org/abs/2512.23917
Academic Papers
svg
5efcfaf6f3d736c8b0c1de3c37cde6d0ce250919b6c2822108eaf63517d06ca8
2026-01-01T00:00:00-05:00
Stationary Reweighting Yields Local Convergence of Soft Fitted Q-Iteration
arXiv:2512.23927v1 Announce Type: cross Abstract: Fitted Q-iteration (FQI) and its entropy-regularized variant, soft FQI, are central tools for value-based model-free offline reinforcement learning, but can behave poorly under function approximation and distribution shift. In the entropy-regularized setting, we show that the soft Bellman operator is locally contractive in the stationary norm of the soft-optimal policy, rather than in the behavior norm used by standard FQI. This geometric mismatch explains the instability of soft Q-iteration with function approximation in the absence of Bellman completeness. To restore contraction, we introduce stationary-reweighted soft FQI, which reweights each regression update using the stationary distribution of the current policy. We prove local linear convergence under function approximation with geometrically damped weight-estimation errors, assuming approximate realizability. Our analysis further suggests that global convergence may be recovered by gradually reducing the softmax temperature, and that this continuation approach can extend to the hardmax limit under a mild margin condition.
https://arxiv.org/abs/2512.23927
Academic Papers
svg
0aad164e37b48119dcb7d969024a8d2c822e7344e7a0086dd46ca369dc700399
2026-01-01T00:00:00-05:00
Assessing generative modeling approaches for free energy estimates in condensed matter
arXiv:2512.23930v1 Announce Type: cross Abstract: The accurate estimation of free energy differences between two states is a long-standing challenge in molecular simulations. Traditional approaches generally rely on sampling multiple intermediate states to ensure sufficient overlap in phase space and are, consequently, computationally expensive. Several generative-model-based methods have recently addressed this challenge by learning a direct bridge between distributions, bypassing the need for intermediate states. However, it remains unclear which approaches provide the best trade-off between efficiency, accuracy, and scalability. In this work, we systematically review these methods and benchmark selected approaches with a focus on condensed-matter systems. In particular, we investigate the performance of discrete and continuous normalizing flows in the context of targeted free energy perturbation as well as FEAT (Free energy Estimators with Adaptive Transport) together with the escorted Jarzynski equality, using coarse-grained monatomic ice and Lennard-Jones solids as benchmark systems. We evaluate accuracy, data efficiency, computational cost, and scalability with system size. Our results provide a quantitative framework for selecting effective free energy estimation strategies in condensed-phase systems.
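For reference, a minimal sketch of the exponential-average estimator underlying free energy perturbation and the Jarzynski equality benchmarked above, $\Delta F = -kT \ln \langle e^{-W/kT} \rangle$, computed stably with logsumexp; the Gaussian work values are synthetic placeholders.

# Jarzynski / Zwanzig-style free energy estimate from work samples.
import numpy as np
from scipy.special import logsumexp

kT = 1.0
work = np.random.default_rng(2).normal(loc=2.0, scale=1.0, size=10_000)

# For Gaussian work, DeltaF = <W> - var(W) / (2 kT) = 2.0 - 0.5 = 1.5 here.
delta_F = -kT * (logsumexp(-work / kT) - np.log(work.size))
print("estimated DeltaF:", round(float(delta_F), 3))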
https://arxiv.org/abs/2512.23930
Academic Papers
svg
b7e10b6ed4acbdcf10822e51bfa41a68533957f4617e04cbab767fe8ec353ca5
2026-01-01T00:00:00-05:00
Implicit geometric regularization in flow matching via density weighted Stein operators
arXiv:2512.23956v1 Announce Type: cross Abstract: Flow Matching (FM) has emerged as a powerful paradigm for continuous normalizing flows, yet standard FM implicitly performs an unweighted $L^2$ regression over the entire ambient space. In high dimensions, this leads to a fundamental inefficiency: the vast majority of the integration domain consists of low-density ``void'' regions where the target velocity fields are often chaotic or ill-defined. In this paper, we propose {$\gamma$-Flow Matching ($\gamma$-FM)}, a density-weighted variant that aligns the regression geometry with the underlying probability flow. While density weighting is desirable, naive implementations would require evaluating the intractable target density. We circumvent this by introducing a Dynamic Density-Weighting strategy that estimates the \emph{target} density directly from training particles. This approach allows us to dynamically downweight the regression loss in void regions without compromising the simulation-free nature of FM. Theoretically, we establish that $\gamma$-FM minimizes the transport cost on a statistical manifold endowed with the $\gamma$-Stein metric. Spectral analysis further suggests that this geometry induces an implicit Sobolev regularization, effectively damping high-frequency oscillations in void regions. Empirically, $\gamma$-FM significantly improves vector field smoothness and sampling efficiency on high-dimensional latent datasets, while demonstrating intrinsic robustness to outliers.
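A minimal PyTorch sketch of a density-weighted flow matching loss in the spirit of $\gamma$-FM: the conditional FM regression residual is reweighted per sample so that low-density "void" regions contribute less. The uniform random weights stand in for the paper's dynamic density-weighting estimates.

# Per-sample weighted L2 flow matching loss.
import torch

def weighted_fm_loss(v_pred: torch.Tensor,
                     u_target: torch.Tensor,
                     weights: torch.Tensor) -> torch.Tensor:
    """Density-weighted squared residual between predicted and target velocity."""
    residual = (v_pred - u_target).pow(2).sum(dim=-1)   # shape: (batch,)
    return (weights * residual).mean()

batch, dim = 64, 16
v_pred = torch.randn(batch, dim, requires_grad=True)    # stand-in v_theta(x_t, t)
u_target = torch.randn(batch, dim)                      # stand-in target velocity
weights = torch.rand(batch)                             # stand-in density weights
loss = weighted_fm_loss(v_pred, u_target, weights)
loss.backward()
print("loss:", float(loss))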
https://arxiv.org/abs/2512.23956
Academic Papers
svg
70983fc113ec2a4d4d5f153a497446cbb1c7fe11c46b0148678dcfbd255f5c95
2026-01-01T00:00:00-05:00
Fundamental limits for weighted empirical approximations of tilted distributions
arXiv:2512.23979v1 Announce Type: cross Abstract: Consider the task of generating samples from a tilted distribution of a random vector whose underlying distribution is unknown, but samples from it are available. This finds applications in fields such as finance and climate science, and in rare event simulation. In this article, we discuss the asymptotic efficiency of a self-normalized importance sampler of the tilted distribution. We provide a sharp characterization of its accuracy, given the number of samples and the degree of tilt. Our findings reveal a surprising dichotomy: while the number of samples needed to accurately tilt a bounded random vector increases polynomially in the tilt amount, it increases at a super-polynomial rate for unbounded distributions.
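A minimal sketch of the self-normalized importance sampler analyzed above: weights proportional to $e^{\theta x}$ turn samples from the base distribution into estimates under the tilted one. For a standard Gaussian base, the tilted mean is exactly $\theta$, which the toy run below recovers.

# Self-normalized importance sampling for an exponential tilt.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=10_000)          # samples from the base distribution
theta = 1.0                          # tilt amount

logw = theta * x
w = np.exp(logw - logw.max())        # stabilized unnormalized weights
w /= w.sum()                         # self-normalization

# The exp(theta*x)-tilt of N(0,1) is N(theta,1), so the true mean is 1.0.
print("estimated tilted mean:", round(float(np.sum(w * x)), 3))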
https://arxiv.org/abs/2512.23979
Academic Papers
svg
200cdc6a6d441663b51c068ae2a8c6b125e8e0e880fbe83b02347a7bda12b104
2026-01-01T00:00:00-05:00
One-Shot Structured Pruning of Quantum Neural Networks via $q$-Group Engineering and Quantum Geometric Metrics
arXiv:2512.24019v1 Announce Type: cross Abstract: Quantum neural networks (QNNs) suffer from severe gate-level redundancy, which hinders their deployment on noisy intermediate-scale quantum (NISQ) devices. In this work, we propose q-iPrune, a one-shot structured pruning framework grounded in the algebraic structure of $q$-deformed groups and task-conditioned quantum geometry. Unlike prior heuristic or gradient-based pruning methods, q-iPrune formulates redundancy directly at the gate level. Each gate is compared within an algebraically consistent subgroup using a task-conditioned $q$-overlap distance, which measures functional similarity through state overlaps on a task-relevant ensemble. A gate is removed only when its replacement by a subgroup representative provably induces a bounded deviation on all task observables. We establish three rigorous theoretical guarantees. First, we prove completeness of redundancy pruning: no gate that violates the prescribed similarity threshold is removed. Second, we show that the pruned circuit is functionally equivalent up to an explicit, task-conditioned error bound, with a closed-form dependence on the redundancy tolerance and the number of replaced gates. Third, we prove that the pruning procedure is computationally feasible, requiring only polynomial-time comparisons and avoiding exponential enumeration over the Hilbert space. To adapt pruning decisions to hardware imperfections, we introduce a noise-calibrated deformation parameter $\lambda$ that modulates the $q$-geometry and redundancy tolerance. Experiments on standard quantum machine learning benchmarks demonstrate that q-iPrune achieves substantial gate reduction while maintaining bounded task performance degradation, consistent with our theoretical guarantees.
https://arxiv.org/abs/2512.24019
Academic Papers
svg
5b07c7e68f75f0816a1f9051cdc6d1c4ca2f25985e869e7c77c2d42d5cb1fec0
2026-01-01T00:00:00-05:00
Exposed: Shedding Blacklight on Online Privacy
arXiv:2512.24041v1 Announce Type: cross Abstract: To what extent are users surveilled on the web, by what technologies, and by whom? We answer these questions by combining passively observed, anonymized browsing data of a large, representative sample of Americans with domain-level data on tracking from Blacklight. We find that nearly all users ($ > 99\%$) encounter at least one ad tracker or third-party cookie over the observation window. More invasive techniques like session recording, keylogging, and canvas fingerprinting are less widespread, but over half of the users visited a site employing at least one of these within the first 48 hours of the start of tracking. Linking trackers to their parent organizations reveals that a single organization, usually Google, can track over $50\%$ of web activity of more than half the users. Demographic differences in exposure are modest and often attenuate when we account for browsing volume. However, disparities by age and race remain, suggesting that what users browse, not just how much, shapes their surveillance risk.
https://arxiv.org/abs/2512.24041
Academic Papers
svg
29ac61b9bb883b84ecd7a885c9d9ace7292bfcde376c0709b459a00fd58c41ae
2026-01-01T00:00:00-05:00
$L^p$ Estimates for Numerical Approximation of Hamilton-Jacobi Equations
arXiv:2512.24051v1 Announce Type: cross Abstract: We establish $L^p$ error estimates for monotone numerical schemes approximating Hamilton-Jacobi equations on the $d$-dimensional torus. Using the adjoint method, we first prove a $L^1$ error bound of order one for finite-difference and semi-Lagrangian schemes under standard convexity assumptions on the Hamiltonian. By interpolation, we also obtain $L^p$ estimates for every finite $p>1$. Our analysis covers a broad class of schemes, improves several existing results, and provides a unified framework for discrete error estimates.
https://arxiv.org/abs/2512.24051
Academic Papers
svg
509144ef12fec955142cf3e887a3231409e31f46f7e217fd1ab2a6e7c77e9f23
2026-01-01T00:00:00-05:00
Policy Mirror Descent with Temporal Difference Learning: Sample Complexity under Online Markov Data
arXiv:2512.24056v1 Announce Type: cross Abstract: This paper studies the policy mirror descent (PMD) method, which is a general policy optimization framework in reinforcement learning and can cover a wide range of policy gradient methods by specifying different mirror maps. Existing sample complexity analyses for policy mirror descent focus either on the generative sampling model, or on the Markovian sampling model but with the action values being explicitly approximated to a certain pre-specified accuracy. In contrast, we consider the sample complexity of policy mirror descent with temporal difference (TD) learning under the Markovian sampling model. Two algorithms called Expected TD-PMD and Approximate TD-PMD have been presented, which are off-policy and mixed policy algorithms respectively. Under a small enough constant policy update step size, the $\tilde{O}(\varepsilon^{-2})$ (a logarithm factor about $\varepsilon$ is hidden in $\tilde{O}(\cdot)$) sample complexity can be established for them to achieve average-time $\varepsilon$-optimality. The sample complexity is further improved to $O(\varepsilon^{-2})$ (without the hidden logarithm factor) to achieve the last-iterate $\varepsilon$-optimality based on adaptive policy update step sizes.
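For concreteness, a minimal sketch of one PMD update with the KL mirror map, the softmax/multiplicative-weights instance covered by this framework: $\pi'(a|s) \propto \pi(a|s)\,e^{\eta Q(s,a)}$. The Q-values below are synthetic placeholders for the TD-learned estimates.

# One policy mirror descent step with the KL mirror map.
import numpy as np

def pmd_step(pi: np.ndarray, Q: np.ndarray, eta: float) -> np.ndarray:
    """One PMD update per state; rows of pi are distributions over actions."""
    logits = np.log(pi) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

pi = np.full((3, 4), 0.25)                            # 3 states, 4 actions, uniform
Q = np.random.default_rng(4).normal(size=(3, 4))      # stand-in action values
print(np.round(pmd_step(pi, Q, eta=0.5), 3))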
https://arxiv.org/abs/2512.24056
Academic Papers
svg
086b6d201f03e54d11b9029dda9a3edf364eadd52a2c57236764dc94098c109e
2026-01-01T00:00:00-05:00
Notes on the 33-point Erd\H{o}s--Szekeres problem
arXiv:2512.24061v1 Announce Type: cross Abstract: The determination of $ES(7)$ is the first open case of the planar Erd\H{o}s--Szekeres problem, where the general conjecture predicts $ES(7)=33$. We present a SAT encoding for the 33-point case based on triple-orientation variables and a 4-set convexity criterion for excluding convex 7-gons, together with convex-layer anchoring constraints. The framework yields UNSAT certificates for a collection of anchored subfamilies. We also report pronounced runtime variability across configurations, including heavy-tailed behavior that currently dominates the computational effort and motivates further encoding refinements.
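A minimal sketch of the triple-orientation predicate on which such SAT encodings rest: the sign of a 2x2 determinant classifies three points as a left turn, right turn, or collinear.

# Orientation of an ordered point triple via a determinant sign.
def orientation(p, q, r) -> int:
    """+1 counter-clockwise, -1 clockwise, 0 collinear."""
    det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (det > 0) - (det < 0)

print(orientation((0, 0), (1, 0), (0, 1)))   # +1: left turn
print(orientation((0, 0), (1, 0), (2, 0)))   #  0: collinear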
https://arxiv.org/abs/2512.24061
Academic Papers
svg
eef5690e07ab1ab35551a10e32dc3e89b49a42554c19cfb6c3e422c6d12cbf17
2026-01-01T00:00:00-05:00
Constructive Approximation of Random Process via Stochastic Interpolation Neural Network Operators
arXiv:2512.24106v1 Announce Type: cross Abstract: In this paper, we construct a class of stochastic interpolation neural network operators (SINNOs) with random coefficients activated by sigmoidal functions. We establish their boundedness, interpolation accuracy, and approximation capabilities in the mean square sense, in probability, as well as path-wise within the space of second-order stochastic (random) processes \( L^2(\Omega, \mathcal{F},\mathbb{P}) \). Additionally, we provide quantitative error estimates using the modulus of continuity of the processes. These results highlight the effectiveness of SINNOs for approximating stochastic processes with potential applications in COVID-19 case prediction.
https://arxiv.org/abs/2512.24106
Academic Papers
svg
a992bdca68bcda491953fb16d1a15052f6084eddcc86d494df09edacaf516f70
2026-01-01T00:00:00-05:00
Dominion of some graphs
arXiv:2512.24115v1 Announce Type: cross Abstract: Given a graph $G=(V,E)$, a subset $S \subseteq V$ is a dominating set if every vertex in $V \setminus S$ is adjacent to some vertex in $S$. A dominating set of least cardinality $\gamma$ is called a $\gamma$-set, commonly known as a minimum dominating set. The dominion of a graph $G$, denoted by $\zeta(G)$, is the number of its $\gamma$-sets. Some relations between these two seemingly distinct parameters are established. In particular, we present the dominions of paths, some cycles and the join of any two graphs.
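A minimal brute-force sketch of both parameters for small graphs: enumerate vertex subsets by increasing size, keep the dominating ones, and count the $\gamma$-sets. The path $P_4$ below is an illustrative example, not one of the paper's results.

# Brute-force domination number gamma(G) and dominion zeta(G).
from itertools import combinations

def gamma_and_zeta(n: int, edges: set):
    adj = {v: {u for e in edges if v in e for u in e if u != v} for v in range(n)}
    for k in range(1, n + 1):
        gamma_sets = [S for S in combinations(range(n), k)
                      if all(v in S or adj[v] & set(S) for v in range(n))]
        if gamma_sets:                 # smallest size with a dominating set
            return k, len(gamma_sets)  # (gamma, zeta)

# Path P4: 0-1-2-3. Known: gamma = 2, with four gamma-sets.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
print(gamma_and_zeta(4, edges))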
https://arxiv.org/abs/2512.24115
Academic Papers
svg
6b22d1b414bf1753de41653db9cd7efe6363def7e6bf55e70b60b42029e63385
2026-01-01T00:00:00-05:00
Quantitative Understanding of PDF Fits and their Uncertainties
arXiv:2512.24116v1 Announce Type: cross Abstract: Parton Distribution Functions (PDFs) play a central role in describing experimental data at colliders and provide insight into the structure of nucleons. As the LHC enters an era of high-precision measurements, a robust PDF determination with a reliable uncertainty quantification has become mandatory in order to match the experimental precision. The NNPDF collaboration has pioneered the use of Machine Learning (ML) techniques for PDF determinations, using Neural Networks (NNs) to parametrise the unknown PDFs in a flexible and unbiased way. The NNs are then trained on experimental data by means of stochastic gradient descent algorithms. The statistical robustness of the results is validated by extensive closure tests using synthetic data. In this work, we develop a theoretical framework based on the Neural Tangent Kernel (NTK) to analyse the training dynamics of neural networks. This approach allows us to derive, under precise assumptions, an analytical description of the neural network evolution during training, enabling a quantitative understanding of the training process. Having an analytical handle on the training dynamics allows us to clarify the role of the NN architecture and the impact of the experimental data in a transparent way. Similarly, we are able to describe the evolution of the covariance of the NN output during training, providing a quantitative description of how uncertainties are propagated from the data to the fitted function. While our results are not a substitute for PDF fitting, they do provide a powerful diagnostic tool to assess the robustness of current fitting methodologies. Beyond its relevance for particle physics phenomenology, our analysis of PDF determinations provides a testbed to apply theoretical ideas about the learning process developed in the ML community.
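For reference, the neural tangent kernel at the center of this analysis is the standard object
$$\Theta(x, x') = \nabla_\theta f_\theta(x) \cdot \nabla_\theta f_\theta(x'),$$
and in the linearized (NTK) regime, gradient flow on a squared-error loss drives the outputs on the training inputs as $\dot{f}_t = -\Theta\,(f_t - y)$, so $f_t = y + e^{-\Theta t}(f_0 - y)$: the kind of closed-form training dynamics the abstract exploits.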
https://arxiv.org/abs/2512.24116
Academic Papers
svg
272ddb6862425c4bdc9f213abfaa67ee316266c5166482937f2d863e6a6ae81d
2026-01-01T00:00:00-05:00
Targeted Semantic Segmentation of Himalayan Glacial Lakes Using Time-Series SAR: Towards Automated GLOF Early Warning
arXiv:2512.24117v1 Announce Type: cross Abstract: Glacial Lake Outburst Floods (GLOFs) are one of the most devastating climate change induced hazards. Existing remote monitoring approaches often prioritise maximising spatial coverage to train generalistic models or rely on optical imagery hampered by persistent cloud coverage. This paper presents an end-to-end, automated deep learning pipeline for the targeted monitoring of high-risk Himalayan glacial lakes using time-series Sentinel-1 SAR. We introduce a "temporal-first" training strategy, utilising a U-Net with an EfficientNet-B3 backbone trained on a curated dataset of a cohort of 4 lakes (Tsho Rolpa, Chamlang Tsho, Tilicho and Gokyo Lake). The model achieves an IoU of 0.9130 validating the success and efficacy of the "temporal-first" strategy required for transitioning to Early Warning Systems. Beyond the model, we propose an operational engineering architecture: a Dockerised pipeline that automates data ingestion via the ASF Search API and exposes inference results via a RESTful endpoint. This system shifts the paradigm from static mapping to dynamic and automated early warning, providing a scalable architectural foundation for future development in Early Warning Systems.
https://arxiv.org/abs/2512.24117
Academic Papers
svg
e256d8805b54841a6336e6244648595bbc42beda94b58ae499a027556f27c832
2026-01-01T00:00:00-05:00
Score-based sampling without diffusions: Guidance from a simple and modular scheme
arXiv:2512.24152v1 Announce Type: cross Abstract: Sampling based on score diffusions has led to striking empirical results, and has attracted considerable attention from various research communities. It depends on availability of (approximate) Stein score functions for various levels of additive noise. We describe and analyze a modular scheme that reduces score-based sampling to solving a short sequence of ``nice'' sampling problems, for which high-accuracy samplers are known. We show how to design forward trajectories such that both (a) the terminal distribution, and (b) each of the backward conditional distribution is defined by a strongly log concave (SLC) distribution. This modular reduction allows us to exploit \emph{any} SLC sampling algorithm in order to traverse the backwards path, and we establish novel guarantees with short proofs for both uni-modal and multi-modal densities. The use of high-accuracy routines yields $\varepsilon$-accurate answers, in either KL or Wasserstein distances, with polynomial dependence on $\log(1/\varepsilon)$ and $\sqrt{d}$ dependence on the dimension.
https://arxiv.org/abs/2512.24152
Academic Papers
svg
0f1325df7018425e8d102da733439a87064c4aa97d1d5142fe78b173b4874e36
2026-01-01T00:00:00-05:00
Discovering Optimal Robust Minimum Redundancy Arrays (RMRAs) through Exhaustive Search and Algebraic Formulation of a New Sub-Optimal RMRA
arXiv:2512.24155v1 Announce Type: cross Abstract: Modern sparse arrays are maximally economic in that they retain only as many sensors as are required to provide a specific aperture while maintaining a hole-free difference coarray. As a result, these are susceptible to the failure of even a single sensor. By contrast, two-fold redundant sparse arrays (TFRSAs) and robust minimum redundancy arrays (RMRAs) ensure robustness against single-sensor failures due to the inherent redundancy in their coarrays. At present, optimal RMRA configurations are known only for arrays with sensor counts N=6 to N=10. To this end, this paper proposes two objectives: (i) developing a systematic algorithm to discover optimal RMRAs for N>10, and (ii) obtaining a new family of near-/sub-optimal RMRA that can be completely specified using closed-form expressions (CFEs). We solve the combinatorial optimization problem of finding RMRAs using an exhaustive search technique implemented in MATLAB. Optimal RMRAs for N = 11 to 14 were successfully found and near/sub-optimal arrays for N = 15 to 20 were determined using the proposed technique. As a byproduct of the exhaustive search, a large catalogue of valid near- and sub-optimal RMRAs was also obtained. In the second stage, CFEs for a new TFRSA were obtained by applying pattern mining and algebraic generalizations to the arrays obtained through exhaustive search. The proposed family enjoys CFEs for sensor positions, available aperture, and achievable degrees of freedom (DOFs). The CFEs have been thoroughly validated using MATLAB and are found to be valid for $N\geq8$. Hence, it can be concluded that the novelty of this work is two-fold: extending the catalogue of known optimal RMRAs and formulating a sub-optimal RMRA that abides by CFEs.
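A minimal sketch of the difference-coarray computation at the heart of this design problem: collect all pairwise position differences, check that every lag up to the aperture is present (hole-free), and read off each lag's multiplicity, which is what robustness to a single sensor failure constrains. The array below is the classic minimum-redundancy example {0, 1, 4, 6}, not one of the paper's RMRAs.

# Difference coarray, hole-free check, and lag multiplicities.
from collections import Counter

def difference_coarray(positions):
    return Counter(a - b for a in positions for b in positions)

sensors = [0, 1, 4, 6]                      # a classic minimum-redundancy array
coarray = difference_coarray(sensors)
aperture = max(sensors)
hole_free = all(lag in coarray for lag in range(-aperture, aperture + 1))
print("hole-free:", hole_free)
print("multiplicity of each lag:", dict(sorted(coarray.items())))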
https://arxiv.org/abs/2512.24155
Academic Papers
svg
eb8f1cfe10dbb61dea681137fc45ce312d3e8b04b61f1a68f387ce910fb7ecbf
2026-01-01T00:00:00-05:00
Variational Quantum Brushes
arXiv:2512.24173v1 Announce Type: cross Abstract: Quantum brushes are computational arts software introduced by Ferreira et al (2025) that leverage quantum behavior to generate novel artistic effects. In this outreach paper, we introduce the mathematical framework and describe the implementation of two quantum brushes based on variational quantum algorithms, Steerable and Chemical. While Steerable uses quantum geometric control theory to merge two works of art, Chemical mimics variational eigensolvers for estimating molecular ground energies to evolve colors on an underlying canvas. The implementation of both brushes is available open-source at https://github.com/moth-quantum/QuantumBrush and is fully compatible with the original quantum brushes.
https://arxiv.org/abs/2512.24173
Academic Papers
svg
4d4d15b0db485ea4d2c2476ca571c7d84ec90a40aed1afef13fe10d5328d0f84
2026-01-01T00:00:00-05:00
Fast reconstruction-based ROI triggering via anomaly detection in the CYGNO optical TPC
arXiv:2512.24290v1 Announce Type: cross Abstract: Optical-readout Time Projection Chambers (TPCs) produce megapixel-scale images whose fine-grained topological information is essential for rare-event searches, but whose size challenges real-time data selection. We present an unsupervised, reconstruction-based anomaly-detection strategy for fast Region-of-Interest (ROI) extraction that operates directly on minimally processed camera frames. A convolutional autoencoder trained exclusively on pedestal images learns the detector noise morphology without labels, simulation, or fine-grained calibration. Applied to standard data-taking frames, localized reconstruction residuals identify particle-induced structures, from which compact ROIs are extracted via thresholding and spatial clustering. Using real data from the CYGNO optical TPC prototype, we compare two pedestal-trained autoencoder configurations that differ only in their training objective, enabling a controlled study of its impact. The best configuration retains (93.0 +/- 0.2)% of reconstructed signal intensity while discarding (97.8 +/- 0.1)% of the image area, with an inference time of approximately 25 ms per frame on a consumer GPU. The results demonstrate that careful design of the training objective is critical for effective reconstruction-based anomaly detection and that pedestal-trained autoencoders provide a transparent and detector-agnostic baseline for online data reduction in optical TPCs.
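A minimal sketch of the ROI-extraction step described above, assuming the autoencoder residual image is already available: threshold the residual, label connected components, and return bounding boxes; scipy.ndimage supplies the spatial clustering. The synthetic residual and injected blob are illustrative only.

# Threshold reconstruction residuals and extract ROI bounding boxes.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
residual = rng.normal(0.0, 1.0, size=(128, 128))   # stand-in |frame - AE(frame)|
residual[40:60, 70:100] += 6.0                     # injected "track"

mask = residual > residual.mean() + 4 * residual.std()
labels, n_clusters = ndimage.label(mask)           # spatial clustering
rois = ndimage.find_objects(labels)                # slices = bounding boxes

print(f"{n_clusters} ROI(s):")
for sl in rois:
    print("  rows", (sl[0].start, sl[0].stop), "cols", (sl[1].start, sl[1].stop))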
https://arxiv.org/abs/2512.24290
Academic Papers
svg
d22cf33a6362e9382c4b3ae200307870de9f2a13710252018c805effc758199c
2026-01-01T00:00:00-05:00
On maximum distance separable and completely regular codes
arXiv:2512.24292v1 Announce Type: cross Abstract: We investigate when a maximum distance separable ($MDS$) code over $F_q$ is also completely regular ($CR$). For lengths $n=q+1$ and $n=q+2$ we provide a complete classification of the $MDS$ codes that are $CR$ or at least uniformly packed in the wide sense ($UPWS$). For the more restricted case $n\leq q$ with $q\leq 5$ we obtain a full classification (up to equivalence) of all nontrivial $MDS$ codes: there are none for $q=2$; only the ternary Hamming code for $q=3$; four nontrivial families for $q=4$; and exactly six linear $MDS$ codes for $q=5$ (three of which are $CR$ and one admits a self-dual version). Additionally, we close two gaps left open in a previous classification of self-dual $CR$ codes with covering radius $\rho\leq 3$: we precisely determine over which finite fields the $MDS$ self-dual completely regular codes with parameters $[2,1,2]_q$ and $[4,2,3]_q$ exist.
https://arxiv.org/abs/2512.24292
Academic Papers
svg
316235a9f2f38b4f38b13d0b6764206af93f91c3a94daf6d416bd66d8a155b20
2026-01-01T00:00:00-05:00
Generative Video Compression: Towards 0.01% Compression Rate for Video Transmission
arXiv:2512.24300v1 Announce Type: cross Abstract: Can a video be compressed at an extreme compression rate as low as 0.01%? We achieve a compression rate of 0.02% in some cases by introducing Generative Video Compression (GVC), a new framework that redefines the limits of video compression by leveraging modern generative video models to achieve extreme compression rates while preserving a perception-centric, task-oriented communication paradigm, corresponding to Level C of the Shannon-Weaver model. How can we trade computation for compression rate or bandwidth? GVC answers this question by shifting the burden from transmission to inference: it encodes video into extremely compact representations and delegates content reconstruction to the receiver, where powerful generative priors synthesize high-quality video from minimal transmitted information. Is GVC practical and deployable? To ensure practical deployment, we propose a compression-computation trade-off strategy, enabling fast inference on consumer-grade GPUs. Within the AI Flow framework, GVC opens new possibilities for video communication in bandwidth- and resource-constrained environments such as emergency rescue, remote surveillance, and mobile edge computing. Through empirical validation, we demonstrate that GVC offers a viable path toward a new effective, efficient, scalable, and practical video communication paradigm.
https://arxiv.org/abs/2512.24300
Academic Papers
svg
bf9d96a9cd467be703851f3fc0d815e55c9c027c4a6ef6b76193f6c43dc0c354
2026-01-01T00:00:00-05:00
Topological Spatial Graph Coarsening
arXiv:2512.24327v1 Announce Type: cross Abstract: Spatial graphs are particular graphs for which the nodes are localized in space (e.g., public transport network, molecules, branching biological structures). In this work, we consider the problem of spatial graph reduction, that aims to find a smaller spatial graph (i.e., with fewer nodes) with the same overall structure as the initial one. In this context, performing the graph reduction while preserving the main topological features of the initial graph is particularly relevant, due to the additional spatial information. Thus, we propose a topological spatial graph coarsening approach based on a new framework that finds a trade-off between the graph reduction and the preservation of the topological characteristics. The coarsening is realized by collapsing short edges. In order to capture the topological information required to calibrate the reduction level, we adapt the construction of classical topological descriptors made for point clouds (the so-called persistence diagrams) to spatial graphs. This construction relies on the introduction of a new filtration called triangle-aware graph filtration. Our coarsening approach is parameter-free and we prove that it is equivariant under rotations, translations and scaling of the initial spatial graph. We evaluate the performances of our method on synthetic and real spatial graphs, and show that it significantly reduces the graph sizes while preserving the relevant topological information.
https://arxiv.org/abs/2512.24327
Academic Papers
svg
37c14e3894213ca9cf93b62ac9bc332e2907a53c5b34c3ded0b304752d952858
2026-01-01T00:00:00-05:00
OptiVote: Non-Coherent FSO Over-the-Air Majority Vote for Communication-Efficient Distributed Federated Learning in Space Data Centers
arXiv:2512.24334v1 Announce Type: cross Abstract: The rapid deployment of mega-constellations is driving the long-term vision of space data centers (SDCs), where interconnected satellites form in-orbit distributed computing and learning infrastructures. Enabling distributed federated learning in such systems is challenging because iterative training requires frequent aggregation over inter-satellite links that are bandwidth- and energy-constrained, and the link conditions can be highly dynamic. In this work, we exploit over-the-air computation (AirComp) as an in-network aggregation primitive. However, conventional coherent AirComp relies on stringent phase alignment, which is difficult to maintain in space environments due to satellite jitter and Doppler effects. To overcome this limitation, we propose OptiVote, a robust and communication-efficient non-coherent free-space optical (FSO) AirComp framework for federated learning toward Space Data Centers. OptiVote integrates sign stochastic gradient descent (signSGD) with a majority-vote (MV) aggregation principle and pulse-position modulation (PPM), where each satellite conveys local gradient signs by activating orthogonal PPM time slots. The aggregation node performs MV detection via non-coherent energy accumulation, transforming phase-sensitive field superposition into phase-agnostic optical intensity combining, thereby eliminating the need for precise phase synchronization and improving resilience under dynamic impairments. To mitigate aggregation bias induced by heterogeneous FSO channels, we further develop an importance-aware, channel state information (CSI)-free dynamic power control scheme that balances received energies without additional signaling. We provide theoretical analysis by characterizing the aggregate error probability under statistical FSO channels and establishing convergence guarantees for non-convex objectives.
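A minimal sketch of the non-coherent majority-vote aggregation idea: each satellite deposits energy in one of two PPM slots according to its local gradient sign, and the aggregator compares summed slot energies, using no phase information. Channel gains and noise levels below are illustrative placeholders, not the paper's channel model.

# Non-coherent, energy-domain majority vote over PPM slots.
import numpy as np

rng = np.random.default_rng(6)
n_sat, n_params = 25, 8
local_signs = rng.choice([-1, 1], size=(n_sat, n_params))

# Slot energies: slot "+" for sign +1, slot "-" for sign -1, with random
# (unknown) channel gains and small additive receiver noise.
gains = rng.uniform(0.5, 1.5, size=(n_sat, 1))
slot_pos = gains * (local_signs == 1) + 0.05 * rng.random((n_sat, n_params))
slot_neg = gains * (local_signs == -1) + 0.05 * rng.random((n_sat, n_params))

mv = np.sign(slot_pos.sum(axis=0) - slot_neg.sum(axis=0))  # energy-domain MV
true_mv = np.sign(local_signs.sum(axis=0))                 # ideal majority vote
print("agreement with ideal majority vote:", float((mv == true_mv).mean()))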
https://arxiv.org/abs/2512.24334
Academic Papers
svg
cc0c3153036f841b828ba144e6520a01f483efb8db7e8119380fbfb63685f53e
2026-01-01T00:00:00-05:00
Deep Learning in Geotechnical Engineering: A Critical Assessment of PINNs and Operator Learning
arXiv:2512.24365v1 Announce Type: cross Abstract: Deep learning methods -- physics-informed neural networks (PINNs), deep operator networks (DeepONet), and graph network simulators (GNS) -- are increasingly proposed for geotechnical problems. This paper tests these methods against traditional solvers on canonical problems: wave propagation and beam-foundation interaction. PINNs run 90,000 times slower than finite difference with larger errors. DeepONet requires thousands of training simulations and breaks even only after millions of evaluations. Multi-layer perceptrons fail catastrophically when extrapolating beyond training data -- the common case in geotechnical prediction. GNS shows promise for geometry-agnostic simulation but faces scaling limits and cannot capture path-dependent soil behavior. For inverse problems, automatic differentiation through traditional solvers recovers material parameters with sub-percent accuracy in seconds. We recommend: use automatic differentiation for inverse problems; apply site-based cross-validation to account for spatial autocorrelation; reserve neural networks for problems where traditional solvers are genuinely expensive and predictions remain within the training envelope. When a method is four orders of magnitude slower with less accuracy, it is not a viable replacement for proven solvers.
https://arxiv.org/abs/2512.24365
Academic Papers
svg
14dfe21025f76173f2d2b3738d035873529f8de978f75904659693fead6cbd69
2026-01-01T00:00:00-05:00
Implicit score matching meets denoising score matching: improved rates of convergence and log-density Hessian estimation
arXiv:2512.24378v1 Announce Type: cross Abstract: We study the problem of estimating the score function using both implicit score matching and denoising score matching. Assuming that the data distribution exhibits a low-dimensional structure, we prove that implicit score matching is able not only to adapt to the intrinsic dimension, but also to achieve the same rates of convergence as denoising score matching in terms of the sample size. Furthermore, we demonstrate that both methods allow us to estimate log-density Hessians without the curse of dimensionality by simple differentiation. This justifies convergence of ODE-based samplers for generative diffusion models. Our approach is based on Gagliardo-Nirenberg-type inequalities relating weighted $L^2$-norms of smooth functions and their derivatives.
https://arxiv.org/abs/2512.24378
Academic Papers
svg
82a522cd05e0f343de7b276d90e546c0915a53a1613345ee31fcf4a5ebe9ee4e
2026-01-01T00:00:00-05:00
Finite element analysis of very large bone models based on micro-CT scans
arXiv:2512.24401v1 Announce Type: cross Abstract: High-resolution voxel-based micro-finite element ($\mu$FE) models derived from $\mu$CT imaging enable detailed investigation of bone mechanics but remain computationally challenging at anatomically relevant scales. This study presents a comprehensive $\mu$FE framework for large-scale biomechanical analysis of an intact New Zealand White (NZW) rabbit femur, integrating advanced segmentation, scalable finite element solvers, and experimental validation using predominantly open-source libraries. Bone geometries were segmented from $\mu$CT data using the MIA clustering algorithm and converted into voxel-based $\mu$FE meshes, which were solved using the open-source MFEM library with algorithms designed for large-scale linear elasticity systems. The numerical solutions were verified by comparing with a commercial finite element solver, and by evaluating the performance of full assembly and element-by-element formulations within MFEM. Models containing over $8\times10^{8}$ DOFs were solved using moderate HPC resources, demonstrating the feasibility of anatomically realistic $\mu$FE simulations at this scale. Resolution effects were investigated by comparing models with voxel sizes of 20, 40, and 80 $\mu$m, revealing that 40 $\mu$m preserves boundary displacement and principal strain distributions with minimal bias while significantly reducing computational cost. Sensitivity analyses further showed that segmentation parameters influence the global mechanical response. Finally, $\mu$FE predictions were coupled with Digital Image Correlation measurements on an NZW rabbit femur under compression to calibrate effective bone material properties at the micron scale. The results demonstrate that large-scale, experimentally informed $\mu$FE modeling can be achieved using open-source tools, providing a robust foundation for preclinical assessment of bone mechanics and treatment-related risks.
https://arxiv.org/abs/2512.24401
Academic Papers
svg
340d2b75c6b19327ac604b40309543c168a20eb416c905b961547dddfdfadf2a
2026-01-01T00:00:00-05:00
Virasoro Symmetry in Neural Network Field Theories
arXiv:2512.24420v1 Announce Type: cross Abstract: Neural Network Field Theories (NN-FTs) can realize global conformal symmetries via embedding space architectures. These models describe Generalized Free Fields (GFFs) in the infinite width limit. However, they typically lack a local stress-energy tensor satisfying conformal Ward identities. This presents an obstruction to realizing infinite-dimensional, local conformal symmetry typifying 2d Conformal Field Theories (CFTs). We present the first construction of an NN-FT that encodes the full Virasoro symmetry of a 2d CFT. We formulate a neural free boson theory with a local stress tensor $T(z)$ by properly choosing the architecture and prior distribution of network parameters. We verify the analytical results through numerical simulation; computing the central charge and the scaling dimensions of vertex operators. We then construct an NN realization of a Majorana Fermion and an $\mathcal{N}=(1,1)$ scalar multiplet, which then enables an extension of the formalism to include super-Virasoro symmetry. Finally, we extend the framework by constructing boundary NN-FTs that preserve (super-)conformal symmetry via the method of images.
https://arxiv.org/abs/2512.24420
Academic Papers
svg
14fdb4aa2ec0b2c43ffaaf6cd37cb73fd100b3c82aa6fbdd293be17f5836a166
2026-01-01T00:00:00-05:00
Automated Market Making for Energy Sharing
arXiv:2512.24432v1 Announce Type: cross Abstract: We develop an axiomatic theory for Automated Market Makers (AMMs) in local energy sharing markets and analyze the Markov Perfect Equilibrium of the resulting economy with a Mean-Field Game. In this game, heterogeneous prosumers solve a Bellman equation to optimize energy consumption, storage, and exchanges. Our axioms identify a class of mechanisms with linear, Lipschitz continuous payment functions, where prices decrease with the aggregate supply-to-demand ratio of energy. We prove that implementing batch execution and concentrated liquidity allows standard design conditions from decentralized finance (quasi-concavity, monotonicity, and homotheticity) to construct AMMs that satisfy our axioms. The resulting AMMs are budget-balanced and achieve ex-ante efficiency, contrasting with the strategy-proof, ex-post optimal VCG mechanism. Since the AMM implements a Potential Game, we solve its equilibrium by first computing the social planner's optimum and then decentralizing the allocation. Numerical experiments using data from the Paris administrative region suggest that the prosumer community can achieve gains from trade up to 40% relative to the grid-only benchmark.
https://arxiv.org/abs/2512.24432
Academic Papers
svg
b31ca0718b44947f5a8d7f5bb2a09e164d49e8713fdc163e0e7371089ab0b8fd
2026-01-01T00:00:00-05:00
Quasicrystalline Gibbs states in 4-dimensional lattice-gas models with finite-range interactions
arXiv:2512.24436v1 Announce Type: cross Abstract: We construct a four-dimensional lattice-gas model with finite-range interactions that has non-periodic, ``quasicrystalline'' Gibbs states at low temperatures. Such Gibbs states are probability measures which are small perturbations of non-periodic ground-state configurations corresponding to tilings of the plane with Ammann's aperiodic tiles. Our construction is based on the correspondence between probabilistic cellular automata and Gibbs measures on their space-time trajectories, and a classical result on noise-resilient computing with cellular automata. The cellular automaton is constructed on the basis of Ammann's tiles, which are deterministic in one direction, and has non-periodic space-time trajectories corresponding to each valid tiling. Repetitions along two extra dimensions, together with an error-correction mechanism, ensure stability of the trajectories subjected to noise.
https://arxiv.org/abs/2512.24436
Academic Papers
svg
faa32383d53eb33ac9ae8556cf9eee7b9f0d5153f2ea2897e282a05de464fd1c
2026-01-01T00:00:00-05:00
Towards mechanistic understanding in a data-driven weather model: internal activations reveal interpretable physical features
arXiv:2512.24440v1 Announce Type: cross Abstract: Large data-driven physics models like DeepMind's weather model GraphCast have empirically succeeded in parameterizing time operators for complex dynamical systems with an accuracy reaching or in some cases exceeding that of traditional physics-based solvers. Unfortunately, how these data-driven models perform computations is largely unknown and whether their internal representations are interpretable or physically consistent is an open question. Here, we adapt tools from interpretability research in Large Language Models to analyze intermediate computational layers in GraphCast, leveraging sparse autoencoders to discover interpretable features in the neuron space of the model. We uncover distinct features on a wide range of length and time scales that correspond to tropical cyclones, atmospheric rivers, diurnal and seasonal behavior, large-scale precipitation patterns, specific geographical coding, and sea-ice extent, among others. We further demonstrate how the precise abstraction of these features can be probed via interventions on the prediction steps of the model. As a case study, we sparsely modify a feature corresponding to tropical cyclones in GraphCast and observe interpretable and physically consistent modifications to evolving hurricanes. Such methods offer a window into the black-box behavior of data-driven physics models and are a step towards realizing their potential as trustworthy predictors and scientifically valuable tools for discovery.
https://arxiv.org/abs/2512.24440
Academic Papers
svg
0a623ad00e16c73f3a1bd259f8438b25430b7b58b58b704d619ba63080f70b22
2026-01-01T00:00:00-05:00
The Wigner-Ville Transform as an Information Theoretic Tool in Radio-frequency Signal Analysis
arXiv:2512.24488v1 Announce Type: cross Abstract: This paper presents novel interpretations of the Wigner-Ville transform as an information measurement tool for classical signal processing. The transform's utility in detecting and localizing information-laden signals amidst noisy and cluttered backgrounds, and further in providing a measure of their information volumes, is detailed herein using Tsallis' entropy and information and related functionals. Example use cases in radio frequency communications are given, where Wigner-Ville-based detection measures can be seen to provide a significant sensitivity advantage, greater than 15 dB in some of the contexts shown, over energy-based measures and without extensive training routines. Such an advantage is particularly significant for applications which have limitations on observation resources, including time/space integration pressures and transient and/or feeble signals, where Wigner-Ville-based methods would improve sensing effectiveness by multiple orders of magnitude. The potential for advancement of several such applications is discussed.
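To make the detection idea concrete, here is a sketch of a discrete (pseudo) Wigner-Ville distribution with a Tsallis-entropy readout. The windowing, normalization, and test signals are illustrative, not the paper's; the qualitative point is that a concentrated time-frequency structure (a chirp) typically yields lower Tsallis entropy than spread-out noise.

```python
import numpy as np

# Sketch: discrete (pseudo) Wigner-Ville distribution plus a Tsallis entropy
# readout over the normalized distribution. Illustrative choices throughout.

def wigner_ville(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        kmax = min(n, N - 1 - n)
        k = np.arange(-kmax, kmax + 1)
        acf = x[n + k] * np.conj(x[n - k])      # instantaneous autocorrelation
        W[n] = np.fft.fft(acf, N).real          # lag -> frequency
    return W

def tsallis_entropy(W, q=2.0):
    p = np.abs(W).ravel()
    p = p / p.sum()                             # normalize to a distribution
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

t = np.arange(256)
chirp = np.exp(1j * 2 * np.pi * (0.05 + 0.0005 * t) * t)   # information-laden
noise = np.random.default_rng(0).normal(size=256)          # clutter
print("Tsallis S_2, chirp:", tsallis_entropy(wigner_ville(chirp)))
print("Tsallis S_2, noise:", tsallis_entropy(wigner_ville(noise)))
```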
https://arxiv.org/abs/2512.24488
Academic Papers
svg
4a4482fef318064311078a642e49619e03d9d2951e4120ae3e03aa76b4f6c44b
2026-01-01T00:00:00-05:00
Automated Classification of First-Trimester Fetal Heart Views Using Ultrasound-Specific Self-Supervised Learning
arXiv:2512.24492v1 Announce Type: cross Abstract: Congenital heart disease remains the most common congenital anomaly and a leading cause of neonatal morbidity and mortality. Although first-trimester fetal echocardiography offers an opportunity for earlier detection, automated analysis at this stage is challenging due to small cardiac structures, low signal-to-noise ratio, and substantial inter-operator variability. In this work, we evaluate a self-supervised ultrasound foundation model, USF-MAE, for first-trimester fetal heart view classification. USF-MAE is pretrained using masked autoencoding on more than 370,000 unlabelled ultrasound images spanning over 40 anatomical regions and is subsequently fine-tuned for downstream classification. As a proof of concept, the pretrained Vision Transformer encoder was fine-tuned on an open-source dataset of 6,720 first-trimester fetal echocardiography images to classify five categories: aorta, atrioventricular flows, V sign, X sign, and Other. Model performance was benchmarked against supervised convolutional neural network baselines (ResNet-18 and ResNet-50) and a Vision Transformer (ViT-B/16) model pretrained on natural images (ImageNet-1k). All models were trained and evaluated using identical preprocessing, data splits, and optimization protocols. On an independent test set, USF-MAE achieved the highest performance across all evaluation metrics, with 90.57% accuracy, 91.15% precision, 90.57% recall, and 90.71% F1-score. This represents an improvement of +2.03% in accuracy and +1.98% in F1-score compared with the strongest baseline, ResNet-18. The proposed approach demonstrated robust performance without reliance on aggressive image preprocessing or region-of-interest cropping and showed improved discrimination of non-diagnostic frames.
https://arxiv.org/abs/2512.24492
Academic Papers
svg
87e9ec42387891001904de8047df70b730bcd6588840b87deb74f74e2ad497a8
2026-01-01T00:00:00-05:00
Improving the stability of the covariance-controlled adaptive Langevin thermostat for large-scale Bayesian sampling
arXiv:2512.24515v1 Announce Type: cross Abstract: Stochastic gradient Langevin dynamics and its variants approximate the likelihood of an entire dataset, via random (and typically much smaller) subsets, in the setting of Bayesian sampling. Due to the (often substantial) improvement of the computational efficiency, they have been widely used in large-scale machine learning applications. It has been demonstrated that the so-called covariance-controlled adaptive Langevin (CCAdL) thermostat, which incorporates an additional term involving the covariance matrix of the noisy force, outperforms popular alternative methods. A moving average is used in CCAdL to estimate the covariance matrix of the noisy force, in which case the covariance matrix will converge to a constant matrix in the long-time limit. Moreover, it appears in our numerical experiments that the use of a moving average could reduce the stability of the numerical integrators, thereby limiting the largest usable stepsize. In this article, we propose a modified CCAdL (i.e., mCCAdL) thermostat that uses the scaling part of the scaling and squaring method together with a truncated Taylor series approximation to the exponential to numerically approximate the exact solution to the subsystem involving the additional term proposed in CCAdL. We also propose a symmetric splitting method for mCCAdL, instead of an Euler-type discretisation used in the original CCAdL thermostat. We demonstrate in our numerical experiments that the newly proposed mCCAdL thermostat achieves a substantial improvement in the numerical stability over the original CCAdL thermostat, while significantly outperforming popular alternative stochastic gradient methods in terms of the numerical accuracy for large-scale machine learning applications.
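A one-dimensional toy sketch of the covariance-controlled idea: a Langevin step where a moving-average estimate of the gradient-noise variance is subtracted from the injected-noise variance. For illustration the clean gradient (here simply theta, for the potential theta^2/2) is used to isolate the noise; the paper's thermostat, matrix-valued covariance, and splitting integrator are considerably more sophisticated.

```python
import numpy as np

# Toy 1D sketch of covariance control in stochastic gradient Langevin
# sampling: a moving average tracks the gradient-noise variance, and the
# injected noise is reduced accordingly. Purely illustrative.

rng = np.random.default_rng(1)
theta, h, beta = 2.0, 0.05, 1.0
cov_est, alpha = 0.0, 0.01          # moving-average noise-variance estimate

samples = []
for t in range(50_000):
    g = theta + rng.normal(0, 0.5)                   # noisy gradient of theta^2/2
    cov_est = (1 - alpha) * cov_est + alpha * (g - theta) ** 2
    # Reduce the injected variance by the estimated gradient-noise variance,
    # clipping at zero so the step stays well defined.
    var = max(2 * h / beta - h ** 2 * cov_est, 0.0)
    theta += -h * g + np.sqrt(var) * rng.normal()
    samples.append(theta)

print("stationary variance:", np.var(samples[10_000:]), "(target 1/beta = 1)")
```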
https://arxiv.org/abs/2512.24515
Academic Papers
svg
e139279afd0bf4d4713a1fee8be35c5c036de89a97e6051dece6795923f882b7
2026-01-01T00:00:00-05:00
Power Analysis is Essential: High-Powered Tests Suggest Minimal to No Effect of Rounded Shapes on Click-Through Rates
arXiv:2512.24521v1 Announce Type: cross Abstract: Underpowered studies (below 50% power) suffer from the winner's curse: a statistically significant result must exaggerate the true treatment effect to meet the significance threshold. A study by Dipayan Biswas, Annika Abell, and Roger Chacko published in the Journal of Consumer Research (2023) reported that in an A/B test simply rounding the corners of square buttons increased the online click-through rate by 55% (p-value 0.037), a striking finding with potentially wide-ranging implications for the digital industry that is seeking to enhance consumer engagement. Drawing on our experience with tens of thousands of A/B tests, many involving similar user interface modifications, we found this dramatic claim implausibly large. To evaluate the claim, we conducted three high-powered A/B tests, each involving over two thousand times more users than the original study. All three experiments yielded effect size estimates that were approximately two orders of magnitude smaller than initially reported, with 95% confidence intervals that include zero, that is, not statistically significant at the 0.05 level. Two additional independent replications by Evidoo found similarly small effects. These findings underscore the critical importance of power analysis and experimental design to increase trust and reproducibility of results.
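A back-of-the-envelope power calculation for a two-proportion test makes the point numerically: detecting a large relative lift on a small baseline takes thousands of users per arm, while a realistic small lift takes millions. The baseline click-through rate and lift values below are illustrative, not the original study's.

```python
from statistics import NormalDist

# Standard two-proportion sample-size formula:
# n per arm = (z_{1-a/2} + z_power)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2

def n_per_arm(p_base, rel_lift, alpha=0.05, power=0.8):
    p_var = p_base * (1 + rel_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p_base * (1 - p_base) + p_var * (1 - p_var)
    return (z_a + z_b) ** 2 * var / (p_base - p_var) ** 2

# A 55% relative lift on a 2% baseline CTR: a few thousand users per arm.
print(round(n_per_arm(0.02, 0.55)))
# A realistic ~1% relative lift: millions of users per arm.
print(round(n_per_arm(0.02, 0.01)))
```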
https://arxiv.org/abs/2512.24521
Academic Papers
svg
09654a0e1e474969f0e5e9c653d78d17ed0e71d12e51b678ddd3d3c49690843c
2026-01-01T00:00:00-05:00
Proper colorings of a graph in linear time using a number of colors linear in the maximum degree of the graph
arXiv:2512.24522v1 Announce Type: cross Abstract: A new algorithm for exactly sampling from the set of proper colorings of a graph is presented. This is the first such algorithm that has an expected running time that is guaranteed to be linear in the size of a graph with maximum degree $\Delta$ when the number of colors is greater than $3.637\Delta + 1$.
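For contrast with the linear-time result, here is the naive exact sampler: rejection sampling over uniformly random colorings. It is exact but can take exponential time, which is precisely the gap the paper's algorithm closes once $q > 3.637\Delta + 1$. The example graph and color count are illustrative.

```python
import random

# Naive exact sampler for proper colorings by rejection: draw a uniformly
# random coloring and accept only if no edge is monochromatic. Exact but
# potentially exponential-time, unlike the paper's linear-time algorithm.

def is_proper(coloring, edges):
    return all(coloring[u] != coloring[v] for u, v in edges)

def sample_proper_coloring(n, edges, q, rng=random.Random(0)):
    while True:                                  # retry until proper
        coloring = [rng.randrange(q) for _ in range(n)]
        if is_proper(coloring, edges):
            return coloring

# A 5-cycle has maximum degree Delta = 2, so q = 9 > 3.637*2 + 1 would put
# the paper's sampler in its linear-time regime (the naive sampler ignores q).
cycle = [(i, (i + 1) % 5) for i in range(5)]
print(sample_proper_coloring(5, cycle, q=9))
```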
https://arxiv.org/abs/2512.24522
Academic Papers
svg
9996d61bc3847559afd1a8b00fd971c59afb796d898b178b833086bbfb61738e
2026-01-01T00:00:00-05:00
Generative AI-enhanced Sector-based Investment Portfolio Construction
arXiv:2512.24526v1 Announce Type: cross Abstract: This paper investigates how Large Language Models (LLMs) from leading providers (OpenAI, Google, Anthropic, DeepSeek, and xAI) can be applied to quantitative sector-based portfolio construction. We use LLMs to identify investable universes of stocks within S&P 500 sector indices and evaluate how their selections perform when combined with classical portfolio optimization methods. Each model was prompted to select and weight 20 stocks per sector, and the resulting portfolios were compared with their respective sector indices across two distinct out-of-sample periods: a stable market phase (January-March 2025) and a volatile phase (April-June 2025). Our results reveal a strong temporal dependence in LLM portfolio performance. During stable market conditions, LLM-weighted portfolios frequently outperformed sector indices on both cumulative return and risk-adjusted (Sharpe ratio) measures. However, during the volatile period, many LLM portfolios underperformed, suggesting that current models may struggle to adapt to regime shifts or high-volatility environments underrepresented in their training data. Importantly, when LLM-based stock selection is combined with traditional optimization techniques, portfolio outcomes improve in both performance and consistency. This study contributes one of the first multi-model, cross-provider evaluations of generative AI algorithms in investment management. It highlights that while LLMs can effectively complement quantitative finance by enhancing stock selection and interpretability, their reliability remains market-dependent. The findings underscore the potential of hybrid AI-quantitative frameworks, integrating LLM reasoning with established optimization techniques, to produce more robust and adaptive investment strategies.
https://arxiv.org/abs/2512.24526
Academic Papers
svg
1ad442b2c30c744b54614eaf9cf6aa2eb651f7f8d08b549ce952b507b582241b
2026-01-01T00:00:00-05:00
Probabilistic Computers for Neural Quantum States
arXiv:2512.24558v1 Announce Type: cross Abstract: Neural quantum states efficiently represent many-body wavefunctions with neural networks, but the cost of Monte Carlo sampling limits their scaling to large system sizes. Here we address this challenge by combining sparse Boltzmann machine architectures with probabilistic computing hardware. We implement a probabilistic computer on field programmable gate arrays (FPGAs) and use it as a fast sampler for energy-based neural quantum states. For the two-dimensional transverse-field Ising model at criticality, we obtain accurate ground-state energies for lattices up to 80 $\times$ 80 (6400 spins) using a custom multi-FPGA cluster. Furthermore, we introduce a dual-sampling algorithm to train deep Boltzmann machines, replacing intractable marginalization with conditional sampling over auxiliary layers. This enables the training of sparse deep models and improves parameter efficiency relative to shallow networks. Using this algorithm, we train deep Boltzmann machines for a system with 35 $\times$ 35 (1225 spins). Together, these results demonstrate that probabilistic hardware can overcome the sampling bottleneck in variational simulation of quantum many-body systems, opening a path to larger system sizes and deeper variational architectures.
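A sketch of the sampling primitive a probabilistic computer accelerates: chromatic (checkerboard) Gibbs sweeps over a classical 2D Ising-type energy-based model, where each "p-bit" flips with its exact conditional probability. This is only the classical sampling kernel, not the paper's quantum ground-state method; lattice size, coupling, and temperature are illustrative.

```python
import numpy as np

# Checkerboard Gibbs sweeps for a periodic 2D Ising model. Sites on one
# sublattice are conditionally independent given the other, so each
# half-sweep can be done in parallel, as p-bit hardware does natively.

rng = np.random.default_rng(0)
L, beta, J = 32, 0.3, 1.0
s = rng.choice([-1, 1], size=(L, L))
mask = (np.indices((L, L)).sum(0) % 2).astype(bool)   # the two sublattices

for sweep in range(500):
    for sub in (mask, ~mask):
        # Periodic nearest-neighbor local field.
        h = J * (np.roll(s, 1, 0) + np.roll(s, -1, 0)
                 + np.roll(s, 1, 1) + np.roll(s, -1, 1))
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # exact conditional
        draw = rng.random((L, L)) < p_up
        s[sub] = np.where(draw, 1, -1)[sub]

print("mean magnetization:", s.mean())
```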
https://arxiv.org/abs/2512.24558
Academic Papers
svg
1b51a6cfa1dfc74f4c6fc778ce83183eda4e3aae3083d5cec08112bad5642132
2026-01-01T00:00:00-05:00
Robust Bayesian Dynamic Programming for On-policy Risk-sensitive Reinforcement Learning
arXiv:2512.24580v1 Announce Type: cross Abstract: We propose a novel framework for risk-sensitive reinforcement learning (RSRL) that incorporates robustness against transition uncertainty. We define two distinct yet coupled risk measures: an inner risk measure addressing state and cost randomness and an outer risk measure capturing transition dynamics uncertainty. Our framework unifies and generalizes most existing RL frameworks by permitting general coherent risk measures for both inner and outer risk measures. Within this framework, we construct a risk-sensitive robust Markov decision process (RSRMDP), derive its Bellman equation, and provide error analysis under a given posterior distribution. We further develop a Bayesian Dynamic Programming (Bayesian DP) algorithm that alternates between posterior updates and value iteration. The approach employs an estimator for the risk-based Bellman operator that combines Monte Carlo sampling with convex optimization, for which we prove strong consistency guarantees. Furthermore, we demonstrate that the algorithm converges to a near-optimal policy in the training environment and analyze both the sample complexity and the computational complexity under the Dirichlet posterior and CVaR. Finally, we validate our approach through two numerical experiments. The results exhibit excellent convergence properties while providing intuitive demonstrations of its advantages in both risk-sensitivity and robustness. Empirically, we further demonstrate the advantages of the proposed algorithm through an application on option hedging.
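To ground the "Monte Carlo sampling with convex optimization" estimator, here is a minimal sketch for the CVaR case via the Rockafellar-Uryasev representation $\mathrm{CVaR}_a(X) = \min_t \{ t + E[(X-t)_+]/(1-a) \}$, whose one-dimensional convex problem is solved in closed form by the $a$-quantile. The cost distribution is a stand-in.

```python
import numpy as np

# Sample-based CVaR via the Rockafellar-Uryasev formula: the minimizer t*
# is the a-quantile, and the objective at t* averages the tail beyond it.

def cvar(samples, a=0.95):
    x = np.sort(np.asarray(samples))
    t = np.quantile(x, a)                       # closed-form minimizer
    return t + np.mean(np.maximum(x - t, 0.0)) / (1.0 - a)

costs = np.random.default_rng(0).normal(0, 1, 100_000)  # stand-in cost dist.
print("CVaR_0.95 ~", cvar(costs), "(standard Gaussian exact ~ 2.063)")
```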
https://arxiv.org/abs/2512.24580
Academic Papers
svg
7159ac4fb8f09d9911540d568fe6850b917ace59b848567d5e116f756b41ed06
2026-01-01T00:00:00-05:00
On Circular Threshold Words and Other Stronger Versions of Dejean's conjecture
arXiv:2512.24581v1 Announce Type: cross Abstract: Let the root of a word $w$ be the smallest prefix $v$ of $w$ such that $w$ is a prefix of $vvv...$; $per(w)$ denotes the length of the root of $w$. For any $n\ge5$, an $n$-ary threshold word is a word $w$ such that every factor (subword) $v$ of $w$ satisfies $\frac{|v|}{per(v)}\le\frac{n}{n-1}$. Dejean's conjecture (completely proven in 2009) states that for every $n\ge5$ there exist infinitely many $n$-ary threshold words (TWs). This manuscript is based on the author's student theses (a 2011 bachelor's thesis and a 2013 master's thesis) and presents an edited version (in Russian) of these works with some improvements. The 2011 work proposed new methods for proving Dejean's conjecture in some odd cases $n\ge5$, using computer verification that runs in time polynomial in $n$. Moreover, the constructed threshold words are cyclic/ring TWs (every cyclic shift is again a TW). The 2013 work improved the proof method of 2011 by reducing the verification conditions, and a solution for some even cases $n\ge6$ is also proposed. The 2013 work also proposed a method to construct stronger TWs, using a TW tree with regular exponential growth; namely, TWs in which all long factors have an exponent close to 1.
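The basic verification primitive behind these results can be stated in a few lines: compute $per(v)$ for every factor $v$ and check the Dejean bound $|v|/per(v) \le n/(n-1)$. The quadratic-time checker below is purely illustrative; the works above concern doing such verification in time polynomial in $n$ over trees of candidate words.

```python
# Brute-force checker for the threshold-word property, for small examples only.

def per(w):
    """Smallest p such that w is a prefix of (w[:p]) repeated."""
    for p in range(1, len(w) + 1):
        if all(w[i] == w[i % p] for i in range(len(w))):
            return p
    return len(w)

def is_threshold_word(w, n):
    bound = n / (n - 1)
    for i in range(len(w)):
        for j in range(i + 2, len(w) + 1):      # factors of length >= 2
            v = w[i:j]
            if len(v) / per(v) > bound:
                return False
    return True

print(is_threshold_word("01234", 5))   # True: all factors have exponent 1
print(is_threshold_word("0120", 5))    # False: "0120" has exponent 4/3 > 5/4
```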
https://arxiv.org/abs/2512.24581
Academic Papers
svg
a1a1cf45d308f8125674b6d9f0916b05c43202f77d1879036e0450731ef78650
2026-01-01T00:00:00-05:00
MultiRisk: Multiple Risk Control via Iterative Score Thresholding
arXiv:2512.24587v1 Announce Type: cross Abstract: As generative AI systems are increasingly deployed in real-world applications, regulating multiple dimensions of model behavior has become essential. We focus on test-time filtering: a lightweight mechanism for behavior control that compares performance scores to estimated thresholds, and modifies outputs when these bounds are violated. We formalize the problem of enforcing multiple risk constraints with user-defined priorities, and introduce two efficient dynamic programming algorithms that leverage this sequential structure. The first, MULTIRISK-BASE, provides a direct finite-sample procedure for selecting thresholds, while the second, MULTIRISK, leverages data exchangeability to guarantee simultaneous control of the risks. Under mild assumptions, we show that MULTIRISK achieves nearly tight control of all constraint risks. The analysis requires an intricate iterative argument, upper bounding the risks by introducing several forms of intermediate symmetrized risk functions, and carefully lower bounding the risks by recursively counting jumps in symmetrized risk functions between appropriate risk levels. We evaluate our framework on a three-constraint Large Language Model alignment task using the PKU-SafeRLHF dataset, where the goal is to maximize helpfulness subject to multiple safety constraints, and where scores are generated by a Large Language Model judge and a perplexity filter. Our experimental results show that our algorithm can control each individual risk at close to the target level.
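A toy sketch of iterative score thresholding for several prioritized risks: for each constraint in turn, pick the most permissive threshold whose empirical risk on calibration data stays below its target. This only loosely mirrors the structure of the base finite-sample procedure; the actual MULTIRISK algorithm adds exchangeability arguments to certify simultaneous control, which this sketch does not provide. All names and data are illustrative.

```python
import numpy as np

# Greedy per-constraint threshold selection on calibration data. losses[i, j]
# is the 0/1 violation of constraint j incurred if example i passes the filter.

def pick_thresholds(scores, losses, targets):
    n, k = scores.shape
    passing = np.ones(n, dtype=bool)
    thresholds = []
    for j in range(k):                        # user-defined priority order
        best = -np.inf                        # -inf means "filter everything"
        for t in np.unique(scores[:, j]):
            keep = passing & (scores[:, j] <= t)
            risk = losses[keep, j].mean() if keep.any() else 0.0
            if risk <= targets[j] and t > best:
                best = t                      # most permissive feasible t
        thresholds.append(best)
        passing &= scores[:, j] <= best
    return thresholds

rng = np.random.default_rng(0)
scores = rng.random((500, 2))
losses = (scores > 0.8).astype(float)         # violations sit at high scores
print(pick_thresholds(scores, losses, targets=[0.05, 0.05]))
```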
https://arxiv.org/abs/2512.24587
Academic Papers
svg
cb237294fde856e2be7143bce7579a76fc1566c8acf10849007b0e5e5139af92
2026-01-01T00:00:00-05:00
A Uniform Pilot and Data Payload Optimization Framework for OTFS-Based ISAC
arXiv:2512.24624v1 Announce Type: cross Abstract: The orthogonal time frequency space (OTFS) signal is considered a promising solution for high-mobility wireless environments. It manages Doppler effects by utilizing delay-Doppler (DD) domain processing. However, the relatively long OTFS frame duration could introduce considerable sensing or communication latency when radar and communication are performed separately. By operating in a dual-functional radar and communication (DFRC) mode, the OTFS system performs sensing and data transmission simultaneously, thereby reducing the resulting latency. Nevertheless, the optimal OTFS DFRC signal strategy remains insufficiently explored. This paper investigates the optimal signal design for OTFS DFRC systems, focusing on pilot symbol design and data symbol power allocation. Specifically, we derive a channel capacity lower bound metric for communication that considers channel estimation errors in OTFS. For sensing, we derive an integrated sidelobe level (ISL), accounting for the randomness of the data symbols alongside the deterministic pilot symbols. Leveraging the above metrics, we formulate an optimization problem that balances radar and communication performance, and then solve it using an alternating optimization framework. We validate the proposed signal through numerical analysis and Monte Carlo simulations. Our analysis shows that OTFS DFRC enforces a deterministic pilot signal that is characterized by a concentrated peak in the DD domain, which furnishes a common structure in the DD domain facilitating sensing and channel estimation, with data multiplexed in other DD grids, thereby unifying sensing and communication within a single OTFS signal. Compared with conventional OTFS signals, the proposed OTFS DFRC signal expands the achievable sensing-communication performance region, delivering at least a 9.45 dB ISL suppression for sensing and a 4.82 dB SINR gain for communication.
https://arxiv.org/abs/2512.24624
Academic Papers
svg
66db174841f9f1c34ddaf3fa49b3b058c2e064f6cc41ea8896dd05133517d0f6
2026-01-01T00:00:00-05:00
Soliton profiles: Classical Numerical Schemes vs. Neural Network - Based Solvers
arXiv:2512.24634v1 Announce Type: cross Abstract: We present a comparative study of classical numerical solvers, such as Petviashvili's method or finite difference with Newton iterations, and neural network-based methods for computing ground states or profiles of solitary-wave solutions to the one-dimensional dispersive PDEs that include the nonlinear Schr\"odinger, the nonlinear Klein-Gordon and the generalized KdV equations. We confirm that classical approaches retain high-order accuracy and strong computational efficiency for single-instance problems in the one-dimensional setting. Physics-informed neural networks (PINNs) are also able to reproduce qualitative solutions but are generally less accurate and less efficient in low dimensions than classical solvers due to expensive training and slow convergence. We also investigate the operator-learning methods, which, although computationally intensive during training, can be reused across many parameter instances, providing rapid inference after pretraining, making them attractive for applications involving repeated simulations or real-time predictions. For single-instance computations, however, the accuracy of operator-learning methods remains lower than that of classical methods or PINNs, in general.
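Among the classical solvers mentioned, Petviashvili's method is simple enough to show in full. The sketch below computes the 1D focusing NLS ground-state profile, solving $\phi'' - \phi + \phi^3 = 0$ spectrally; the exact solution $\phi(x) = \sqrt{2}\,\mathrm{sech}(x)$ gives a direct error check. Grid sizes and iteration counts are illustrative.

```python
import numpy as np

# Petviashvili iteration for the 1D focusing NLS soliton profile:
# u_{n+1} = ifft( M^gamma * fft(u_n^3) / L_hat ), with stabilizing factor
# M = <L u, u> / <u^3, u> and gamma = p/(p-1) = 3/2 for a cubic nonlinearity.

N, Lx = 1024, 40.0
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)
L_hat = k ** 2 + 1.0                       # symbol of (-d^2/dx^2 + 1)

u = np.exp(-x ** 2)                        # rough initial guess
gamma = 1.5
for it in range(200):
    u_hat, Nu_hat = np.fft.fft(u), np.fft.fft(u ** 3)
    M = np.sum(L_hat * np.abs(u_hat) ** 2) / np.sum(np.conj(u_hat) * Nu_hat).real
    u = np.fft.ifft(M ** gamma * Nu_hat / L_hat).real

exact = np.sqrt(2) / np.cosh(x)
print("max error vs sqrt(2)*sech(x):", np.abs(u - exact).max())
```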
https://arxiv.org/abs/2512.24634
Academic Papers
svg
2cc211f273f51751bbcfd14d227137e6a0789af73c84602e17315ceecf25030c
2026-01-01T00:00:00-05:00
A unified spatiotemporal formulation with physics-preserving structure for time-dependent convection-diffusion problems
arXiv:2512.24650v1 Announce Type: cross Abstract: We propose a unified four-dimensional (4D) spatiotemporal formulation for time-dependent convection-diffusion problems that preserves underlying physical structures. By treating time as an additional space-like coordinate, the evolution problem is reformulated as a stationary convection-diffusion equation on a 4D space-time domain. Using exterior calculus, we extend this framework to the full family of convection-diffusion problems posed on $H(\textbf{grad})$, $H(\textbf{curl})$, and $H(\text{div})$. The resulting formulation is based on a 4D Hodge-Laplacian operator with a spatiotemporal diffusion tensor and convection field, augmented by a small temporal perturbation to ensure nondegeneracy. This formulation naturally incorporates fundamental physical constraints, including divergence-free and curl-free conditions. We further introduce an exponentially-fitted 4D spatiotemporal flux operator that symmetrizes the convection-diffusion operator and enables a well-posed variational formulation. Finally, we prove that the temporally-perturbed formulation converges to the original time-dependent convection-diffusion model as the perturbation parameter tends to zero.
https://arxiv.org/abs/2512.24650
Academic Papers
svg
6d20ad52a18bb836c3d92b9878af9a68a5781883778db10964e99201361ea589
2026-01-01T00:00:00-05:00
An Adaptive, Disentangled Representation for Multidimensional MRI Reconstruction
arXiv:2512.24674v1 Announce Type: cross Abstract: We present a new approach for representing and reconstructing multidimensional magnetic resonance imaging (MRI) data. Our method builds on a novel, learned feature-based image representation that disentangles different types of features, such as geometry and contrast, into distinct low-dimensional latent spaces, enabling better exploitation of feature correlations in multidimensional images and incorporation of pre-learned priors specific to different feature types for reconstruction. More specifically, the disentanglement was achieved via an encoder-decoder network and image transfer training using large public data, enhanced by a style-based decoder design. A latent diffusion model was introduced to impose stronger constraints on distinct feature spaces. New reconstruction formulations and algorithms were developed to integrate the learned representation with a zero-shot self-supervised learning adaptation and subspace modeling. The proposed method has been evaluated on accelerated T1 and T2 parameter mapping, achieving improved performance over state-of-the-art reconstruction methods, without task-specific supervised training or fine-tuning. This work offers a new strategy for learning-based multidimensional image reconstruction where only limited data are available for problem-specific or task-specific training.
https://arxiv.org/abs/2512.24674
Academic Papers
svg
f3852f630d227420e826d07a1346ff1973c781b2eda533ecd28ec3b349db3e52
2026-01-01T00:00:00-05:00
A New Decomposition Paradigm for Graph-structured Nonlinear Programs via Message Passing
arXiv:2512.24676v1 Announce Type: cross Abstract: We study finite-sum nonlinear programs whose decision variables interact locally according to a graph or hypergraph. We propose MP-Jacobi (Message Passing-Jacobi), a graph-compliant decentralized framework that couples min-sum message passing with Jacobi block updates. The (hyper)graph is partitioned into tree clusters. At each iteration, agents update in parallel by solving a cluster subproblem whose objective decomposes into (i) an intra-cluster term evaluated by a single min-sum sweep on the cluster tree (cost-to-go messages) and (ii) inter-cluster couplings handled via a Jacobi correction using neighbors' latest iterates. This design uses only single-hop communication and yields a convergent message-passing method on loopy graphs. For strongly convex objectives we establish global linear convergence and explicit rates that quantify how curvature, coupling strength, and the chosen partition affect scalability and provide guidance for clustering. To mitigate the computation and communication cost of exact message updates, we develop graph-compliant surrogates that preserve convergence while reducing per-iteration complexity. We further extend MP-Jacobi to hypergraphs; in heavily overlapping regimes, a surrogate-based hyperedge-splitting scheme restores finite-time intra-cluster message updates and maintains convergence. Experiments validate the theory and show consistent improvements over decentralized gradient baselines.
https://arxiv.org/abs/2512.24676
Academic Papers
svg
4831194f26e2afbf58d4f7f1651312b9329113c0c35f23762d301194a714ced4
2026-01-01T00:00:00-05:00
Quantum Visual Word Sense Disambiguation: Unraveling Ambiguities Through Quantum Inference Model
arXiv:2512.24687v1 Announce Type: cross Abstract: Visual word sense disambiguation focuses on polysemous words, where candidate images can be easily confused. Traditional methods use classical probability to calculate the likelihood of an image matching each gloss of the target word, summing these to form a posterior probability. However, due to the challenge of semantic uncertainty, glosses from different sources inevitably carry semantic biases, which can lead to biased disambiguation results. Inspired by quantum superposition in modeling uncertainty, this paper proposes a Quantum Inference Model for Unsupervised Visual Word Sense Disambiguation (Q-VWSD). It encodes multiple glosses of the target word into a superposition state to mitigate semantic biases. Then, the quantum circuit is executed, and the results are observed. By formalizing our method, we find that Q-VWSD is a quantum generalization of the method based on classical probability. Building on this, we further designed a heuristic version of Q-VWSD that can run more efficiently on classical computers. The experiments demonstrate that our method outperforms state-of-the-art classical methods, particularly by effectively leveraging non-specialized glosses from large language models, which further enhances performance. Our approach showcases the potential of quantum machine learning in practical applications and provides a case for leveraging quantum modeling advantages on classical computers while quantum hardware remains immature.
https://arxiv.org/abs/2512.24687
Academic Papers
svg
4670a28625cfbc7ed9ee0a504837804cf76bb1728e1006ed6c57882cc6a2f769
2026-01-01T00:00:00-05:00
Fairness-Aware Insurance Pricing: A Multi-Objective Optimization Approach
arXiv:2512.24747v1 Announce Type: cross Abstract: Machine learning improves predictive accuracy in insurance pricing but exacerbates trade-offs between competing fairness criteria across different discrimination measures, challenging regulators and insurers to reconcile profitability with equitable outcomes. While existing fairness-aware models offer partial solutions under GLM and XGBoost estimation methods, they remain constrained by single-objective optimization, failing to holistically navigate a conflicting landscape of accuracy, group fairness, individual fairness, and counterfactual fairness. To address this, we propose a novel multi-objective optimization framework that jointly optimizes all four criteria via the Non-dominated Sorting Genetic Algorithm II (NSGA-II), generating a diverse Pareto front of trade-off solutions. We use a specific selection mechanism to select a premium from this front. Our results show that XGBoost outperforms GLM in accuracy but amplifies fairness disparities; the Orthogonal model excels in group fairness, while Synthetic Control leads in individual and counterfactual fairness. Our method consistently achieves a balanced compromise, outperforming single-model approaches.
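A minimal sketch of the non-dominated sorting step at the heart of NSGA-II, applied to toy objective vectors such as (predictive loss, group unfairness, individual unfairness, counterfactual unfairness), all minimized. Full NSGA-II adds crowding distance, tournament selection, crossover, and mutation; this only shows how a Pareto front of trade-offs is identified. Data and names are illustrative.

```python
import numpy as np

# Extract the Pareto front: a row is kept unless some other row is at least
# as good in every objective and strictly better in at least one.

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives minimized)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominators.any()
    return np.flatnonzero(keep)

rng = np.random.default_rng(0)
F = rng.random((200, 4))                  # 200 candidate pricing models
front = pareto_front(F)
print(len(front), "non-dominated trade-off solutions out of", len(F))
```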
https://arxiv.org/abs/2512.24747
Academic Papers
svg
4a4fc5b3aafeaa86c9b7069aec63b55aaa0c2dce1308dc9af1c4c1345d3325a1
2026-01-01T00:00:00-05:00
AstroReview: An LLM-driven Multi-Agent Framework for Telescope Proposal Peer Review and Refinement
arXiv:2512.24754v1 Announce Type: cross Abstract: Competitive access to modern observatories has intensified as proposal volumes outpace available telescope time, making timely, consistent, and transparent peer review a critical bottleneck for the advancement of astronomy. Automating parts of this process is therefore both scientifically significant and operationally necessary to ensure fair allocation and reproducible decisions at scale. We present AstroReview, an open-source, agent-based framework that automates proposal review in three stages: (i) novelty and scientific merit, (ii) feasibility and expected yield, and (iii) meta-review and reliability verification. Task isolation and explicit reasoning traces curb hallucinations and improve transparency. Without any domain-specific fine-tuning, AstroReview, used in our experiments only for the last stage, correctly identifies genuinely accepted proposals with an accuracy of 87%. The AstroReview in Action module replicates the review and refinement loop; with its integrated Proposal Authoring Agent, the acceptance rate of revised drafts increases by 66% after two iterations, showing that iterative feedback combined with automated meta-review and reliability verification delivers measurable quality gains. Together, these results point to a practical path toward scalable, auditable, and higher throughput proposal review for resource-limited facilities.
https://arxiv.org/abs/2512.24754
Academic Papers
svg
6f778d0f4cbcfa779763cf6f5671a0b935c5bc791062ad99672219cc806c94d3
2026-01-01T00:00:00-05:00
Sparse Offline Reinforcement Learning with Corruption Robustness
arXiv:2512.24768v1 Announce Type: cross Abstract: We investigate robustness to strong data corruption in offline sparse reinforcement learning (RL). In our setting, an adversary may arbitrarily perturb a fraction of the collected trajectories from a high-dimensional but sparse Markov decision process, and our goal is to estimate a near optimal policy. The main challenge is that, in the high-dimensional regime where the number of samples $N$ is smaller than the feature dimension $d$, exploiting sparsity is essential for obtaining non-vacuous guarantees but has not been systematically studied in offline RL. We analyse the problem under uniform coverage and sparse single-concentrability assumptions. While Least Square Value Iteration (LSVI), a standard approach for robust offline RL, performs well under uniform coverage, we show that integrating sparsity into LSVI is unnatural, and its analysis may break down due to overly pessimistic bonuses. To overcome this, we propose actor-critic methods with sparse robust estimator oracles, which avoid the use of pointwise pessimistic bonuses and provide the first non-vacuous guarantees for sparse offline RL under single-policy concentrability coverage. Moreover, we extend our results to the contaminated setting and show that our algorithm remains robust under strong contamination. Our results provide the first non-vacuous guarantees in high-dimensional sparse MDPs with single-policy concentrability coverage and corruption, showing that learning a near-optimal policy remains possible in regimes where traditional robust offline RL techniques may fail.
https://arxiv.org/abs/2512.24768
Academic Papers
svg
c9fbdcfcea8c0e10c76377a356a6123fc8ad351067067e50a0ef8012c0121a34
2026-01-01T00:00:00-05:00
Structured Production Systems: Viability
arXiv:2512.24777v1 Announce Type: cross Abstract: This paper introduces a novel framework for analysing equilibrium in structured production systems incorporating a static social division of labour by distinguishing between consumption goods traded in competitive markets and intermediate goods exchanged through bilateral relationships. We develop the concept of viability -- the requirement that all producers earn positive incomes -- as a foundational equilibrium prerequisite. Our main theoretical contribution establishes that acyclic production systems -- those without circular conversion processes among goods -- are always viable, a condition that implies coherence. We characterise completely viable systems through input restrictions demonstrating that prohibiting consumption goods as inputs for other consumption goods is necessary for ensuring viable prices exist for all consumption good price vectors. The analysis reveals fundamental relationships between production system architectural design and economic sustainability. The introduced framework bridges Leontief-Sraffa production theory with modern network economics while capturing institutional realities of contemporary production systems. This also contributes to the literature on the existence of a positive output price system and the Hawkins-Simon condition.
https://arxiv.org/abs/2512.24777
Academic Papers
svg
e21535866ed8d2711c966d8bb7e7668f4e36aaff0e6ff87b4c41d9a25e39cde1
2026-01-01T00:00:00-05:00
Limits of quantum generative models with classical sampling hardness
arXiv:2512.24801v1 Announce Type: cross Abstract: Sampling tasks have been successful in establishing quantum advantages both in theory and experiments. This has fueled the use of quantum computers for generative modeling to create samples following the probability distribution underlying a given dataset. In particular, the potential to build generative models on classically hard distributions would immediately preclude classical simulability, due to theoretical separations. In this work, we study quantum generative models from the perspective of output distributions, showing that models that anticoncentrate are not trainable on average, including those exhibiting quantum advantage. In contrast, models outputting data from sparse distributions can be trained. We consider special cases to enhance trainability, and observe that this opens the path for classical algorithms for surrogate sampling. This observed trade-off is linked to verification of quantum processes. We conclude that quantum advantage can still be found in generative models, although its source must be distinct from anticoncentration.
https://arxiv.org/abs/2512.24801
Academic Papers
svg
28704f83a09d5a8bf1109c34456fd342fe18f7e8a51c3aa89d3358714e4f94ea
2026-01-01T00:00:00-05:00
Learning Temporally Consistent Turbulence Between Sparse Snapshots via Diffusion Models
arXiv:2512.24813v1 Announce Type: cross Abstract: We investigate the statistical accuracy of temporally interpolated spatiotemporal flow sequences between sparse, decorrelated snapshots of turbulent flow fields using conditional Denoising Diffusion Probabilistic Models (DDPMs). The developed method is presented as a proof-of-concept generative surrogate for reconstructing coherent turbulent dynamics between sparse snapshots, demonstrated on a 2D Kolmogorov Flow, and a 3D Kelvin-Helmholtz Instability (KHI). We analyse the generated flow sequences through the lens of statistical turbulence, examining the time-averaged turbulent kinetic energy spectra over generated sequences, and temporal decay of turbulent structures. For the non-stationary Kelvin-Helmholtz Instability, we assess the ability of the proposed method to capture evolving flow statistics across the most strongly time-varying flow regime. We additionally examine instantaneous fields and physically motivated metrics at key stages of the KHI flow evolution.
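One of the statistical checks described above, the time-averaged turbulent kinetic energy spectrum, is easy to sketch: radially bin the FFT of a velocity field into an isotropic spectrum $E(k)$. Applied to generated in-between frames, a well-trained model should roughly reproduce the reference simulation's spectrum. The random field below is only a stand-in for real velocity data.

```python
import numpy as np

# Isotropic kinetic energy spectrum E(k) of a 2D periodic velocity field,
# via 2D FFTs binned into integer-wavenumber shells.

def energy_spectrum(u, v):
    n = u.shape[0]                           # assume a square, periodic field
    uh, vh = np.fft.fft2(u) / n**2, np.fft.fft2(v) / n**2
    e2d = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2)
    kx = np.fft.fftfreq(n, d=1.0 / n)
    kmag = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    kbins = np.arange(0.5, n // 2)           # shell centers 0.5, 1.5, ...
    E = [e2d[(kmag >= k - 0.5) & (kmag < k + 0.5)].sum() for k in kbins]
    return kbins, np.array(E)

rng = np.random.default_rng(0)
field = rng.normal(size=(128, 128))          # stand-in for a velocity component
k, E = energy_spectrum(field, field)
print("log-log spectral slope:", np.polyfit(np.log(k[1:]), np.log(E[1:]), 1)[0])
```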
https://arxiv.org/abs/2512.24813
Academic Papers
svg
942225e3d71003145864dd79e67ba8fec776a31d4f162c1d09daa2d0b8701ce1
2026-01-01T00:00:00-05:00
Advances in Agentic AI: Back to the Future
arXiv:2512.24856v1 Announce Type: cross Abstract: In light of the recent convergence between Agentic AI and our field of Algorithmization, this paper seeks to restore conceptual clarity and provide a structured analytical framework for an increasingly fragmented discourse. First, (a) it examines the contemporary landscape and proposes precise definitions for the key notions involved, ranging from intelligence to Agentic AI. Second, (b) it reviews our prior body of work to contextualize the evolution of methodologies and technological advances developed over the past decade, highlighting their interdependencies and cumulative trajectory. Third, (c) by distinguishing Machine and Learning efforts within the field of Machine Learning, (d) it introduces the first Machine in Machine Learning (M1) as the underlying platform enabling today's LLM-based Agentic AI, conceptualized as an extension of B2C information-retrieval user experiences now being repurposed for B2B transformation. Building on this distinction, (e) the white paper develops the notion of the second Machine in Machine Learning (M2) as the architectural prerequisite for holistic, production-grade B2B transformation, characterizing it as Strategies-based Agentic AI and grounding its definition in the structural barriers-to-entry that such systems must overcome to be operationally viable. Further, (f) it offers conceptual and technical insight into what appears to be the first fully realized implementation of an M2. Finally, drawing on the demonstrated accuracy of the two previous decades of professional and academic experience in developing the foundational architectures of Algorithmization, (g) it outlines a forward-looking research and transformation agenda for the coming two decades.
https://arxiv.org/abs/2512.24856
Academic Papers
svg
5bd244b5bd9d0c17de319dd55182cb8cecafc242731e10867890daf477a02c85
2026-01-01T00:00:00-05:00
Approximate Computation via Le Cam Simulability
arXiv:2512.24860v1 Announce Type: cross Abstract: We propose a decision-theoretic framework for computational complexity, complementary to classical theory: moving from syntactic exactness (Turing / Shannon) to semantic simulability (Le Cam). While classical theory classifies problems by the cost of exact solution, modern computation often seeks only decision-valid approximations. We introduce a framework where "computation" is viewed as the efficient simulation of a target statistical experiment within a bounded risk distortion (Le Cam deficiency). We formally define computational deficiency ($\delta_{\text{poly}}$) and use it to construct the complexity class LeCam-P (Decision-Robust Polynomial Time), characterizing problems that may be syntactically hard but semantically easy to approximate. We show that classical Karp reductions can be viewed as zero-deficiency simulations, and that approximate reductions correspond to bounded deficiency. Furthermore, we establish the No-Free-Transfer Inequality, showing that strictly invariant representations inevitably destroy decision-relevant information. This framework offers a statistical perspective on approximation theory, bridging the gap between algorithmic complexity and decision theory.
https://arxiv.org/abs/2512.24860
Academic Papers
svg
0da5532f4b6dd3f02edc68227e0ce95ca09121bfa990fd3e9142e9e94dc12603
2026-01-01T00:00:00-05:00
On Prime Matrix Product Factorizations
arXiv:2512.24864v1 Announce Type: cross Abstract: A graph $G$ factors into graphs $H$ and $K$ via a matrix product if $A = BC$, where $A$, $B$, and $C$ are the adjacency matrices of $G$, $H$, and $K$, respectively. The graph $G$ is prime if, in every such factorization, one of the factors is a perfect matching, that is, it corresponds to a permutation matrix. We characterize all prime graphs; then, using this result, we classify all factorable forests, answering a question of Akbari et al. [\emph{Linear Algebra and its Applications} (2025)]. We prove that every torus is factorable, and we characterize all possible factorizations of grids, addressing two questions posed by Maghsoudi et al. [\emph{Journal of Algebraic Combinatorics} (2025)].
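The central definition is small enough to verify numerically: $G$ factors into $H$ and $K$ when the adjacency matrices satisfy $A = BC$, and the factorization is trivial when a factor is a permutation matrix (which the abstract identifies with a perfect matching). The example below, with $A = P(P^T A)$ for a permutation matrix $P$, is illustrative.

```python
import numpy as np

# Check a matrix-product factorization A = B @ C and test whether a factor
# is a permutation matrix (0/1 entries, exactly one 1 per row and column).

def is_permutation(M):
    M = np.asarray(M)
    return (np.isin(M, (0, 1)).all()
            and (M.sum(axis=0) == 1).all()
            and (M.sum(axis=1) == 1).all())

def factors_via(A, B, C):
    return np.array_equal(np.asarray(B) @ np.asarray(C), np.asarray(A))

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])                 # adjacency matrix of the triangle K3
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])                 # a permutation matrix
print(factors_via(A, P, P.T @ A), is_permutation(P))   # True True
```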
https://arxiv.org/abs/2512.24864
Academic Papers
svg