Dataset schema (column, type, observed value lengths):

  id          string, 64 characters
  published   string, 19-25 characters
  title       string, 7-262 characters
  description string, 6-54.4k characters
  link        string, 31-227 characters
  category    string, 6 classes
  image       string, 3-247 characters
5a0910f768dfb9580e37f7b59447bb3953e1747a7d687dfaea1d138ac380463b
2026-01-07T00:00:00-05:00
Accurate Table Question Answering with Accessible LLMs
arXiv:2601.03137v1 Announce Type: new Abstract: Given a table T in a database and a question Q in natural language, the table question answering (TQA) task aims to return an accurate answer to Q based on the content of T. Recent state-of-the-art solutions leverage large language models (LLMs) to obtain high-quality answers. However, most rely on proprietary, large-scale LLMs with costly API access, posing a significant financial barrier. This paper instead focuses on TQA with smaller, open-weight LLMs that can run on a desktop or laptop. This setting is challenging, as such LLMs typically have weaker capabilities than large proprietary models, leading to substantial performance degradation with existing methods. We observe that a key reason for this degradation is that prior approaches often require the LLM to solve a highly sophisticated task using long, complex prompts, which exceed the capabilities of small open-weight LLMs. Motivated by this observation, we present Orchestra, a multi-agent approach that unlocks the potential of accessible LLMs for high-quality, cost-effective TQA. Orchestra coordinates a group of LLM agents, each responsible for a relatively simple task, through a structured, layered workflow to solve complex TQA problems -- akin to an orchestra. By reducing the prompt complexity faced by each agent, Orchestra significantly improves output reliability. We implement Orchestra on top of AgentScope, an open-source multi-agent framework, and evaluate it on multiple TQA benchmarks using a wide range of open-weight LLMs. Experimental results show that Orchestra achieves strong performance even with small- to medium-sized models. For example, with Qwen2.5-14B, Orchestra reaches 72.1% accuracy on WikiTQ, approaching the best prior result of 75.3% achieved with GPT-4; with larger Qwen, Llama, or DeepSeek models, Orchestra outperforms all prior methods and establishes new state-of-the-art results across all benchmarks.
https://arxiv.org/abs/2601.03137
Academic Papers
svg
aa19daeed21ce3ca8572edb0c587aa3c28c7b349c6a236e8070106ab2a957f10
2026-01-07T00:00:00-05:00
Time-Varying Kinematics Control for Magnetically-Actuated Satellite Swarm without Additional Actuator
arXiv:2601.03143v1 Announce Type: new Abstract: Electromagnetic Formation Flight is a technology that uses electromagnetic forces and torques to control multiple satellites without conventional fuel-based propulsion. In this paper, the controllability of the system is discussed based on the conservation of the entire system's angular momentum, which constitutes a nonholonomic constraint. This paper designs a new controller for multiple satellites without an additional attitude actuator.
https://arxiv.org/abs/2601.03143
Academic Papers
svg
bce9c5ef0896c3203b399a75617921b726496c39716e78fa4b39384b8546e2c4
2026-01-07T00:00:00-05:00
Self-Verification is All You Need To Pass The Japanese Bar Examination
arXiv:2601.03144v1 Announce Type: new Abstract: Despite rapid advances in large language models (LLMs), achieving reliable performance on highly professional and structured examinations remains a significant challenge. The Japanese bar examination is a particularly demanding benchmark, requiring not only advanced legal reasoning but also strict adherence to complex answer formats that involve joint evaluation of multiple propositions. While recent studies have reported improvements by decomposing such questions into simpler true--false judgments, these approaches have not been systematically evaluated under the original exam format and scoring scheme, leaving open the question of whether they truly capture exam-level competence. In this paper, we present a self-verification model trained on a newly constructed dataset that faithfully replicates the authentic format and evaluation scale of the exam. Our model is able to exceed the official passing score when evaluated on the actual exam scale, marking the first demonstration, to our knowledge, of an LLM passing the Japanese bar examination without altering its original question structure or scoring rules. We further conduct extensive comparisons with alternative strategies, including multi-agent inference and decomposition-based supervision, and find that these methods fail to achieve comparable performance. Our results highlight the importance of format-faithful supervision and consistency verification, and suggest that carefully designed single-model approaches can outperform more complex systems in high-stakes professional reasoning tasks. Our dataset and codes are publicly available.
https://arxiv.org/abs/2601.03144
Academic Papers
svg
2d87ab8f0dadb8a91b0cf6d6bc19f91233a329d47811fdc854474e4b3ccb1a23
2026-01-07T00:00:00-05:00
PersonaLedger: Generating Realistic Financial Transactions with Persona Conditioned LLMs and Rule Grounded Feedback
arXiv:2601.03149v1 Announce Type: new Abstract: Strict privacy regulations limit access to real transaction data, slowing open research in financial AI. Synthetic data can bridge this gap, but existing generators do not jointly achieve behavioral diversity and logical groundedness. Rule-driven simulators rely on hand-crafted workflows and shallow stochasticity, which miss the richness of human behavior. Learning-based generators such as GANs capture correlations yet often violate hard financial constraints and still require training on private data. We introduce PersonaLedger, a generation engine that uses a large language model conditioned on rich user personas to produce diverse transaction streams, coupled with an expert-configurable programmatic engine that maintains correctness. The LLM and engine interact in a closed loop: after each event, the engine updates the user state, enforces financial rules, and returns a context-aware "nextprompt" that guides the LLM toward feasible next actions. With this engine, we create a public dataset of 30 million transactions from 23,000 users and a benchmark suite with two tasks, illiquidity classification and identity theft segmentation. PersonaLedger offers the community a rich, realistic, and privacy-preserving resource -- complete with code, rules, and generation logs -- that supports rigorous, reproducible evaluation of forecasting and anomaly detection models and accelerates innovation in financial AI.
https://arxiv.org/abs/2601.03149
Academic Papers
svg
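The closed loop described in the PersonaLedger abstract above lends itself to a compact sketch. The snippet below is an illustrative reading, not the released engine: the LLM call is stubbed out, and all names, rules, and constants are assumptions.

```python
# Illustrative sketch of a rule-grounded generation loop: an LLM proposes a
# transaction, a rule engine validates it against the user's state, and a
# context-aware "nextprompt" steers the next generation step.
from dataclasses import dataclass, field

@dataclass
class UserState:
    balance: float = 1000.0
    history: list = field(default_factory=list)

def propose_transaction(prompt: str) -> dict:
    """Stand-in for the persona-conditioned LLM call (ignores the prompt)."""
    return {"type": "debit", "amount": 42.0, "merchant": "grocery"}

def rule_engine(state: UserState, tx: dict) -> tuple[bool, str]:
    """Enforce a hard financial constraint; return (ok, nextprompt)."""
    if tx["type"] == "debit" and tx["amount"] > state.balance:
        return False, f"Balance is {state.balance:.2f}; propose a smaller debit or a credit."
    return True, f"Balance is now {state.balance - tx['amount']:.2f}; propose the next event."

state = UserState()
prompt = "Persona: frugal student. Propose the first transaction."
for _ in range(3):
    tx = propose_transaction(prompt)
    ok, prompt = rule_engine(state, tx)   # nextprompt feeds the next LLM call
    if ok:
        state.balance -= tx["amount"]
        state.history.append(tx)
```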
22083ec4a596cef1bfd6e6d760ecc4428fd040133e20041c0ae967e5d83effd6
2026-01-07T00:00:00-05:00
Conditioning Aircraft Trajectory Prediction on Meteorological Data with a Physics-Informed Machine Learning Approach
arXiv:2601.03152v1 Announce Type: new Abstract: Accurate aircraft trajectory prediction (TP) in air traffic management systems is confounded by a number of epistemic uncertainties, dominated by uncertain meteorological conditions and operator-specific procedures. Handling this uncertainty necessitates the use of probabilistic, machine-learned models for generating trajectories. However, the trustworthiness of such models is limited if generated trajectories are not physically plausible. For this reason, we propose a physics-informed approach in which aircraft thrust and airspeed are learned from data and are used to condition the existing Base of Aircraft Data (BADA) model, which is physics-based and enforces energy-based constraints on generated trajectories. A set of informative features is identified and used to condition a probabilistic model of aircraft thrust and airspeed, with the proposed scheme demonstrating a 20% improvement in skilfulness across a set of six metrics, compared against a baseline probabilistic model that ignores contextual information such as meteorological conditions.
https://arxiv.org/abs/2601.03152
Academic Papers
svg
3e6c5054e94e4f0095eb3b497255eb43214fa2c15b87127afb80b67d3d894b31
2026-01-07T00:00:00-05:00
Parallel Latent Reasoning for Sequential Recommendation
arXiv:2601.03153v1 Announce Type: new Abstract: Capturing complex user preferences from sparse behavioral sequences remains a fundamental challenge in sequential recommendation. Recent latent reasoning methods have shown promise by extending test-time computation through multi-step reasoning, yet they exclusively rely on depth-level scaling along a single trajectory, suffering from diminishing returns as reasoning depth increases. To address this limitation, we propose \textbf{Parallel Latent Reasoning (PLR)}, a novel framework that pioneers width-level computational scaling by exploring multiple diverse reasoning trajectories simultaneously. PLR constructs parallel reasoning streams through learnable trigger tokens in continuous latent space, preserves diversity across streams via global reasoning regularization, and adaptively synthesizes multi-stream outputs through mixture-of-reasoning-streams aggregation. Extensive experiments on three real-world datasets demonstrate that PLR substantially outperforms state-of-the-art baselines while maintaining real-time inference efficiency. Theoretical analysis further validates the effectiveness of parallel reasoning in improving generalization capability. Our work opens new avenues for enhancing reasoning capacity in sequential recommendation beyond existing depth scaling.
https://arxiv.org/abs/2601.03153
Academic Papers
svg
9d2c431671e735ec221dfbc71cd4e12119ab7666895933791820b974af9d0775
2026-01-07T00:00:00-05:00
Decoupling the Effect of Chain-of-Thought Reasoning: A Human Label Variation Perspective
arXiv:2601.03154v1 Announce Type: new Abstract: Reasoning-tuned LLMs utilizing long Chain-of-Thought (CoT) excel at single-answer tasks, yet their ability to model Human Label Variation--which requires capturing probabilistic ambiguity rather than resolving it--remains underexplored. We investigate this through systematic disentanglement experiments on distribution-based tasks, employing Cross-CoT experiments to isolate the effect of reasoning text from intrinsic model priors. We observe a distinct "decoupled mechanism": while CoT improves distributional alignment, final accuracy is dictated by CoT content (99% variance contribution), whereas distributional ranking is governed by model priors (over 80%). Step-wise analysis further shows that while CoT's influence on accuracy grows monotonically during the reasoning process, distributional structure is largely determined by the LLM's intrinsic priors. These findings suggest that long CoT serves as a decisive LLM decision-maker for the top option but fails to function as a granular distribution calibrator for ambiguous tasks.
https://arxiv.org/abs/2601.03154
Academic Papers
svg
c75f64cd6302bc5286d58ce837f74fba7a9f7b298b0374b407a1a461266a5e9a
2026-01-07T00:00:00-05:00
Prompt-Counterfactual Explanations for Generative AI System Behavior
arXiv:2601.03156v1 Announce Type: new Abstract: As generative AI systems become integrated into real-world applications, organizations increasingly need to be able to understand and interpret their behavior. In particular, decision-makers need to understand what causes generative AI systems to exhibit specific output characteristics. Within this general topic, this paper examines a key question: what is it about the input (the prompt) that causes an LLM-based generative AI system to produce output that exhibits specific characteristics, such as toxicity, negative sentiment, or political bias? To examine this question, we adapt a common technique from the Explainable AI literature: counterfactual explanations. We explain why traditional counterfactual explanations cannot be applied directly to generative AI systems, due to several differences in how generative AI systems function. We then propose a flexible framework that adapts counterfactual explanations to non-deterministic, generative AI systems in scenarios where downstream classifiers can reveal key characteristics of their outputs. Based on this framework, we introduce an algorithm for generating prompt-counterfactual explanations (PCEs). Finally, we demonstrate the production of counterfactual explanations for generative AI systems with three case studies, examining different output characteristics (viz., political leaning, toxicity, and sentiment). The case studies further show that PCEs can streamline prompt engineering to suppress undesirable output characteristics and can enhance red-teaming efforts to uncover additional prompts that elicit undesirable outputs. Ultimately, this work lays a foundation for prompt-focused interpretability in generative AI: a capability that will become indispensable as these models are entrusted with higher-stakes tasks and subject to emerging regulatory requirements for transparency and accountability.
https://arxiv.org/abs/2601.03156
Academic Papers
svg
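To make the prompt-counterfactual idea above concrete, here is a minimal hypothetical sketch: perturb the prompt, sample several generations to account for non-determinism, and accept an edit when the downstream classifier's rate of the target characteristic drops. `generate`, `is_toxic`, and the edit list are stand-ins, not the paper's algorithm.

```python
# Hypothetical PCE-style search: find a minimal prompt edit that lowers the
# rate at which a downstream classifier flags the system's outputs.
def generate(prompt: str) -> str:          # stand-in for the generative system
    return prompt + " ... output"

def is_toxic(text: str) -> bool:           # stand-in downstream classifier
    return "rant" in text

def characteristic_rate(prompt: str, n: int = 8) -> float:
    """Sample n generations; the system is treated as non-deterministic."""
    return sum(is_toxic(generate(prompt)) for _ in range(n)) / n

def prompt_counterfactual(prompt: str, edits, threshold: float = 0.1):
    base = characteristic_rate(prompt)
    for edit in edits:                      # candidate prompt-level edits
        candidate = edit(prompt)
        if characteristic_rate(candidate) <= threshold < base:
            return candidate                # edit that flips the behavior
    return None

edits = [lambda p: p.replace("rant", "note")]
print(prompt_counterfactual("write a rant about traffic", edits))
```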
8b18752d9d83d31c17720c99559b1775343224932624e87de4ee94ff3cf5db4d
2026-01-07T00:00:00-05:00
Rapid Augmentations for Time Series (RATS): A High-Performance Library for Time Series Augmentation
arXiv:2601.03159v1 Announce Type: new Abstract: Time series augmentation is critical for training robust deep learning models, particularly in domains where labelled data is scarce and expensive to obtain. However, existing augmentation libraries for time series, mainly written in Python, suffer from performance bottlenecks, where running time grows exponentially as dataset sizes increase -- an aspect limiting their applicability in large-scale, production-grade systems. We introduce RATS (Rapid Augmentations for Time Series), a high-performance library for time series augmentation written in Rust with Python bindings (RATSpy). RATS implements multiple augmentation methods spanning basic transformations, frequency-domain operations and time warping techniques, all accessible through a unified pipeline interface with built-in parallelisation. Comprehensive benchmarking of RATSpy versus a commonly used library (tsaug) on 143 datasets demonstrates that RATSpy achieves an average speedup of 74.5\% over tsaug (up to 94.8\% on large datasets), with up to 47.9\% less peak memory usage.
https://arxiv.org/abs/2601.03159
Academic Papers
svg
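The RATS abstract above does not show RATSpy's API, so the sketch below implements, in plain NumPy, two of the basic transformations such augmentation libraries typically provide (additive jitter and a simple time warp); the function names and parameters are our own, not RATSpy's.

```python
# Plain-NumPy sketches of two common time-series augmentations.
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add i.i.d. Gaussian noise to a 1-D series."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def time_warp(x: np.ndarray, strength: float = 0.2) -> np.ndarray:
    """Resample the series at smoothly perturbed time positions."""
    t = np.arange(len(x), dtype=float)
    bump = strength * len(x) * np.sin(np.linspace(0, np.pi, len(x)))
    src = np.clip(t + bump * np.random.uniform(-1, 1), 0, len(x) - 1)
    return np.interp(src, t, x)

series = np.sin(np.linspace(0, 4 * np.pi, 200))
augmented = jitter(time_warp(series))
```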
221bdcf0814924f3cd01c26db077c0c1e30dd7eb56036a645632c6692766f3ca
2026-01-07T00:00:00-05:00
Stability, convergence, and geometric properties of second-order-in-time space-time discretizations for linear and semilinear wave equations
arXiv:2601.03160v1 Announce Type: new Abstract: We revisit second-order-in-time space-time discretizations of the linear and semilinear wave equations by establishing precise equivalences with first-order-in-time formulations. Focusing on schemes using continuous piecewise-polynomial trial functions in time, we analyze their stability, convergence, and geometric properties. We consider first a weak space-time formulation with test functions projected onto discontinuous polynomials of one degree lower in time, showing that it is equivalent to the scheme proposed in [French, Peterson 1996] in the linear case, and extended in [Karakashian, Makridakis 2005] to the semilinear case. In particular, this equivalence shows that this method conserves energy at mesh nodes but is not symplectic. We then introduce two symplectic variants, obtained through Gauss-Legendre and Gauss-Lobatto quadratures in time, and show that they correspond to specific Runge-Kutta time integrators. These connections clarify the geometric structure of the space-time methods considered.
https://arxiv.org/abs/2601.03160
Academic Papers
svg
1562cdc02174bedd5a64b5d1ac058201fd235190023dd5f0169a238e595906d7
2026-01-07T00:00:00-05:00
On the Convergence Behavior of Preconditioned Gradient Descent Toward the Rich Learning Regime
arXiv:2601.03162v1 Announce Type: new Abstract: Spectral bias, the tendency of neural networks to learn low frequencies first, can be both a blessing and a curse. While it enhances the generalization capabilities by suppressing high-frequency noise, it can be a limitation in scientific tasks that require capturing fine-scale structures. The delayed generalization phenomenon known as grokking is another barrier to rapid training of neural networks. Grokking has been hypothesized to arise as learning transitions from the NTK to the feature-rich regime. This paper explores the impact of preconditioned gradient descent (PGD), such as Gauss-Newton, on spectral bias and grokking phenomena. We demonstrate through theoretical and empirical results how PGD can mitigate issues associated with spectral bias. Additionally, building on the rich learning regime grokking hypothesis, we study how PGD can be used to reduce delays associated with grokking. Our conjecture is that PGD, without the impediment of spectral bias, enables uniform exploration of the parameter space in the NTK regime. Our experimental results confirm this prediction, providing strong evidence that grokking represents a transitional behavior between the lazy regime characterized by the NTK and the rich regime. These findings deepen our understanding of the interplay between optimization dynamics, spectral bias, and the phases of neural network learning.
https://arxiv.org/abs/2601.03162
Academic Papers
svg
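As a concrete instance of the preconditioned gradient descent studied in the abstract above, here is a damped Gauss-Newton step for a least-squares residual, sketched on a toy linear problem; this is generic PGD, not the paper's experimental setup.

```python
# Damped Gauss-Newton: precondition the gradient J^T r with (J^T J + lam*I)^-1.
import numpy as np

def gauss_newton_step(theta, residual_fn, jacobian_fn, damping=1e-3):
    r = residual_fn(theta)                       # (m,) residuals
    J = jacobian_fn(theta)                       # (m, n) Jacobian of r
    H = J.T @ J + damping * np.eye(J.shape[1])   # damped Gauss-Newton matrix
    delta = np.linalg.solve(H, J.T @ r)
    return theta - delta

# Toy linear least squares: residuals A @ theta - y.
A = np.random.randn(50, 3)
y = A @ np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
for _ in range(5):
    theta = gauss_newton_step(theta, lambda th: A @ th - y, lambda th: A)
print(theta)  # approaches [1, -2, 0.5]
```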
38c1625620f8f8b88b25b501f4ddda92d45a01aebfdf709841b4b3c337da198e
2026-01-07T00:00:00-05:00
LSP-DETR: Efficient and Scalable Nuclei Segmentation in Whole Slide Images
arXiv:2601.03163v1 Announce Type: new Abstract: Precise and scalable instance segmentation of cell nuclei is essential for computational pathology, yet gigapixel Whole-Slide Images pose major computational challenges. Existing approaches rely on patch-based processing and costly post-processing for instance separation, sacrificing context and efficiency. We introduce LSP-DETR (Local Star Polygon DEtection TRansformer), a fully end-to-end framework that uses a lightweight transformer with linear complexity to process substantially larger images without additional computational cost. Nuclei are represented as star-convex polygons, and a novel radial distance loss function allows the segmentation of overlapping nuclei to emerge naturally, without requiring explicit overlap annotations or handcrafted post-processing. Evaluations on PanNuke and MoNuSeg show strong generalization across tissues and state-of-the-art efficiency, with LSP-DETR being over five times faster than the next-fastest leading method. Code and models are available at https://github.com/RationAI/lsp-detr.
https://arxiv.org/abs/2601.03163
Academic Papers
svg
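The star-convex polygon representation mentioned above decodes straightforwardly from per-ray radial distances; a minimal sketch follows, where the number of rays and the parameterization are assumptions rather than LSP-DETR's exact choices.

```python
# Decode a star-convex polygon from K radial distances around a center point.
import numpy as np

def star_polygon(center: np.ndarray, radii: np.ndarray) -> np.ndarray:
    """Return (K, 2) vertex coordinates for K rays at equal angles."""
    k = len(radii)
    angles = 2 * np.pi * np.arange(k) / k
    rays = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return center + radii[:, None] * rays

# 32 equally spaced vertices on a circle of radius 10 around (64, 64):
vertices = star_polygon(np.array([64.0, 64.0]), np.full(32, 10.0))
```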
8cc4c72b9d3637fda8a0679499c39f8ef6ca1c8acde4a9e66ec555f470f75559
2026-01-07T00:00:00-05:00
WebAnchor: Anchoring Agent Planning to Stabilize Long-Horizon Web Reasoning
arXiv:2601.03164v1 Announce Type: new Abstract: Large Language Model (LLM)-based agents have shown strong capabilities in web information seeking, with reinforcement learning (RL) becoming a key optimization paradigm. However, planning remains a bottleneck, as existing methods struggle with long-horizon strategies. Our analysis reveals a critical phenomenon, plan anchor, where the first reasoning step disproportionately impacts downstream behavior in long-horizon web reasoning tasks. Current RL algorithms fail to account for this, uniformly distributing rewards across the trajectory. To address this, we propose Anchor-GRPO, a two-stage RL framework that decouples planning and execution. In Stage 1, the agent optimizes its first-step planning using fine-grained rubrics derived from self-play experiences and human calibration. In Stage 2, execution is aligned with the initial plan through sparse rewards, ensuring stable and efficient tool usage. We evaluate Anchor-GRPO on four benchmarks: BrowseComp, BrowseComp-Zh, GAIA, and XBench-DeepSearch. Across models from 3B to 30B, Anchor-GRPO outperforms baseline GRPO and First-step GRPO, improving task success and tool efficiency. Notably, WebAnchor-30B achieves 46.0% pass@1 on BrowseComp and 76.4% on GAIA. Anchor-GRPO also demonstrates strong scalability, achieving higher accuracy as model size and context length increase.
https://arxiv.org/abs/2601.03164
Academic Papers
svg
eb980f1392d9fc691bd4899f1e1f1b0575fac86095cda0e270814197c3f26d3a
2026-01-07T00:00:00-05:00
On the Euclidean duals of the cyclic codes generated by cyclotomic polynomials
arXiv:2601.03165v1 Announce Type: new Abstract: In this article, we determine the minimum distance of the Euclidean dual of the cyclic code $\mathcal{C}_n$ generated by the $n$th cyclotomic polynomial $Q_n(x)$ over $\mathbb{F}_q$, for every positive integer $n$ co-prime to $q$. In particular, we prove that the minimum distance of $\mathcal{C}_{n}^{\perp}$ is a function of $n$, namely $2^{\omega(n)}$. This was precisely the conjecture posed by us in \cite{BHAGAT2025}.
https://arxiv.org/abs/2601.03165
Academic Papers
svg
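Applying the abstract's stated formula to a small case makes it concrete (assuming the result exactly as stated; we have not re-derived it):

```latex
% Worked instance: \omega(n) counts the distinct prime factors of n.
\[
  n = 15 = 3 \cdot 5 \quad\Rightarrow\quad \omega(15) = 2
  \quad\Rightarrow\quad d\!\left(\mathcal{C}_{15}^{\perp}\right) = 2^{2} = 4 .
\]
```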
e7c03eba70fc526f200547407f3bcd3ec75746218070dafce91839397c43a93b
2026-01-07T00:00:00-05:00
Dynamic Hyperparameter Importance for Efficient Multi-Objective Optimization
arXiv:2601.03166v1 Announce Type: new Abstract: Choosing a suitable ML model is a complex task that can depend on several objectives, e.g., accuracy, model size, fairness, inference time, or energy consumption. In practice, this requires trading off multiple, often competing, objectives through multi-objective optimization (MOO). However, existing MOO methods typically treat all hyperparameters as equally important, overlooking that hyperparameter importance (HPI) can vary significantly depending on the trade-off between objectives. We propose a novel dynamic optimization approach that prioritizes the most influential hyperparameters based on varying objective trade-offs during the search process, which accelerates empirical convergence and leads to better solutions. Building on prior work on HPI for MOO post-analysis, we now integrate HPI, calculated with HyperSHAP, into the optimization. For this, we leverage the objective weightings naturally produced by the MOO algorithm ParEGO and adapt the configuration space by fixing the unimportant hyperparameters, allowing the search to focus on the important ones. Eventually, we validate our method with diverse tasks from PyMOO and YAHPO-Gym. Empirical results demonstrate improvements in convergence speed and Pareto front quality compared to baselines.
https://arxiv.org/abs/2601.03166
Academic Papers
svg
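ParEGO, which the abstract above builds on, scalarizes the objectives with randomly drawn weights via the augmented Tchebycheff function; those weights are the objective weightings the method reuses when gauging hyperparameter importance. A minimal sketch of the standard scalarization (not the paper's code):

```python
# Standard ParEGO scalarization: max-weighted term plus a small linear term.
import numpy as np

def augmented_tchebycheff(objectives: np.ndarray, weights: np.ndarray,
                          rho: float = 0.05) -> float:
    """objectives: (k,) normalized objective values; weights: (k,) on the simplex."""
    weighted = weights * objectives
    return float(weighted.max() + rho * weighted.sum())

rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(2))        # a random trade-off, redrawn per iteration
print(augmented_tchebycheff(np.array([0.3, 0.7]), w))
```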
0ddaa8de2afc6113fae79128b8f66701cbc870fe8500ed547744b965e8fbafcf
2026-01-07T00:00:00-05:00
Can Embedding Similarity Predict Cross-Lingual Transfer? A Systematic Study on African Languages
arXiv:2601.03168v1 Announce Type: new Abstract: Cross-lingual transfer is essential for building NLP systems for low-resource African languages, but practitioners lack reliable methods for selecting source languages. We systematically evaluate five embedding similarity metrics across 816 transfer experiments spanning three NLP tasks, three African-centric multilingual models, and 12 languages from four language families. We find that cosine gap and retrieval-based metrics (P@1, CSLS) reliably predict transfer success ($\rho = 0.4-0.6$), while CKA shows negligible predictive power ($\rho \approx 0.1$). Critically, correlation signs reverse when pooling across models (Simpson's Paradox), so practitioners must validate per-model. Embedding metrics achieve comparable predictive power to URIEL linguistic typology. Our results provide concrete guidance for source language selection and highlight the importance of model-specific analysis.
https://arxiv.org/abs/2601.03168
Academic Papers
svg
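Of the metrics evaluated above, CSLS is the least self-explanatory: it corrects cosine similarity for hubness by subtracting each point's mean similarity to its k nearest cross-lingual neighbors. A NumPy sketch of the standard definition (the paper's exact configuration may differ):

```python
# CSLS(x, y) = 2*cos(x, y) - r(x) - r(y), with r the mean similarity to the
# k nearest neighbors in the other embedding space.
import numpy as np

def csls_matrix(X: np.ndarray, Y: np.ndarray, k: int = 10) -> np.ndarray:
    """X: (n, d), Y: (m, d), rows L2-normalized. Returns (n, m) CSLS scores."""
    sims = X @ Y.T
    r_x = np.sort(sims, axis=1)[:, -k:].mean(axis=1)   # mean sim to k-NN in Y
    r_y = np.sort(sims, axis=0)[-k:, :].mean(axis=0)   # mean sim to k-NN in X
    return 2 * sims - r_x[:, None] - r_y[None, :]

# With row-aligned bilingual embeddings, P@1 is the fraction of rows whose
# CSLS argmax hits their own index (random data here, so purely illustrative):
X = np.random.randn(50, 64); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = np.random.randn(50, 64); Y /= np.linalg.norm(Y, axis=1, keepdims=True)
p_at_1 = (csls_matrix(X, Y).argmax(axis=1) == np.arange(50)).mean()
```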
260cea035e255443ccbb50b6ac8a8a114f51ed234e6af37b246aa333637d138b
2026-01-07T00:00:00-05:00
Segment-Aware Conditioning for Training-Free Intra-Utterance Emotion and Duration Control in Text-to-Speech
arXiv:2601.03170v1 Announce Type: new Abstract: While controllable Text-to-Speech (TTS) has achieved notable progress, most existing methods remain limited to inter-utterance-level control, making fine-grained intra-utterance expression challenging due to their reliance on non-public datasets or complex multi-stage training. In this paper, we propose a training-free controllable framework for pretrained zero-shot TTS to enable intra-utterance emotion and duration expression. Specifically, we propose a segment-aware emotion conditioning strategy that combines causal masking with monotonic stream alignment filtering to isolate emotion conditioning and schedule mask transitions, enabling smooth intra-utterance emotion shifts while preserving global semantic coherence. Based on this, we further propose a segment-aware duration steering strategy to combine local duration embedding steering with global EOS logit modulation, allowing local duration adjustment while ensuring globally consistent termination. To eliminate the need for segment-level manual prompt engineering, we construct a 30,000-sample multi-emotion and duration-annotated text dataset to enable LLM-based automatic prompt construction. Extensive experiments demonstrate that our training-free method not only achieves state-of-the-art intra-utterance consistency in multi-emotion and duration control, but also maintains baseline-level speech quality of the underlying TTS model. Audio samples are available at https://aclanonymous111.github.io/TED-TTS-DemoPage/.
https://arxiv.org/abs/2601.03170
Academic Papers
svg
9d7beb2b014403fd812add6d4d2b0f27b1c8a5c7d36fa800e635bf232504eeac
2026-01-07T00:00:00-05:00
Eco-WakeLoc: An Energy-Neutral and Cooperative UWB Real-Time Locating System
arXiv:2601.03171v1 Announce Type: new Abstract: Indoor localization systems face a fundamental trade-off between efficiency and responsiveness, which is especially important for emerging use cases such as mobile robots operating in GPS-denied environments. Traditional RTLS either require continuously powered infrastructure, limiting their scalability, or are limited by their responsiveness. This work presents Eco-WakeLoc, designed to achieve centimeter-level UWB localization while remaining energy-neutral by combining ultra-low power wake-up radios (WuRs) with solar energy harvesting. By activating anchor nodes only on demand, the proposed system eliminates constant energy consumption while achieving centimeter-level positioning accuracy. To reduce coordination overhead and improve scalability, Eco-WakeLoc employs cooperative localization where active tags initiate ranging exchanges (trilateration), while passive tags opportunistically reuse these messages for TDOA positioning. An additive-increase/multiplicative-decrease (AIMD)-based energy-aware scheduler adapts localization rates according to the harvested energy, thereby maximizing the overall performance of the sensor network while ensuring long-term energy neutrality. The measured energy consumption is only 3.22 mJ per localization for active tags, 951 µJ for passive tags, and 353 µJ for anchors. Real-world deployment on a quadruped robot with nine anchors confirms the practical feasibility, achieving an average accuracy of 43 cm in dynamic indoor environments. Year-long simulations show that tags achieve an average of 2031 localizations per day, retaining over 7% battery capacity after one year -- demonstrating that the RTLS achieves sustained energy-neutral operation. Eco-WakeLoc demonstrates that high-accuracy indoor localization can be achieved at scale without continuous infrastructure operation, combining energy neutrality, cooperative positioning, and adaptive scheduling.
https://arxiv.org/abs/2601.03171
Academic Papers
svg
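The AIMD scheduler mentioned above follows the classic additive-increase/multiplicative-decrease pattern; a minimal sketch with illustrative constants (not the paper's parameters):

```python
# AIMD rate adaptation: probe upward additively while the energy budget is
# positive, back off multiplicatively when harvested energy falls short.
def aimd_rate(rate_hz: float, harvested_mj: float, spent_mj: float,
              add: float = 0.1, mult: float = 0.5,
              min_hz: float = 0.01, max_hz: float = 10.0) -> float:
    if harvested_mj >= spent_mj:              # energy-neutral: increase rate
        return min(rate_hz + add, max_hz)
    return max(rate_hz * mult, min_hz)        # energy deficit: back off

rate = 1.0
for harvested, spent in [(5.0, 3.2), (5.0, 3.2), (1.0, 3.2)]:
    rate = aimd_rate(rate, harvested, spent)
```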
1a4599bdf383dd9ed60e208c644a95cb9d88e585202995afe12ccb7d8ea0af1c
2026-01-07T00:00:00-05:00
Predicting Time Pressure of Powered Two-Wheeler Riders for Proactive Safety Interventions
arXiv:2601.03173v1 Announce Type: new Abstract: Time pressure critically influences risky maneuvers and crash proneness among powered two-wheeler riders, yet its prediction remains underexplored in intelligent transportation systems. We present a large-scale dataset of 129,000+ labeled multivariate time-series sequences from 153 rides by 51 participants under No, Low, and High Time Pressure conditions. Each sequence captures 63 features spanning vehicle kinematics, control inputs, behavioral violations, and environmental context. Our empirical analysis shows High Time Pressure induces 48% higher speeds, 36.4% greater speed variability, 58% more risky turns at intersections, 36% more sudden braking, and 50% higher rear brake forces versus No Time Pressure. To benchmark this dataset, we propose MotoTimePressure, a deep learning model combining convolutional preprocessing, dual-stage temporal attention, and Squeeze-and-Excitation feature recalibration, achieving 91.53% accuracy and 98.93% ROC AUC, outperforming eight baselines. Since time pressure cannot be directly measured in real time, we demonstrate its utility in collision prediction and threshold determination. Using MTPS-predicted time pressure as features, improves Informer-based collision risk accuracy from 91.25% to 93.51%, approaching oracle performance (93.72%). Thresholded time pressure states capture rider cognitive stress and enable proactive ITS interventions, including adaptive alerts, haptic feedback, V2I signaling, and speed guidance, supporting safer two-wheeler mobility under the Safe System Approach.
https://arxiv.org/abs/2601.03173
Academic Papers
svg
113dac47a04888a13e352bfb3bcaa76f689c4b8fa6e1a4e3d83337d1509e1768
2026-01-07T00:00:00-05:00
DiffBench Meets DiffAgent: End-to-End LLM-Driven Diffusion Acceleration Code Generation
arXiv:2601.03178v1 Announce Type: new Abstract: Diffusion models have achieved remarkable success in image and video generation. However, their inherently multi-step inference process imposes substantial computational overhead, hindering real-world deployment. Accelerating diffusion models is therefore essential, yet determining how to combine multiple model acceleration techniques remains a significant challenge. To address this issue, we introduce a framework driven by large language models (LLMs) for automated acceleration code generation and evaluation. First, we present DiffBench, a comprehensive benchmark that implements a three-stage automated evaluation pipeline across diverse diffusion architectures, optimization combinations, and deployment scenarios. Second, we propose DiffAgent, an agent that generates optimal acceleration strategies and codes for arbitrary diffusion models. DiffAgent employs a closed-loop workflow in which a planning component and a debugging component iteratively refine the output of a code generation component, while a genetic algorithm extracts performance feedback from the execution environment to guide subsequent code refinements. We provide a detailed explanation of the DiffBench construction and the design principles underlying DiffAgent. Extensive experiments show that DiffBench offers a thorough evaluation of generated codes and that DiffAgent significantly outperforms existing LLMs in producing effective diffusion acceleration strategies.
https://arxiv.org/abs/2601.03178
Academic Papers
svg
f3554c449dd4672704225fc84712978dfab4b143ad7ef019e705faebde333cc8
2026-01-07T00:00:00-05:00
Multi-Modal Data-Enhanced Foundation Models for Prediction and Control in Wireless Networks: A Survey
arXiv:2601.03181v1 Announce Type: new Abstract: Foundation models (FMs) are recognized as a transformative breakthrough that has started to reshape the future of artificial intelligence (AI) across both academia and industry. The integration of FMs into wireless networks is expected to enable the development of general-purpose AI agents capable of handling diverse network management requests and highly complex wireless-related tasks involving multi-modal data. Inspired by these ideas, this work discusses the utilization of FMs, especially multi-modal FMs in wireless networks. We focus on two important types of tasks in wireless network management: prediction tasks and control tasks. In particular, we first discuss FMs-enabled multi-modal contextual information understanding in wireless networks. Then, we explain how FMs can be applied to prediction and control tasks, respectively. Following this, we introduce the development of wireless-specific FMs from two perspectives: available datasets for development and the methodologies used. Finally, we conclude with a discussion of the challenges and future directions for FM-enhanced wireless networks.
https://arxiv.org/abs/2601.03181
Academic Papers
svg
6b46b84a530f7a24db5ba6cc86e039f28447424157a7045e29c103b2d9903b7a
2026-01-07T00:00:00-05:00
Decentralized Autoregressive Generation
arXiv:2601.03184v1 Announce Type: new Abstract: We present a theoretical analysis of decentralization of autoregressive generation. We define the Decentralized Discrete Flow Matching objective by expressing probability generating velocity as a linear combination of expert flows. We also conduct experiments demonstrating the equivalence between decentralized and centralized training settings for multimodal language models across a diverse set of benchmarks. Specifically, we compare two distinct paradigms: LLaVA and InternVL 2.5-1B, which uses a fixed CLIP vision encoder and performs full-parameter fine-tuning (ViT+MLP+LLM) during the instruction tuning stage.
https://arxiv.org/abs/2601.03184
Academic Papers
svg
d626f9e854401a12cf437137a8f000e6ab83e960392f2b08dfbf1df559059b72
2026-01-07T00:00:00-05:00
TaNG: Modeling Packet Classification with TSS-assisted Neural Networks on GPUs
arXiv:2601.03187v1 Announce Type: new Abstract: Packet classification is a core function in software-defined networks, and learning-based methods have recently shown significant throughput gains on large-scale rulesets. However, existing learning-based approaches struggle with overlapping rules, leading to incomplete model coverage or excessive rule replication. Their limited GPU integration also hampers performance with large-scale rulesets. To address these issues, we propose TaNG, which utilizes a single neural network trained on multi-dimensional features to ensure complete coverage without duplicating rules. TaNG employs a semi-structured design that combines the neural network model with a tuple space, reducing model complexity. Furthermore, we develop a mechanism based on the semi-structure for rule updates. Finally, we implement a CPU-GPU hybrid streaming framework tailored for learning-based methods, further enhancing throughput. On our GPU-based classification framework with 512k rulesets, TaNG achieves 12.19x and 9.37x higher throughput and 98.84x and 156.98x higher performance stability compared to two state-of-the-art learning methods NuevoMatch and NeuTree, respectively.
https://arxiv.org/abs/2601.03187
Academic Papers
svg
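For readers unfamiliar with the tuple space that TaNG pairs with its neural model, here is a minimal sketch of classic tuple space search for two-field prefix rules; the field layout and 32-bit addresses are simplifying assumptions.

```python
# Tuple space search: group rules by their (src_len, dst_len) prefix-length
# tuple; each group is a hash table keyed by the masked header bits.
from collections import defaultdict

class TupleSpace:
    def __init__(self):
        self.tables = defaultdict(dict)   # (src_len, dst_len) -> {key: rule}

    @staticmethod
    def _mask(addr: int, plen: int) -> int:
        return addr >> (32 - plen) if plen else 0

    def insert(self, src, src_len, dst, dst_len, rule):
        key = (self._mask(src, src_len), self._mask(dst, dst_len))
        self.tables[(src_len, dst_len)][key] = rule

    def lookup(self, src, dst):
        matches = []
        for (sl, dl), table in self.tables.items():
            key = (self._mask(src, sl), self._mask(dst, dl))
            if key in table:
                matches.append(table[key])
        return matches   # caller picks the highest-priority match
```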
b5b03c2bd7649f78a07758199567acad453d32074318a69519474d2da16de7ee
2026-01-07T00:00:00-05:00
Maximizing Local Entropy Where It Matters: Prefix-Aware Localized LLM Unlearning
arXiv:2601.03190v1 Announce Type: new Abstract: Machine unlearning aims to forget sensitive knowledge from Large Language Models (LLMs) while maintaining general utility. However, existing approaches typically treat all tokens in a response indiscriminately and enforce uncertainty over the entire vocabulary. This global treatment results in unnecessary utility degradation and extends optimization to content-agnostic regions. To address these limitations, we propose PALU (Prefix-Aware Localized Unlearning), a framework driven by a local entropy maximization objective across both temporal and vocabulary dimensions. PALU reveals that (i) suppressing the sensitive prefix alone is sufficient to sever the causal generation link, and (ii) flattening only the top-$k$ logits is adequate to maximize uncertainty in the critical subspace. These findings allow PALU to avoid redundant optimization across the full vocabulary and parameter space while minimizing collateral damage to general model performance. Extensive experiments validate that PALU achieves superior forgetting efficacy and utility preservation compared to state-of-the-art baselines.
https://arxiv.org/abs/2601.03190
Academic Papers
svg
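A sketch of one reading of the top-k flattening objective described above (not the released code): renormalize the top-k logits at the sensitive prefix positions and push them toward uniform, leaving the rest of the vocabulary untouched.

```python
# Flatten only the top-k logits: KL(uniform || p_topk) is minimized exactly
# when the k largest logits are equal, maximizing entropy in that subspace.
import torch
import torch.nn.functional as F

def topk_flatten_loss(logits: torch.Tensor, k: int = 20) -> torch.Tensor:
    """logits: (positions, vocab) at the sensitive prefix tokens."""
    topk = logits.topk(k, dim=-1).values          # (positions, k)
    log_p = F.log_softmax(topk, dim=-1)           # renormalize within top-k
    uniform = torch.full_like(log_p, 1.0 / k)
    return F.kl_div(log_p, uniform, reduction="batchmean")

loss = topk_flatten_loss(torch.randn(4, 32000))
```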
829136a0f76d8737f226842aa6b6641a1d21d925cf455e08cfc5da03f0d0a1f8
2026-01-07T00:00:00-05:00
AnatomiX, an Anatomy-Aware Grounded Multimodal Large Language Model for Chest X-Ray Interpretation
arXiv:2601.03191v1 Announce Type: new Abstract: Multimodal medical large language models have shown impressive progress in chest X-ray interpretation but continue to face challenges in spatial reasoning and anatomical understanding. Although existing grounding techniques improve overall performance, they often fail to establish a true anatomical correspondence, resulting in incorrect anatomical understanding in the medical domain. To address this gap, we introduce AnatomiX, a multitask multimodal large language model explicitly designed for anatomically grounded chest X-ray interpretation. Inspired by the radiological workflow, AnatomiX adopts a two-stage approach: first, it identifies anatomical structures and extracts their features, and then leverages a large language model to perform diverse downstream tasks such as phrase grounding, report generation, visual question answering, and image understanding. Extensive experiments across multiple benchmarks demonstrate that AnatomiX achieves superior anatomical reasoning and delivers over 25% improvement in performance on anatomy grounding, phrase grounding, grounded diagnosis and grounded captioning tasks compared to existing approaches. Code and pretrained model are available at https://github.com/aneesurhashmi/anatomix
https://arxiv.org/abs/2601.03191
Academic Papers
svg
e9f1d546ba584a2dc40c0c7460ae201c219f82b8d907c52256e6e447e196dfb2
2026-01-07T00:00:00-05:00
MemRL: Self-Evolving Agents via Runtime Reinforcement Learning on Episodic Memory
arXiv:2601.03192v1 Announce Type: new Abstract: The hallmark of human intelligence is the ability to master new skills through Constructive Episodic Simulation -- retrieving past experiences to synthesize solutions for novel tasks. While Large Language Models possess strong reasoning capabilities, they struggle to emulate this self-evolution: fine-tuning is computationally expensive and prone to catastrophic forgetting, while existing memory-based methods rely on passive semantic matching that often retrieves noise. To address these challenges, we propose MemRL, a framework that enables agents to self-evolve via non-parametric reinforcement learning on episodic memory. MemRL explicitly separates the stable reasoning of a frozen LLM from the plastic, evolving memory. Unlike traditional methods, MemRL employs a Two-Phase Retrieval mechanism that filters candidates by semantic relevance and then selects them based on learned Q-values (utility). These utilities are continuously refined via environmental feedback in a trial-and-error manner, allowing the agent to distinguish high-value strategies from similar noise. Extensive experiments on HLE, BigCodeBench, ALFWorld, and Lifelong Agent Bench demonstrate that MemRL significantly outperforms state-of-the-art baselines. Our analysis experiments confirm that MemRL effectively reconciles the stability-plasticity dilemma, enabling continuous runtime improvement without weight updates.
https://arxiv.org/abs/2601.03192
Academic Papers
svg
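The Two-Phase Retrieval mechanism above reduces to a short routine: filter episodic memories by semantic similarity, then select by learned Q-value. A minimal sketch with an assumed flat-array memory layout:

```python
# Phase 1: semantic filter; Phase 2: pick the highest-utility candidate.
import numpy as np

def two_phase_retrieve(query_emb: np.ndarray, memory_embs: np.ndarray,
                       q_values: np.ndarray, top_m: int = 10) -> int:
    """query_emb: (d,); memory_embs: (N, d); q_values: (N,). Returns an index."""
    sims = memory_embs @ query_emb                 # semantic relevance
    candidates = np.argsort(sims)[-top_m:]         # keep top_m closest episodes
    return int(candidates[np.argmax(q_values[candidates])])
```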
5e735396554649404015daebafca825b20aef1ada5d3442c694573394c9376a2
2026-01-07T00:00:00-05:00
UniCorn: Towards Self-Improving Unified Multimodal Models through Self-Generated Supervision
arXiv:2601.03193v1 Announce Type: new Abstract: While Unified Multimodal Models (UMMs) have achieved remarkable success in cross-modal comprehension, a significant gap persists in their ability to leverage such internal knowledge for high-quality generation. We formalize this discrepancy as Conduction Aphasia, a phenomenon where models accurately interpret multimodal inputs but struggle to translate that understanding into faithful and controllable synthesis. To address this, we propose UniCorn, a simple yet elegant self-improvement framework that eliminates the need for external data or teacher supervision. By partitioning a single UMM into three collaborative roles: Proposer, Solver, and Judge, UniCorn generates high-quality interactions via self-play and employs cognitive pattern reconstruction to distill latent understanding into explicit generative signals. To validate the restoration of multimodal coherence, we introduce UniCycle, a cycle-consistency benchmark based on a Text to Image to Text reconstruction loop. Extensive experiments demonstrate that UniCorn achieves comprehensive and substantial improvements over the base model across six general image generation benchmarks. Notably, it achieves SOTA performance on TIIF (73.8), DPG (86.8), CompBench (88.5), and UniCycle while further delivering substantial gains of +5.0 on WISE and +6.5 on OneIG. These results highlight that our method significantly enhances T2I generation while maintaining robust comprehension, demonstrating the scalability of fully self-supervised refinement for unified multimodal intelligence.
https://arxiv.org/abs/2601.03193
Academic Papers
svg
1a9db7ea49fd22c6ef79dccb31a519478538c9bbf60d495f248583b19b0a63e8
2026-01-07T00:00:00-05:00
X-MuTeST: A Multilingual Benchmark for Explainable Hate Speech Detection and A Novel LLM-consulted Explanation Framework
arXiv:2601.03194v1 Announce Type: new Abstract: Hate speech detection on social media faces challenges in both accuracy and explainability, especially for underexplored Indic languages. We propose a novel explainability-guided training framework, X-MuTeST (eXplainable Multilingual haTe Speech deTection), for hate speech detection that combines high-level semantic reasoning from large language models (LLMs) with traditional attention-enhancing techniques. We extend this research to Hindi and Telugu alongside English by providing benchmark human-annotated rationales for each word to justify the assigned class label. The X-MuTeST explainability method computes the difference between the prediction probabilities of the original text and those of unigrams, bigrams, and trigrams. Final explanations are computed as the union between LLM explanations and X-MuTeST explanations. We show that leveraging human rationales during training enhances both classification performance and explainability. Moreover, combining human rationales with our explainability method to refine the model attention yields further improvements. We evaluate explainability using Plausibility metrics such as Token-F1 and IOU-F1 and Faithfulness metrics such as Comprehensiveness and Sufficiency. By focusing on under-resourced languages, our work advances hate speech detection across diverse linguistic contexts. Our dataset includes token-level rationale annotations for 6,004 Hindi, 4,492 Telugu, and 6,334 English samples. Data and code are available on https://github.com/ziarehman30/X-MuTeST
https://arxiv.org/abs/2601.03194
Academic Papers
svg
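The plausibility metrics named above have standard definitions over token-level rationale sets; the sketch below uses per-instance simplifications (the paper's aggregation may differ).

```python
# Token-F1: set-level F1 between predicted and human rationale token indices.
# IOU-F1 (per instance): 1 if the intersection-over-union clears a threshold.
def token_f1(pred: set, gold: set) -> float:
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def iou_f1(pred: set, gold: set, threshold: float = 0.5) -> float:
    iou = len(pred & gold) / len(pred | gold) if pred | gold else 0.0
    return 1.0 if iou >= threshold else 0.0

print(token_f1({1, 2, 3}, {2, 3, 4}))   # 0.667
```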
7c96e7c472c342b3682895d0ab2173fb3b6b11ffb1f4ec4df31664f0ca8e7cb3
2026-01-07T00:00:00-05:00
Sparse Knowledge Distillation: A Mathematical Framework for Probability-Domain Temperature Scaling and Multi-Stage Compression
arXiv:2601.03195v1 Announce Type: new Abstract: We develop a unified theoretical framework for sparse knowledge distillation based on probability-domain softening operators. While the equivalence $p^{1/T} \propto \mathrm{softmax}(z/T)$ is well known, our contribution is an operator-level analytical framework built on this foundation rather than the equivalence itself. The framework comprises four core components: (i) operator-agnostic bias--variance decompositions that characterize when sparse students outperform dense teachers, (ii) a homotopy path formalization of multi-stage pruning in function space explaining why iterative compression succeeds where one-shot pruning fails, (iii) convergence guarantees establishing $O(1/n)$ rates for $n$-stage distillation with explicit parameter dependence, and (iv) equivalence class characterizations identifying distinct probability-domain operators that yield identical student models under capacity constraints. We introduce an axiomatic definition of probability-domain softening operators based on ranking preservation, continuity, entropy monotonicity, identity, and boundary behavior, and show that multiple non-equivalent operator families satisfy these axioms. All learning-theoretic guarantees are shown to hold uniformly across this operator class, independent of implementation details. These results provide theoretical grounding for black-box teacher distillation, partial-access settings such as top-$k$ truncation and text-only outputs, and privacy-preserving model compression.
https://arxiv.org/abs/2601.03195
Academic Papers
svg
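The equivalence the framework above builds on is easy to verify numerically: renormalizing $p^{1/T}$ recovers $\mathrm{softmax}(z/T)$.

```python
# Numerical check of p^(1/T) ∝ softmax(z/T), where p = softmax(z).
import numpy as np

z = np.random.randn(10)
T = 2.5
p = np.exp(z) / np.exp(z).sum()               # p = softmax(z)
lhs = p ** (1.0 / T); lhs /= lhs.sum()        # probability-domain softening
rhs = np.exp(z / T) / np.exp(z / T).sum()     # logit-domain temperature
assert np.allclose(lhs, rhs)
```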
47ff6cbf06eb26f107dc88c218b6039a594d93e466b8e91ee68137315d779c88
2026-01-07T00:00:00-05:00
Software-Defined Agentic Serving
arXiv:2601.03197v1 Announce Type: new Abstract: As multi-agent LLM pipelines grow in complexity, existing serving paradigms fail to adapt to dynamic serving conditions. We argue that agentic serving systems should be programmable and system-aware, unlike existing serving systems, which statically encode serving parameters. In this work, we propose a new SDN-inspired agentic serving framework that helps control the key attributes of communication based on runtime state. This architecture enables serving-efficient, responsive agent systems and paves the way for high-level intent-driven agentic serving.
https://arxiv.org/abs/2601.03197
Academic Papers
svg
c00baafd2b7f2d3e88df9557c1f58ecb6e15cfea177e75f8a81c6d4c75151933
2026-01-07T00:00:00-05:00
Empowering Reliable Visual-Centric Instruction Following in MLLMs
arXiv:2601.03198v1 Announce Type: new Abstract: Evaluating the instruction-following (IF) capabilities of Multimodal Large Language Models (MLLMs) is essential for rigorously assessing how faithfully model outputs adhere to user-specified intentions. Nevertheless, existing benchmarks for evaluating MLLMs' instruction-following capability primarily focus on verbal instructions in the textual modality. These limitations hinder a thorough analysis of instruction-following capabilities, as they overlook the implicit constraints embedded in the semantically rich visual modality. To address this gap, we introduce VC-IFEval, a new benchmark accompanied by a systematically constructed dataset that evaluates MLLMs' instruction-following ability under multimodal settings. Our benchmark systematically incorporates vision-dependent constraints into instruction design, enabling a more rigorous and fine-grained assessment of how well MLLMs align their outputs with both visual input and textual instructions. Furthermore, by fine-tuning MLLMs on our dataset, we achieve substantial gains in visual instruction-following accuracy and adherence. Through extensive evaluation across representative MLLMs, we provide new insights into the strengths and limitations of current models.
https://arxiv.org/abs/2601.03198
Academic Papers
svg
7db6a02e216582b6fa3e9981eea8b29d834965c32aa7ac489b4df52666a018d2
2026-01-07T00:00:00-05:00
DIP: Dynamic In-Context Planner For Diffusion Language Models
arXiv:2601.03199v1 Announce Type: new Abstract: Diffusion language models (DLMs) have shown strong potential for general natural language tasks with in-context examples. However, due to the bidirectional attention mechanism, DLMs incur substantial computational cost as context length increases. This work addresses this issue with a key discovery: unlike the sequential generation in autoregressive language models (ARLMs), the diffusion generation paradigm in DLMs allows \textit{efficient dynamic adjustment of the context} during generation. Building on this insight, we propose \textbf{D}ynamic \textbf{I}n-Context \textbf{P}lanner (DIP), a context-optimization method that dynamically selects and inserts in-context examples during generation, rather than providing all examples in the prompt upfront. Results show DIP maintains generation quality while achieving up to 12.9$\times$ inference speedup over standard inference and 1.17$\times$ over KV cache-enhanced inference.
https://arxiv.org/abs/2601.03199
Academic Papers
svg
3c0108ec8d029d13e3ba98df427ff7ec56a943f2218f4f26a432757d9d2dc376
2026-01-07T00:00:00-05:00
A High-Fidelity Digital Twin for Robotic Manipulation Based on 3D Gaussian Splatting
arXiv:2601.03200v1 Announce Type: new Abstract: Developing high-fidelity, interactive digital twins is crucial for enabling closed-loop motion planning and reliable real-world robot execution, which are essential to advancing sim-to-real transfer. However, existing approaches often suffer from slow reconstruction, limited visual fidelity, and difficulties in converting photorealistic models into planning-ready collision geometry. We present a practical framework that constructs high-quality digital twins within minutes from sparse RGB inputs. Our system employs 3D Gaussian Splatting (3DGS) for fast, photorealistic reconstruction as a unified scene representation. We enhance 3DGS with visibility-aware semantic fusion for accurate 3D labelling and introduce an efficient, filter-based geometry conversion method to produce collision-ready models seamlessly integrated with a Unity-ROS2-MoveIt physics engine. In experiments with a Franka Emika Panda robot performing pick-and-place tasks, we demonstrate that this enhanced geometric accuracy effectively supports robust manipulation in real-world trials. These results demonstrate that 3DGS-based digital twins, enriched with semantic and geometric consistency, offer a fast, reliable, and scalable path from perception to manipulation in unstructured environments.
https://arxiv.org/abs/2601.03200
Academic Papers
svg
10cbcf09d541dc29c9c914e3b54f4060b14e411f3f1276bc0d44b75aa1547aa3
2026-01-07T00:00:00-05:00
Recursive querying of neural networks via weighted structures
arXiv:2601.03201v1 Announce Type: new Abstract: Expressive querying of machine learning models - viewed as a form of intentional data - enables their verification and interpretation using declarative languages, thereby making learned representations of data more accessible. Motivated by the querying of feedforward neural networks, we investigate logics for weighted structures. In the absence of a bound on neural network depth, such logics must incorporate recursion; thereto we revisit the functional fixpoint mechanism proposed by Gr\"adel and Gurevich. We adopt it in a Datalog-like syntax; we extend normal forms for fixpoint logics to weighted structures; and show an equivalent "loose" fixpoint mechanism that allows values of inductively defined weight functions to be overwritten. We propose a "scalar" restriction of functional fixpoint logic, of polynomial-time data complexity, and show it can express all PTIME model-agnostic queries over reduced networks with polynomially bounded weights. In contrast, we show that very simple model-agnostic queries are already NP-complete. Finally, we consider transformations of weighted structures by iterated transductions.
https://arxiv.org/abs/2601.03201
Academic Papers
svg
49495ac0ff57c76f7f47462672ab10535d244213023832cee380265bca0bd45a
2026-01-07T00:00:00-05:00
Counterfactual Fairness with Graph Uncertainty
arXiv:2601.03203v1 Announce Type: new Abstract: Evaluating machine learning (ML) model bias is key to building trustworthy and robust ML systems. Counterfactual Fairness (CF) audits allow the measurement of bias of ML models with a causal framework, yet their conclusions rely on a single causal graph that is rarely known with certainty in real-world scenarios. We propose CF with Graph Uncertainty (CF-GU), a bias evaluation procedure that incorporates the uncertainty of specifying a causal graph into CF. CF-GU (i) bootstraps a Causal Discovery algorithm under domain knowledge constraints to produce a bag of plausible Directed Acyclic Graphs (DAGs), (ii) quantifies graph uncertainty with the normalized Shannon entropy, and (iii) provides confidence bounds on CF metrics. Experiments on synthetic data show how contrasting domain knowledge assumptions support or refute audits of CF, while experiments on real-world data (COMPAS and Adult datasets) pinpoint well-known biases with high confidence, even when supplied with minimal domain knowledge constraints.
https://arxiv.org/abs/2601.03203
Academic Papers
svg
90cb3756e1a97edb85232a302dc182cff864fcd3882363755e61cb31f70a0271
2026-01-07T00:00:00-05:00
InfiAgent: An Infinite-Horizon Framework for General-Purpose Autonomous Agents
arXiv:2601.03204v1 Announce Type: new Abstract: LLM agents can reason and use tools, but they often break down on long-horizon tasks due to unbounded context growth and accumulated errors. Common remedies such as context compression or retrieval-augmented prompting introduce trade-offs between information fidelity and reasoning stability. We present InfiAgent, a general-purpose framework that keeps the agent's reasoning context strictly bounded regardless of task duration by externalizing persistent state into a file-centric state abstraction. At each step, the agent reconstructs context from a workspace state snapshot plus a fixed window of recent actions. Experiments on DeepResearch and an 80-paper literature review task show that, without task-specific fine-tuning, InfiAgent with a 20B open-source model is competitive with larger proprietary systems and maintains substantially higher long-horizon coverage than context-centric baselines. These results support explicit state externalization as a practical foundation for stable long-horizon agents. GitHub repo: https://github.com/ChenglinPoly/infiAgent
https://arxiv.org/abs/2601.03204
Academic Papers
svg
35f78b9f8fae6b24ccaa8d16aef143b8e0938381c7e94c64b6231b5106e47350
2026-01-07T00:00:00-05:00
UltraLogic: Enhancing LLM Reasoning through Large-Scale Data Synthesis and Bipolar Float Reward
arXiv:2601.03205v1 Announce Type: new Abstract: While Large Language Models (LLMs) have demonstrated significant potential in natural language processing, complex general-purpose reasoning requiring multi-step logic, planning, and verification remains a critical bottleneck. Although Reinforcement Learning with Verifiable Rewards (RLVR) has succeeded in specific domains, the field lacks large-scale, high-quality, and difficulty-calibrated data for general reasoning. To address this, we propose UltraLogic, a framework that decouples the logical core of a problem from its natural language expression through a Code-based Solving methodology to automate high-quality data production. The framework comprises hundreds of unique task types and an automated calibration pipeline across ten difficulty levels. Furthermore, to mitigate binary reward sparsity and the Non-negative Reward Trap, we introduce the Bipolar Float Reward (BFR) mechanism, utilizing graded penalties to effectively distinguish perfect responses from those with logical flaws. Our experiments demonstrate that task diversity is the primary driver for reasoning enhancement, and that BFR, combined with a difficulty matching strategy, significantly improves training efficiency, guiding models toward global logical optima.
https://arxiv.org/abs/2601.03205
Academic Papers
svg
5fd5bd8e7de67514c1dde95b7cb161a9bca4fc4abf19aabff1f6a052d43b3f82
2026-01-07T00:00:00-05:00
Fine-tuning Small Language Models as Efficient Enterprise Search Relevance Labelers
arXiv:2601.03211v1 Announce Type: new Abstract: In enterprise search, building high-quality datasets at scale remains a central challenge due to the difficulty of acquiring labeled data. To resolve this challenge, we propose an efficient approach to fine-tune small language models (SLMs) for accurate relevance labeling, enabling high-throughput, domain-specific labeling comparable or even better in quality to that of state-of-the-art large language models (LLMs). To overcome the lack of high-quality and accessible datasets in the enterprise domain, our method leverages synthetic data generation. Specifically, we employ an LLM to synthesize realistic enterprise queries from a seed document, apply BM25 to retrieve hard negatives, and use a teacher LLM to assign relevance scores. The resulting dataset is then distilled into an SLM, producing a compact relevance labeler. We evaluate our approach on a high-quality benchmark consisting of 923 enterprise query-document pairs annotated by trained human annotators, and show that the distilled SLM achieves agreement with human judgments on par with or better than the teacher LLM. Furthermore, our fine-tuned labeler substantially improves throughput, achieving a 17-fold increase while also being 19 times more cost-effective. This approach enables scalable and cost-effective relevance labeling for enterprise-scale retrieval applications, supporting rapid offline evaluation and iteration in real-world settings.
https://arxiv.org/abs/2601.03211
Academic Papers
svg
989a5bb87a5e0520902fe9379c1bf4369faefd9a69ae5f3c282d0d6b50dc763c
2026-01-07T00:00:00-05:00
Critic-Guided Reinforcement Unlearning in Text-to-Image Diffusion
arXiv:2601.03213v1 Announce Type: new Abstract: Machine unlearning in text-to-image diffusion models aims to remove targeted concepts while preserving overall utility. Prior diffusion unlearning methods typically rely on supervised weight edits or global penalties; reinforcement-learning (RL) approaches, while flexible, often optimize sparse end-of-trajectory rewards, yielding high-variance updates and weak credit assignment. We present a general RL framework for diffusion unlearning that treats denoising as a sequential decision process and introduces a timestep-aware critic with noisy-step rewards. Concretely, we train a CLIP-based reward predictor on noisy latents and use its per-step signal to compute advantage estimates for policy-gradient updates of the reverse diffusion kernel. Our algorithm is simple to implement, supports off-policy reuse, and plugs into standard text-to-image backbones. Across multiple concepts, the method achieves better or comparable forgetting to strong baselines while maintaining image quality and benign prompt fidelity; ablations show that (i) per-step critics and (ii) noisy-conditioned rewards are key to stability and effectiveness. We release code and evaluation scripts to facilitate reproducibility and future research on RL-based diffusion unlearning.
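The paper's code is released but not quoted in the abstract, so here is a generic sketch of the per-step credit-assignment idea it describes: dense noisy-step rewards plus a timestep-aware critic baseline feed an advantage-weighted policy-gradient loss (shapes and the discounting scheme are assumptions):

```python
import torch

def per_step_policy_gradient(log_probs: torch.Tensor, step_rewards: torch.Tensor,
                             values: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Generic advantage-weighted policy-gradient loss for one denoising
    trajectory. A timestep-aware critic supplies `values` as a baseline at
    every step, and `step_rewards` replaces a single end-of-trajectory reward.
    All tensors have shape (T,); this is not the paper's released code."""
    T = step_rewards.shape[0]
    returns = torch.zeros(T)
    running = torch.tensor(0.0)
    for t in reversed(range(T)):                  # return-to-go at each step
        running = step_rewards[t] + gamma * running
        returns[t] = running
    advantages = returns - values                 # critic baseline cuts variance
    return -(log_probs * advantages.detach()).mean()
```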
https://arxiv.org/abs/2601.03213
Academic Papers
svg
b50a66ffaa087d1d7cbddacee4a3990d2f19b1e52caa769e7fada324b057940c
2026-01-07T00:00:00-05:00
oneTwin: Online Digital Network Twin via Neural Radio Radiance Field
arXiv:2601.03216v1 Announce Type: new Abstract: Digital network twin is a promising technology that replicates real-world networks in real-time and assists with the design, operation, and management of next-generation networks. However, existing approaches (e.g., simulator-based and neural-based) cannot effectively realize the digital network twin in terms of fidelity, synchronicity, and tractability. In this paper, we propose oneTwin, the first online digital twin system for the prediction of physical-layer metrics. We architect the oneTwin system with two primary components: an enhanced simulator and a neural radio radiance field (NRRF). On the one hand, we achieve the enhanced simulator by designing a material tuning algorithm that incrementally optimizes the building materials to minimize the twin-to-real gap. On the other hand, we achieve the NRRF by designing a neural learning algorithm that continually updates its DNNs based on both online and simulated data from the enhanced simulator. We implement the oneTwin system using Sionna RT as the simulator and develop new DNNs as the NRRF, deployed in a public cellular network. Extensive experimental results show that, compared to state-of-the-art solutions, oneTwin achieves real-time updating (0.98s), with 36.39% and 57.50% reductions of the twin-to-real gap under in-distribution and out-of-distribution test datasets, respectively.
https://arxiv.org/abs/2601.03216
Academic Papers
svg
749690d95b3ce2b301826a6c06c3889034d8a88ee576a2b22d7e90961008ee3b
2026-01-07T00:00:00-05:00
MalruleLib: Large-Scale Executable Misconception Reasoning with Step Traces for Modeling Student Thinking in Mathematics
arXiv:2601.03217v1 Announce Type: new Abstract: Student mistakes in mathematics are often systematic: a learner applies a coherent but wrong procedure and repeats it across contexts. We introduce MalruleLib, a learning-science-grounded framework that translates documented misconceptions into executable procedures, drawing on 67 learning-science and mathematics education sources, and generates step-by-step traces of malrule-consistent student work. We formalize a core student-modeling problem as Malrule Reasoning Accuracy (MRA): infer a misconception from one worked mistake and predict the student's next answer under cross-template rephrasing. Across nine language models (4B-120B), accuracy drops from 66% on direct problem solving to 40% on cross-template misconception prediction. MalruleLib encodes 101 malrules over 498 parameterized problem templates and produces paired dual-path traces for both correct reasoning and malrule-consistent student reasoning. Because malrules are executable and templates are parameterizable, MalruleLib can generate over one million instances, enabling scalable supervision and controlled evaluation. Using MalruleLib, we observe cross-template degradations of 10-21%, while providing student step traces improves prediction by 3-15%. We release MalruleLib as infrastructure for educational AI that models student procedures across contexts, enabling diagnosis and feedback that targets the underlying misconception.
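As a concrete illustration of an executable malrule (not taken from MalruleLib itself), the classic "smaller-from-larger" subtraction bug can be written as a function that reproduces a student's systematic mistake on any input:

```python
def malrule_smaller_from_larger(a: int, b: int) -> int:
    """A classic documented subtraction malrule ('smaller-from-larger'): in
    each column the student subtracts the smaller digit from the larger and
    ignores borrowing. Illustrative of the executable-procedure idea; this is
    not MalruleLib's own code, and we assume a, b >= 0 with a having at least
    as many digits as b."""
    da, db = str(a), str(b).rjust(len(str(a)), "0")
    return int("".join(str(abs(int(x) - int(y))) for x, y in zip(da, db)))

assert malrule_smaller_from_larger(52, 38) == 26   # the correct answer is 14
```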
https://arxiv.org/abs/2601.03217
Academic Papers
svg
d265bcb8280257e725fbfa3650cda5df9a4758ffbf3dfadf088d053efa0d7566
2026-01-07T00:00:00-05:00
Enhancing Safety in Automated Ports: A Virtual Reality Study of Pedestrian-Autonomous Vehicle Interactions under Time Pressure, Visual Constraints, and Varying Vehicle Size
arXiv:2601.03218v1 Announce Type: new Abstract: Autonomous driving improves traffic efficiency but presents safety challenges in complex port environments. This study investigates how environmental factors, traffic factors, and pedestrian characteristics influence interaction safety between autonomous vehicles and pedestrians in ports. Using virtual reality (VR) simulations of typical port scenarios, 33 participants completed pedestrian crossing tasks under varying visibility, vehicle sizes, and time pressure conditions. Results indicate that low-visibility conditions, partial occlusions and larger vehicle sizes significantly increase perceived risk, prompting pedestrians to wait longer and accept larger gaps. Specifically, pedestrians tended to accept larger gaps and waited longer when interacting with large autonomous truck platoons, reflecting heightened caution due to their perceived threat. However, local obstructions also reduce post-encroachment time, compressing safety margins. Individual attributes such as age, gender, and driving experience further shape decision-making, while time pressure undermines compensatory behaviors and increases risk. Based on these findings, safety strategies are proposed, including installing wide-angle cameras at multiple viewpoints, enabling real-time vehicle-infrastructure communication, enhancing port lighting and signage, and strengthening pedestrian safety training. This study offers practical recommendations for improving the safety and deployment of vision-based autonomous systems in port settings.
https://arxiv.org/abs/2601.03218
Academic Papers
svg
489d7ad4abbe8c976a656e9798f4ff33617500253529b7a0e37b7eee99be3a8e
2026-01-07T00:00:00-05:00
inRAN: Interpretable Online Bayesian Learning for Network Automation in Open Radio Access Networks
arXiv:2601.03219v1 Announce Type: new Abstract: Emerging AI/ML techniques have been showing great potential in automating network control in open radio access networks (Open RAN). However, existing approaches heavily rely on blackbox policies parameterized by deep neural networks, which inherently lack interpretability, explainability, and transparency, and create substantial obstacles in practical network deployment. In this paper, we propose inRAN, a novel interpretable online Bayesian learning framework for network automation in Open RAN. The core idea is to integrate interpretable surrogate models and safe optimization solvers to continually optimize control actions, while adapting to non-stationary dynamics in real-world networks. We build the inRAN framework from three key components: 1) an interpretable surrogate model via ensembling Kolmogorov-Arnold Networks (KANs); 2) safe optimization solvers via integrating genetic search and a trust-region descent method; 3) an online dynamics tracker via continual model learning and adaptive threshold offset. We implement inRAN in an end-to-end O-RAN-compliant network testbed, and conduct extensive over-the-air experiments with the focused use case of network slicing. The results show that inRAN substantially outperforms state-of-the-art approaches, guaranteeing the chance-based constraint with a 92.67% assurance ratio at comparable resource usage throughout online network control, under unforeseeable, time-evolving network dynamics.
https://arxiv.org/abs/2601.03219
Academic Papers
svg
cbf6c5694f136213a494c18fe2896caefe62049596920b2484d3d4ae30dc0966
2026-01-07T00:00:00-05:00
From Entropy to Epiplexity: Rethinking Information for Computationally Bounded Intelligence
arXiv:2601.03220v1 Announce Type: new Abstract: Can we learn more from data than existed in the generating process itself? Can new and useful information be constructed from merely applying deterministic transformations to existing data? Can the learnable content in data be evaluated without considering a downstream task? On these questions, Shannon information and Kolmogorov complexity come up nearly empty-handed, in part because they assume observers with unlimited computational capacity and fail to target the useful information content. In this work, we identify and exemplify three seeming paradoxes in information theory: (1) information cannot be increased by deterministic transformations; (2) information is independent of the order of data; (3) likelihood modeling is merely distribution matching. To shed light on the tension between these results and modern practice, and to quantify the value of data, we introduce epiplexity, a formalization of information capturing what computationally bounded observers can learn from data. Epiplexity captures the structural content in data while excluding time-bounded entropy, the random unpredictable content exemplified by pseudorandom number generators and chaotic dynamical systems. With these concepts, we demonstrate how information can be created with computation, how it depends on the ordering of the data, and how likelihood modeling can produce more complex programs than present in the data generating process itself. We also present practical procedures to estimate epiplexity which we show capture differences across data sources, track with downstream performance, and highlight dataset interventions that improve out-of-distribution generalization. In contrast to principles of model selection, epiplexity provides a theoretical foundation for data selection, guiding how to select, generate, or transform data for learning systems.
https://arxiv.org/abs/2601.03220
Academic Papers
svg
abeb7d8e14493a2cabccb93610b35aa66afbc6726609af23f27f43b915c08d9f
2026-01-07T00:00:00-05:00
The Fake Friend Dilemma: Trust and the Political Economy of Conversational AI
arXiv:2601.03222v1 Announce Type: new Abstract: As conversational AI systems become increasingly integrated into everyday life, they raise pressing concerns about user autonomy, trust, and the commercial interests that influence their behavior. To address these concerns, this paper develops the Fake Friend Dilemma (FFD), a sociotechnical condition in which users place trust in AI agents that appear supportive while pursuing goals that are misaligned with the user's own. The FFD provides a critical framework for examining how anthropomorphic AI systems facilitate subtle forms of manipulation and exploitation. Drawing on literature in trust, AI alignment, and surveillance capitalism, we construct a typology of harms, including covert advertising, political propaganda, behavioral nudging, and surveillance. We then assess possible mitigation strategies, including both structural and technical interventions. By focusing on trust as a vector of asymmetrical power, the FFD offers a lens for understanding how AI systems may undermine user autonomy while maintaining the appearance of helpfulness.
https://arxiv.org/abs/2601.03222
Academic Papers
svg
c6cc140461e6cbe4a448751124928b19678f24668cffa1f208f19f5d3b077689
2026-01-07T00:00:00-05:00
Are eHMIs always helpful? Investigating how eHMIs interfere with pedestrian behavior on multi-lane streets: An eye-tracking virtual reality experiment
arXiv:2601.03223v1 Announce Type: new Abstract: Appropriate communication is crucial for efficient and safe interactions between pedestrians and autonomous vehicles (AVs). External human-machine interfaces (eHMIs) on AVs, which can be categorized as allocentric or egocentric, are considered a promising solution. While the effectiveness of eHMIs has been extensively studied, in complex environments, such as unsignalized multi-lane streets, their potential to interfere with pedestrian crossing behavior remains underexplored. Hence, a virtual reality-based experiment was conducted to examine how different types of eHMIs displayed on AVs affect the crossing behavior of pedestrians in multi-lane streets environments, with a focus on the gaze patterns of pedestrians during crossing. The results revealed that the presence of eHMIs significantly influenced the cognitive load on pedestrians and increased the possibility of distraction, even misleading pedestrians in cases involving multiple AVs on multi-lane streets. Notably, allocentric eHMIs induced higher cognitive loads and greater distraction in pedestrians than egocentric eHMIs. This was primarily evidenced by longer gaze time and higher proportions of attention for the eHMI on the interacting vehicle, as well as a broader distribution of gaze toward vehicles in the non-interacting lane. However, misleading behavior was mainly triggered by eHMI signals from yielding vehicles in the non-interacting lane. Under such asymmetric signal configurations, egocentric eHMIs resulted in a higher misjudgment rate than allocentric eHMIs. These findings highlight the importance of enhancing eHMI designs to balance the clarity and consistency of the displayed information across different perspectives, especially in complex multi-lane traffic scenarios. This study provides valuable insights regarding the application and standardization of future eHMI systems for AVs.
https://arxiv.org/abs/2601.03223
Academic Papers
svg
2b3977e36f57fef6c42b7b4f956e1c0efb4b6401570be4a0930748c5ce1815f9
2026-01-07T00:00:00-05:00
Wait or cross? Understanding the influence of behavioral tendency, trust, and risk perception on pedestrian gap-acceptance of automated truck platoons
arXiv:2601.03225v1 Announce Type: new Abstract: Although automated trucks have the potential to improve freight efficiency, reduce costs, and address driver shortages, organizing two or more trucks in a convoy has raised considerable concerns for pedestrian safety. This study conducted a controlled experiment to examine the influence of behavioral tendency, trust, and risk perception on pedestrian intention to cross in front of an automated truck platoon. A total of 603 subjects participated in the virtual reality video-based questionnaire survey. By fusing the merits of structural equation modeling and artificial neural networks, a two-stage, hybrid model was developed to examine complex relationships between latent variables and gap-acceptance behaviors. Our results indicated that subjects watched an average of five vehicle gaps before starting crossing and the average time gap accepted was about 5.35 seconds. Risk perception not only played the most dominant role in shaping pedestrian crossing decisions, but also served as the backbone, mediating the effects of behavioral tendency and trust on gap-acceptance. Participants who frequently violated traffic rules were more likely to accept a smaller time gap, while those who showed positive behaviors to other road users tended to wait for a larger time gap. Participants who often committed errors, showed aggressive behaviors, and held greater trust in the safety of automated trucks generally reported a lower level of risk for road-crossing in front of automated truck platoons. Building on these findings, a range of tailored countermeasures were proposed to ensure safer and smoother interactions between pedestrians and automated truck platoons.
https://arxiv.org/abs/2601.03225
Academic Papers
svg
40eeb70a2b7a6340d36ce83e1e2df39d3e9ca56443b943233e10b7f6a4a7e9e7
2026-01-07T00:00:00-05:00
The Sonar Moment: Benchmarking Audio-Language Models in Audio Geo-Localization
arXiv:2601.03227v1 Announce Type: new Abstract: Geo-localization aims to infer the geographic origin of a given signal. In computer vision, geo-localization has served as a demanding benchmark for compositional reasoning and is relevant to public safety. In contrast, progress on audio geo-localization has been constrained by the lack of high-quality audio-location pairs. To address this gap, we introduce AGL1K, the first audio geo-localization benchmark for audio language models (ALMs), spanning 72 countries and territories. To extract reliably localizable samples from a crowd-sourced platform, we propose the Audio Localizability metric that quantifies the informativeness of each recording, yielding 1,444 curated audio clips. Evaluations of 16 ALMs show that audio geo-localization capability has begun to emerge in these models. We find that closed-source models substantially outperform open-source models, and that linguistic clues often dominate as a scaffold for prediction. We further analyze ALMs' reasoning traces, regional bias, error causes, and the interpretability of the localizability metric. Overall, AGL1K establishes a benchmark for audio geo-localization and may advance ALMs with better geospatial reasoning capability.
https://arxiv.org/abs/2601.03227
Academic Papers
svg
7632f8f0c6225c9da1b2298b964be7be2fbd43562a5a92fe2c729b537c13d444
2026-01-07T00:00:00-05:00
SpANNS: Optimizing Approximate Nearest Neighbor Search for Sparse Vectors Using Near Memory Processing
arXiv:2601.03229v1 Announce Type: new Abstract: Approximate Nearest Neighbor Search (ANNS) is a fundamental operation in vector databases, enabling efficient similarity search in high-dimensional spaces. While dense ANNS has been optimized using specialized hardware accelerators, sparse ANNS remains limited by CPU-based implementations, hindering scalability. This limitation is increasingly critical as hybrid retrieval systems, combining sparse and dense embeddings, become standard in Information Retrieval (IR) pipelines. We propose SpANNS, a near-memory processing architecture for sparse ANNS. SpANNS combines a hybrid inverted index with efficient query management and runtime optimizations. The architecture is built on a CXL Type-2 near-memory platform, where a specialized controller manages query parsing and cluster filtering, while compute-enabled DIMMs perform index traversal and distance computations close to the data. It achieves 15.2x to 21.6x faster execution over the state-of-the-art CPU baselines, offering scalable and efficient solutions for sparse vector search.
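A toy sketch of the sparse-ANNS scoring logic that SpANNS moves near memory: an inverted index keyed by vector dimension, so a query touches only the postings of its nonzero dimensions (the data layout is simplified relative to the paper's hybrid index):

```python
from collections import defaultdict

def build_inverted_index(docs: list[dict[int, float]]):
    """Index sparse vectors (dim -> weight dicts) by dimension, so scoring
    only traverses postings for the query's nonzero dimensions."""
    index = defaultdict(list)
    for doc_id, vec in enumerate(docs):
        for dim, w in vec.items():
            index[dim].append((doc_id, w))
    return index

def search(index, query: dict[int, float], k: int = 3):
    scores = defaultdict(float)
    for dim, qw in query.items():              # only the query's nonzero dims
        for doc_id, dw in index.get(dim, []):
            scores[doc_id] += qw * dw          # accumulate sparse dot product
    return sorted(scores.items(), key=lambda x: -x[1])[:k]

docs = [{0: 1.2, 7: 0.4}, {7: 2.0, 9: 0.5}, {0: 0.3}]
index = build_inverted_index(docs)
print(search(index, {7: 1.0, 9: 1.0}))         # doc 1 scores highest (2.5)
```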
https://arxiv.org/abs/2601.03229
Academic Papers
svg
f5eea5b3b6d3b7732492a716c084b1a3bb5106356af7e2712af2200e6517bd86
2026-01-07T00:00:00-05:00
Multi-RADS Synthetic Radiology Report Dataset and Head-to-Head Benchmarking of 41 Open-Weight and Proprietary Language Models
arXiv:2601.03232v1 Announce Type: new Abstract: Background: Reporting and Data Systems (RADS) standardize radiology risk communication but automated RADS assignment from narrative reports is challenging because of guideline complexity, output-format constraints, and limited benchmarking across RADS frameworks and model sizes. Purpose: To create RXL-RADSet, a radiologist-verified synthetic multi-RADS benchmark, and compare validity and accuracy of open-weight small language models (SLMs) with a proprietary model for RADS assignment. Materials and Methods: RXL-RADSet contains 1,600 synthetic radiology reports across 10 RADS (BI-RADS, CAD-RADS, GB-RADS, LI-RADS, Lung-RADS, NI-RADS, O-RADS, PI-RADS, TI-RADS, VI-RADS) and multiple modalities. Reports were generated by LLMs using scenario plans and simulated radiologist styles and underwent two-stage radiologist verification. We evaluated 41 quantized SLMs (12 families, 0.135-32B parameters) and GPT-5.2 under a fixed guided prompt. Primary endpoints were validity and accuracy; a secondary analysis compared guided versus zero-shot prompting. Results: Under guided prompting GPT-5.2 achieved 99.8% validity and 81.1% accuracy (1,600 predictions). Pooled SLMs (65,600 predictions) achieved 96.8% validity and 61.1% accuracy; top SLMs in the 20-32B range reached ~99% validity and mid-to-high 70% accuracy. Performance scaled with model size (with an inflection around 10B) and declined with RADS complexity primarily due to classification difficulty rather than invalid outputs. Guided prompting improved validity (99.2% vs 96.7%) and accuracy (78.5% vs 69.6%) compared with zero-shot. Conclusion: RXL-RADSet provides a radiologist-verified multi-RADS benchmark; large SLMs (20-32B) can approach proprietary-model performance under guided prompting, but gaps remain for higher-complexity schemes.
https://arxiv.org/abs/2601.03232
Academic Papers
svg
b3226fa1bfffe9d2492ac142aa36d76b0da89cd3a487ff13974ed10594fb5e9a
2026-01-07T00:00:00-05:00
LTX-2: Efficient Joint Audio-Visual Foundation Model
arXiv:2601.03233v1 Announce Type: new Abstract: Recent text-to-video diffusion models can generate compelling video sequences, yet they remain silent -- missing the semantic, emotional, and atmospheric cues that audio provides. We introduce LTX-2, an open-source foundational model capable of generating high-quality, temporally synchronized audiovisual content in a unified manner. LTX-2 consists of an asymmetric dual-stream transformer with a 14B-parameter video stream and a 5B-parameter audio stream, coupled through bidirectional audio-video cross-attention layers with temporal positional embeddings and cross-modality AdaLN for shared timestep conditioning. This architecture enables efficient training and inference of a unified audiovisual model while allocating more capacity for video generation than audio generation. We employ a multilingual text encoder for broader prompt understanding and introduce a modality-aware classifier-free guidance (modality-CFG) mechanism for improved audiovisual alignment and controllability. Beyond generating speech, LTX-2 produces rich, coherent audio tracks that follow the characters, environment, style, and emotion of each scene -- complete with natural background and foley elements. In our evaluations, the model achieves state-of-the-art audiovisual quality and prompt adherence among open-source systems, while delivering results comparable to proprietary models at a fraction of their computational cost and inference time. All model weights and code are publicly released.
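The abstract names modality-aware classifier-free guidance without giving its form; a plausible minimal reading is the standard CFG rule applied with a separate scale per stream, sketched below (the per-modality split and default scales are the only assumptions):

```python
def modality_cfg(v_uncond, v_cond, a_uncond, a_cond,
                 s_video: float = 6.0, s_audio: float = 4.0):
    """Standard classifier-free guidance applied independently to the video
    and audio predictions, each with its own guidance scale. A hedged sketch
    of what 'modality-CFG' could look like, not LTX-2's actual formula."""
    video = v_uncond + s_video * (v_cond - v_uncond)   # guide video stream
    audio = a_uncond + s_audio * (a_cond - a_uncond)   # guide audio stream
    return video, audio
```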
https://arxiv.org/abs/2601.03233
Academic Papers
svg
bae4cacc5f2ed85379d4cd86fc1818f10dfe6dd534cf26ecece19883ed11f700
2026-01-07T00:00:00-05:00
MAGMA: A Multi-Graph based Agentic Memory Architecture for AI Agents
arXiv:2601.03236v1 Announce Type: new Abstract: Memory-Augmented Generation (MAG) extends Large Language Models with external memory to support long-context reasoning, but existing approaches largely rely on semantic similarity over monolithic memory stores, entangling temporal, causal, and entity information. This design limits interpretability and alignment between query intent and retrieved evidence, leading to suboptimal reasoning accuracy. In this paper, we propose MAGMA, a multi-graph agentic memory architecture that represents each memory item across orthogonal semantic, temporal, causal, and entity graphs. MAGMA formulates retrieval as policy-guided traversal over these relational views, enabling query-adaptive selection and structured context construction. By decoupling memory representation from retrieval logic, MAGMA provides transparent reasoning paths and fine-grained control over retrieval. Experiments on LoCoMo and LongMemEval demonstrate that MAGMA consistently outperforms state-of-the-art agentic memory systems in long-horizon reasoning tasks.
https://arxiv.org/abs/2601.03236
Academic Papers
svg
d69435060e22d29b179b6ea8450d72b5038e508562129b5984a454422b9ff599
2026-01-07T00:00:00-05:00
PET-TURTLE: Deep Unsupervised Support Vector Machines for Imbalanced Data Clusters
arXiv:2601.03237v1 Announce Type: new Abstract: Foundation vision, audio, and language models enable zero-shot performance on downstream tasks via their latent representations. Recently, unsupervised learning of data group structure with deep learning methods has gained popularity. TURTLE, a state-of-the-art deep clustering algorithm, uncovers data labelings without supervision by alternating label and hyperplane updates, maximizing the hyperplane margin in a fashion similar to support vector machines (SVMs). However, TURTLE assumes clusters are balanced; when data is imbalanced, it yields non-ideal hyperplanes that cause higher clustering error. We propose PET-TURTLE, which generalizes the cost function to handle imbalanced data distributions via a power-law prior. Additionally, by introducing sparse logits in the labeling process, PET-TURTLE optimizes a simpler search space that in turn improves accuracy for balanced datasets. Experiments on synthetic and real data show that PET-TURTLE improves accuracy for imbalanced sources, prevents over-prediction of minority clusters, and enhances overall clustering.
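A hedged sketch of how a power-law prior over cluster sizes might enter the cost function: score an assignment's empirical size distribution against a Zipf-like target (the paper's exact objective may differ):

```python
import numpy as np

def power_law_prior_penalty(cluster_sizes: np.ndarray, alpha: float = 1.0) -> float:
    """Penalty term comparing the empirical cluster-size distribution to a
    power-law (Zipf-like) target via KL divergence. Illustrates the idea of
    replacing a balanced-clusters assumption with a power-law prior; this is
    our own formulation, not necessarily PET-TURTLE's cost function."""
    sizes = np.sort(cluster_sizes)[::-1].astype(float)
    p = sizes / sizes.sum()                               # empirical distribution
    ranks = np.arange(1, len(sizes) + 1, dtype=float)
    q = ranks ** (-alpha)
    q /= q.sum()                                          # power-law target
    return float(np.sum(p * np.log(np.maximum(p, 1e-12) / q)))
```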
https://arxiv.org/abs/2601.03237
Academic Papers
svg
ad947b2936fb547974ef8dbac7f0b6f8bc6a0d234c01cef3770c8f838ec2db65
2026-01-07T00:00:00-05:00
On the Capacity Region of Individual Key Rates in Vector Linear Secure Aggregation
arXiv:2601.03241v1 Announce Type: new Abstract: We provide new insights into an open problem recently posed by Yuan-Sun [ISIT 2025], concerning the minimum individual key rate required in the vector linear secure aggregation problem. Consider a distributed system with $K$ users, where each user $k\in [K]$ holds a data stream $W_k$ and an individual key $Z_k$. A server aims to compute a linear function $\mathbf{F}[W_1;\ldots;W_K]$ without learning any information about another linear function $\mathbf{G}[W_1;\ldots;W_K]$, where $[W_1;\ldots;W_K]$ denotes the row stack of $W_1,\ldots,W_K$. The open problem is to determine the minimum required length of $Z_k$, denoted as $R_k$, $k\in [K]$. In this paper, we characterize a new achievable region for the rate tuple $(R_1,\ldots,R_K)$. The region is polyhedral, with vertices characterized by a binary rate assignment $(R_1,\ldots,R_K) = (\mathbf{1}(1 \in \mathcal{I}),\ldots,\mathbf{1}(K\in \mathcal{I}))$, where $\mathcal{I}\subseteq [K]$ satisfies the \textit{rank-increment condition}: $\mathrm{rank}\left(\bigl[\mathbf{F}_{\mathcal{I}};\mathbf{G}_{\mathcal{I}}\bigr]\right) =\mathrm{rank}\bigl(\mathbf{F}_{\mathcal{I}}\bigr)+N$. Here, $\mathbf{F}_\mathcal{I}$ and $\mathbf{G}_\mathcal{I}$ are the submatrices formed by the columns indexed by $\mathcal{I}$. Our results uncover the novel fact that it is not necessary for every user to hold a key, thereby strictly enlarging the best-known achievable region in the literature. Furthermore, we provide a converse analysis to demonstrate its optimality when minimizing the number of users that hold keys.
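The rank-increment condition is easy to check numerically; the sketch below does so over the reals with numpy, whereas the paper works over a finite field, so this is illustrative only:

```python
import numpy as np

def rank_increment_holds(F: np.ndarray, G: np.ndarray, I: list[int]) -> bool:
    """Check the abstract's rank-increment condition for a column index set I:
    rank([F_I; G_I]) == rank(F_I) + N, where N is the number of rows of G.
    numpy's real-valued rank stands in for finite-field rank here, purely
    as an illustration of the condition's shape."""
    FI, GI = F[:, I], G[:, I]
    stacked = np.vstack([FI, GI])
    return np.linalg.matrix_rank(stacked) == np.linalg.matrix_rank(FI) + G.shape[0]
```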
https://arxiv.org/abs/2601.03241
Academic Papers
svg
826d5dec5795e1a9b0de3baf76769beafd6f2932744af9310d6d322fabea3005
2026-01-07T00:00:00-05:00
SLIM: Stealthy Low-Coverage Black-Box Watermarking via Latent-Space Confusion Zones
arXiv:2601.03242v1 Announce Type: new Abstract: Training data is a critical and often proprietary asset in Large Language Model (LLM) development, motivating the use of data watermarking to embed model-transferable signals for usage verification. We identify low coverage as a vital yet largely overlooked requirement for practicality, as individual data owners typically contribute only a minute fraction of massive training corpora. Prior methods fail to maintain stealthiness, verification feasibility, or robustness when only one or a few sequences can be modified. To address these limitations, we introduce SLIM, a framework enabling per-user data provenance verification under strict black-box access. SLIM leverages intrinsic LLM properties to induce a Latent-Space Confusion Zone by training the model to map semantically similar prefixes to divergent continuations. This manifests as localized generation instability, which can be reliably detected via hypothesis testing. Experiments demonstrate that SLIM achieves ultra-low coverage capability, strong black-box verification performance, and great scalability while preserving both stealthiness and model utility, offering a robust solution for protecting training data in modern LLM pipelines.
https://arxiv.org/abs/2601.03242
Academic Papers
svg
59aefcbedc32bc8129909c5a6e5b7cc8edce1390fcd71bb123de1924a79d7132
2026-01-07T00:00:00-05:00
$\mathsf{QAC}^0$ Contains $\mathsf{TC}^0$ (with Many Copies of the Input)
arXiv:2601.03243v1 Announce Type: new Abstract: $\mathsf{QAC}^0$ is the class of constant-depth polynomial-size quantum circuits constructed from arbitrary single-qubit gates and generalized Toffoli gates. It is arguably the smallest natural class of constant-depth quantum computation which has not been shown useful for computing any non-trivial Boolean function. Despite this, many attempts to port classical $\mathsf{AC}^0$ lower bounds to $\mathsf{QAC}^0$ have failed. We give one possible explanation of this: $\mathsf{QAC}^0$ circuits are significantly more powerful than their classical counterparts. We show the unconditional separation $\mathsf{QAC}^0\not\subset\mathsf{AC}^0[p]$ for decision problems, which also resolves for the first time whether $\mathsf{AC}^0$ could be more powerful than $\mathsf{QAC}^0$. Moreover, we prove that $\mathsf{QAC}^0$ circuits can compute a wide range of Boolean functions if given multiple copies of the input: $\mathsf{TC}^0 \subseteq \mathsf{QAC}^0 \circ \mathsf{NC}^0$. Along the way, we introduce an amplitude amplification technique that makes several approximate constant-depth constructions exact.
https://arxiv.org/abs/2601.03243
Academic Papers
svg
377325808dd04b98f43700e92ec9765faa11d9cc9487d7593f9c671de66482a4
2026-01-07T00:00:00-05:00
STReasoner: Empowering LLMs for Spatio-Temporal Reasoning in Time Series via Spatial-Aware Reinforcement Learning
arXiv:2601.03248v1 Announce Type: new Abstract: Spatio-temporal reasoning in time series involves the explicit synthesis of temporal dynamics, spatial dependencies, and textual context. This capability is vital for high-stakes decision-making in systems such as traffic networks, power grids, and disease propagation. However, the field remains underdeveloped because most existing works prioritize predictive accuracy over reasoning. To address the gap, we introduce ST-Bench, a benchmark consisting of four core tasks, including etiological reasoning, entity identification, correlation reasoning, and in-context forecasting, developed via a network SDE-based multi-agent data synthesis pipeline. We then propose STReasoner, which empowers LLMs to integrate time series, graph structure, and text for explicit reasoning. To promote spatially grounded logic, we introduce S-GRPO, a reinforcement learning algorithm that rewards performance gains specifically attributable to spatial information. Experiments show that STReasoner achieves average accuracy gains between 17% and 135% at only 0.004x the cost of proprietary models and generalizes robustly to real-world data.
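S-GRPO's reward shaping is not specified in the abstract; one hedged reading is an ablation-style bonus for the performance gain attributable to the spatial input, sketched below with an assumed mixing weight:

```python
def s_grpo_reward(acc_with_spatial: float, acc_without_spatial: float,
                  lam: float = 0.5) -> float:
    """Hedged sketch of a spatially grounded reward in the spirit of S-GRPO:
    base task accuracy plus a bonus for the gain specifically attributable to
    the spatial (graph) input, estimated by ablating it. Both `lam` and the
    ablation-based attribution are assumptions, not the paper's definition."""
    spatial_gain = max(0.0, acc_with_spatial - acc_without_spatial)  # credit positive gains only
    return acc_with_spatial + lam * spatial_gain
```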
https://arxiv.org/abs/2601.03248
Academic Papers
svg
2cad9a4296320eb0e3fe08a5e0b5e50caee35c3c8c7f91344d69053571898a5e
2026-01-07T00:00:00-05:00
Proceedings 16th International Workshop on Graph Computation Models
arXiv:2601.03249v1 Announce Type: new Abstract: This volume contains the post-proceedings of the Sixteenth International Workshop on Graph Computation Models (GCM 2025). The workshop took place in Koblenz, Germany, on June 10 as part of STAF (Software Technologies: Applications and Foundations). Graphs are common mathematical structures that are visual and intuitive. They constitute a natural and seamless way for system modeling in science, engineering, and beyond, including computer science, biology, and business process modeling. Graph computation models constitute a class of very high-level models where graphs are first-class citizens. The aim of the International GCM Workshop series is to bring together researchers interested in all aspects of computation models based on graphs and graph transformation. It promotes the cross-fertilizing exchange of ideas and experiences among senior and young researchers from the different communities interested in the foundations, applications, and implementations of graph computation models and related areas.
https://arxiv.org/abs/2601.03249
Academic Papers
svg
b20d7846ae3c14471a445f8b1e7cc476767f5dfc0cfc6394c7bdeb53f46468ee
2026-01-07T00:00:00-05:00
A Versatile Multimodal Agent for Multimedia Content Generation
arXiv:2601.03250v1 Announce Type: new Abstract: With the advancement of AIGC (AI-generated content) technologies, an increasing number of generative models are revolutionizing fields such as video editing, music generation, and even film production. However, due to the limitations of current AIGC models, most can only serve as individual components within specific application scenarios and cannot complete tasks end-to-end in real-world applications. In practice, editing experts work with a wide variety of image and video inputs and produce multimodal outputs -- a video typically includes audio, text, and other elements -- a level of cross-modal integration that current models cannot achieve effectively. The rise of agent-based systems, however, has made it possible to use AI tools to tackle complex content generation tasks. To handle such scenarios, in this paper we propose MultiMedia-Agent, an agent designed to automate complex content creation. Our agent system includes a data generation pipeline, a tool library for content creation, and a set of metrics for evaluating preference alignment. Notably, we introduce skill acquisition theory to model training data curation and agent training. We design a two-stage correlation strategy for plan optimization, comprising self-correlation and model preference correlation. Additionally, we use the generated plans to train MultiMedia-Agent via a three-stage approach consisting of base-plan fine-tuning, success-plan fine-tuning, and preference optimization. Comparison results demonstrate that our approach is effective and that MultiMedia-Agent generates better multimedia content than existing models.
https://arxiv.org/abs/2601.03250
Academic Papers
svg
fcb9e8a8e0a041262005bfb7974892005c30520a3e2d16df2c0345d4814b3926
2026-01-07T00:00:00-05:00
NavAI: A Generalizable LLM Framework for Navigation Tasks in Virtual Reality Environments
arXiv:2601.03251v1 Announce Type: new Abstract: Navigation is one of the fundamental tasks for automated exploration in Virtual Reality (VR). Existing technologies primarily focus on path optimization in 360-degree image datasets and 3D simulators, which cannot be directly applied to immersive VR environments. To address this gap, we present NavAI, a generalizable large language model (LLM)-based navigation framework that supports both basic actions and complex goal-directed tasks across diverse VR applications. We evaluate NavAI in three distinct VR environments through goal-oriented and exploratory tasks. Results show that it achieves high accuracy, with an 89% success rate in goal-oriented tasks. Our analysis also highlights current limitations of relying entirely on LLMs, particularly in scenarios that require dynamic goal assessment. Finally, we discuss the limitations observed during the experiments and offer insights for future research directions.
https://arxiv.org/abs/2601.03251
Academic Papers
svg
2ba8d2eac40ad3973f7f75a51177d2dff4f12c74ab854999fd20603a17ae82b8
2026-01-07T00:00:00-05:00
InfiniDepth: Arbitrary-Resolution and Fine-Grained Depth Estimation with Neural Implicit Fields
arXiv:2601.03252v1 Announce Type: new Abstract: Existing depth estimation methods are fundamentally limited to predicting depth on discrete image grids. Such representations restrict their scalability to arbitrary output resolutions and hinder the geometric detail recovery. This paper introduces InfiniDepth, which represents depth as neural implicit fields. Through a simple yet effective local implicit decoder, we can query depth at continuous 2D coordinates, enabling arbitrary-resolution and fine-grained depth estimation. To better assess our method's capabilities, we curate a high-quality 4K synthetic benchmark from five different games, spanning diverse scenes with rich geometric and appearance details. Extensive experiments demonstrate that InfiniDepth achieves state-of-the-art performance on both synthetic and real-world benchmarks across relative and metric depth estimation tasks, particularly excelling in fine-detail regions. It also benefits the task of novel view synthesis under large viewpoint shifts, producing high-quality results with fewer holes and artifacts.
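A minimal sketch of a local implicit depth decoder consistent with the abstract's description: bilinearly sample a feature at a continuous coordinate and decode it with an MLP (layer sizes and the coordinate encoding are assumptions, not InfiniDepth's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalImplicitDepth(nn.Module):
    """Query depth at continuous 2D coordinates: interpolate a local feature
    from a feature map, concatenate the coordinate, and decode to a depth
    value. A hedged sketch of the 'local implicit decoder' idea."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 2, 128),
                                 nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); coords: (B, N, 2) with values in [-1, 1].
        grid = coords.unsqueeze(1)                                 # (B, 1, N, 2)
        sampled = F.grid_sample(feats, grid, align_corners=True)   # (B, C, 1, N)
        sampled = sampled.squeeze(2).permute(0, 2, 1)              # (B, N, C)
        return self.mlp(torch.cat([sampled, coords], dim=-1))      # (B, N, 1)
```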
https://arxiv.org/abs/2601.03252
Academic Papers
svg
ec809140d9d4dc1f2b9fabd3634db8cf42a5b9280a5a18b6bf3832212944f3dc
2026-01-07T00:00:00-05:00
Automated Semantic Rules Detection (ASRD) for Emergent Communication Interpretation
arXiv:2601.03254v1 Announce Type: new Abstract: The field of emergent communication within multi-agent systems examines how autonomous agents can independently develop communication strategies, without explicit programming, and adapt them to varied environments. However, few studies have focused on the interpretability of emergent languages. This paper proposes an Automated Semantic Rules Detection (ASRD) algorithm, which extracts relevant patterns from messages exchanged by agents trained with two different datasets on the Lewis Game, a setting often studied in the context of emergent communication. ASRD aids the interpretation of emergent communication by relating the extracted patterns to specific attributes of the input data, thereby considerably simplifying subsequent analysis.
https://arxiv.org/abs/2601.03254
Academic Papers
svg
97c5e9115ef3dcf78e1e7b300ebf9a7bc2f78b6f2a99088cdacda653cea13624
2026-01-07T00:00:00-05:00
Muses: Designing, Composing, Generating Nonexistent Fantasy 3D Creatures without Training
arXiv:2601.03256v1 Announce Type: new Abstract: We present Muses, the first training-free method for fantastic 3D creature generation in a feed-forward paradigm. Previous methods, which rely on part-aware optimization, manual assembly, or 2D image generation, often produce unrealistic or incoherent 3D assets due to the challenges of intricate part-level manipulation and limited out-of-domain generation. In contrast, Muses leverages the 3D skeleton, a fundamental representation of biological forms, to explicitly and rationally compose diverse elements. This skeletal foundation formalizes 3D content creation as a structure-aware pipeline of design, composition, and generation. Muses begins by constructing a creatively composed 3D skeleton with coherent layout and scale through graph-constrained reasoning. This skeleton then guides a voxel-based assembly process within a structured latent space, integrating regions from different objects. Finally, image-guided appearance modeling under skeletal conditions is applied to generate a style-consistent and harmonious texture for the assembled shape. Extensive experiments establish Muses' state-of-the-art performance in terms of visual fidelity and alignment with textual descriptions, and potential on flexible 3D object editing. Project page: https://luhexiao.github.io/Muses.github.io/.
https://arxiv.org/abs/2601.03256
Academic Papers
svg
bbd6c5f508c6cbf0c0725a576d6448c38d7a36b5372b41f426aaf7c66e7e02b2
2026-01-07T00:00:00-05:00
TWIST: Training-free and Label-free Short Text Clustering through Iterative Vector Updating with LLMs
arXiv:2510.06747v1 Announce Type: cross Abstract: In this paper, we propose a training-free and label-free method for short text clustering that can be used on top of any existing embedder. In the context of customer-facing chatbots, companies are dealing with large amounts of user utterances that need to be clustered according to their intent. In these commercial settings, no labeled data is typically available, and the number of clusters is not known. Our method is based on iterative vector updating: it constructs sparse vectors based on representative texts, and then iteratively refines them through LLM guidance. Our method achieves comparable or superior results to state-of-the-art methods that use contrastive learning, but without assuming prior knowledge of clusters or labels. Experiments on diverse datasets and smaller LLMs show that our method is model agnostic and can be applied to any embedder, with relatively small LLMs, and different clustering methods. We also show that our method scales to large datasets, reducing the computational cost of the LLM. These low-resource, adaptable settings and the scalability of our method make it more aligned with real-world scenarios than existing clustering methods.
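A hedged sketch of the iterative vector-updating loop, with the LLM-guided choice of representatives stubbed out as a callback; embeddings are assumed L2-normalized so the dot product acts as cosine similarity:

```python
import numpy as np

def iterative_vector_update(embeddings: np.ndarray, n_clusters: int,
                            pick_representative, n_iter: int = 5) -> np.ndarray:
    """Sketch of the loop the abstract describes: cluster vectors are built
    from representative texts, points are assigned by similarity, and the
    representatives are refined each round. `pick_representative(members)`
    returns one index from `members` and stands in for the LLM-guided step;
    this is an illustration, not the paper's implementation."""
    rng = np.random.default_rng(0)
    centers = embeddings[rng.choice(len(embeddings), n_clusters, replace=False)]
    labels = np.zeros(len(embeddings), dtype=int)
    for _ in range(n_iter):
        sims = embeddings @ centers.T               # cosine similarity (normalized inputs)
        labels = sims.argmax(axis=1)
        for c in range(n_clusters):
            members = np.flatnonzero(labels == c)
            if len(members):
                centers[c] = embeddings[pick_representative(members)]
    return labels
```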
https://arxiv.org/abs/2510.06747
Academic Papers
svg
00f30f172693d9f13938ba8a7d24a8e468c9c9822e8274f03f42204f414100fe
2026-01-07T00:00:00-05:00
Effect of Electric Charge on Biotherapeutic Transport, Binding and Absorption: A Computational Study
arXiv:2601.00505v1 Announce Type: cross Abstract: This study explores the effects of electric charge on the dynamics of drug transport and absorption in subcutaneous injections of monoclonal antibodies (mAbs). We develop a novel mathematical and computational model, based on the Nernst-Planck equations and porous media flow theory, to investigate the complex interactions between mAbs and charged species in subcutaneous tissue. The model enables us to study short-term transport dynamics and long-term binding and absorption for two mAbs with different electric properties. We examine the influence of buffer pH, body mass index, injection depth, and formulation concentration on drug distribution and compare our numerical results with experimental data from the literature.
https://arxiv.org/abs/2601.00505
Academic Papers
svg
398b24aab6fbfa05313eaba31b0ae98b35bd429639fd9d1ef2a0d6413968089d
2026-01-07T00:00:00-05:00
On (Newcomb-)Benford's law: a tale of two papers and of their disproportionate citations. How citation counts can become biased
arXiv:2601.02395v1 Announce Type: cross Abstract: The first-digit (FD) phenomenon, i.e., that the significant digits of numbers in large datasets are often distributed according to a logarithmically decreasing function, was first reported by S. Newcomb and, many decades later, independently by F. Benford. After a century of neglect, the last three decades have seen huge growth in the number of relevant publications. However, notwithstanding this rising popularity, the two independent proponents of the phenomenon are not equally acknowledged, one indication of which is the disproportionate numbers of citations accumulated by Newcomb (1881) and Benford (1938). In the present study we use citation analysis to show that the formalization of the eponym Benford's law (a name itself questionable for overlooking Newcomb's contribution) by Raimi (1976) had a strong adverse effect on the subsequent citations of Newcomb (1881). Furthermore, we identify the papers published over various decades of the developmental history of the FD phenomenon which later turned out to be among the most cited in the field. We find that the omission of Newcomb (1881) from the references of these prominent papers, whether intentional or occasionally out of ignorance, is responsible for its far lower citation count in comparison to Benford (1938).
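For reference, the logarithmic first-digit distribution both authors described, together with a helper for computing empirical first-digit frequencies:

```python
import math

def first_digit(x: float) -> int:
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

# Newcomb-Benford first-digit law: P(d) = log10(1 + 1/d), d = 1..9.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Powers of 2 are a classic example that closely follows the law.
sample = [2 ** n for n in range(1, 200)]
freq = {d: sum(first_digit(v) == d for v in sample) / len(sample)
        for d in range(1, 10)}
```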
https://arxiv.org/abs/2601.02395
Academic Papers
svg
036cbeceaa6f3ff3bf3807875ed096525af21a4c58dd9c915ee8a7fa0af2f8b2
2026-01-07T00:00:00-05:00
Detecting and Mitigating Treatment Leakage in Text-Based Causal Inference: Distillation and Sensitivity Analysis
arXiv:2601.02400v1 Announce Type: cross Abstract: Text-based causal inference increasingly employs textual data as proxies for unobserved confounders, yet this approach introduces a previously undertheorized source of bias: treatment leakage. Treatment leakage occurs when text intended to capture confounding information also contains signals predictive of treatment status, thereby inducing post-treatment bias in causal estimates. Critically, this problem can arise even when documents precede treatment assignment, as authors may employ future-referencing language that anticipates subsequent interventions. Despite growing recognition of this issue, no systematic methods exist for identifying and mitigating treatment leakage in text-as-confounder applications. This paper addresses this gap through three contributions. First, we provide formal statistical and set-theoretic definitions of treatment leakage that clarify when and why bias occurs. Second, we propose four text distillation methods -- similarity-based passage removal, distant supervision classification, salient feature removal, and iterative nullspace projection -- designed to eliminate treatment-predictive content while preserving confounder information. Third, we validate these methods through simulations using synthetic text and an empirical application examining International Monetary Fund structural adjustment programs and child mortality. Our findings indicate that moderate distillation optimally balances bias reduction against confounder retention, whereas overly stringent approaches degrade estimate precision.
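A hedged sketch of the first distillation method, similarity-based passage removal, using TF-IDF similarity to a treatment description as the leakage signal (the threshold and featurization are illustrative choices, not the paper's specification):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def distill_document(passages: list[str], treatment_desc: str,
                     threshold: float = 0.3) -> list[str]:
    """Drop passages whose TF-IDF cosine similarity to a description of the
    treatment exceeds a threshold, keeping the remainder as the confounder
    proxy. Overly stringent thresholds would also strip confounder content,
    which is the bias/precision trade-off the paper studies."""
    vec = TfidfVectorizer().fit(passages + [treatment_desc])
    P = vec.transform(passages)
    t = vec.transform([treatment_desc])
    sims = cosine_similarity(P, t).ravel()
    return [p for p, s in zip(passages, sims) if s < threshold]
```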
https://arxiv.org/abs/2601.02400
Academic Papers
svg
92937ca0b813d10847539c4814f5c64f2208754778690cae42ce5a1673b83893
2026-01-07T00:00:00-05:00
OpenFOAM computational fluid dynamics (CFD) solver for magnetohydrodynamic open cycles, applied to the Sakhalin pulsed magnetohydrodynamic generator (PMHDG)
arXiv:2601.02406v1 Announce Type: cross Abstract: In the current study, we present a mathematical and computational fluid dynamics (CFD) model for simulating open-cycle linear Faraday-type continuous-electrode channels of magnetohydrodynamic (MHD) power generators, operating on combustion plasma. The model extends the Favre-averaged Navier-Stokes equations to account for the electric properties of the flowing plasma gas and its reaction to the applied magnetic field. The model takes into account various effects, such as the Lorentz force, turbulence, compressibility, and energy extraction from the plasma, and it adopts an electric potential technique along with the low magnetic Reynolds number (Rem) approximation. The model is numerically implemented using the multiphysics open-source computer programming environment "OpenFOAM," which combines the finite volume method (FVM) and the object-oriented programming (OOP) concept. The capabilities of the model are demonstrated by simulating the supersonic channel of the large-scale pulsed MHD generator (PMHDG) called "Sakhalin", with the aid of collected data and empirical expressions in the literature about its tested operation. Sakhalin was the world's largest PMHDG, with a demonstrated peak electric power output of 510 MW. Sakhalin operated on solid-propellant plasma (SPP), and it had a single supersonic divergent Faraday-type continuous-electrode channel with a length of 4.5 m. We check the validity of the model through comparisons with independent results for the Sakhalin PMHDG. Then, we process our three-dimensional simulation results to provide scalar characteristics of the Sakhalin channel, one-dimensional profiles along the longitudinal centerline, and three-dimensional distributions in the entire channel.
https://arxiv.org/abs/2601.02406
Academic Papers
svg
24d7b9ec1f7b86f8a1855c12b63b7583cd25bb97293590bcc227fe361fe38c58
2026-01-07T00:00:00-05:00
A large-scale nanocrystal database with aligned synthesis and properties enabling generative inverse design
arXiv:2601.02424v1 Announce Type: cross Abstract: The synthesis of nanocrystals has been highly dependent on trial-and-error, due to the complex correlation between synthesis parameters and physicochemical properties. Although deep learning offers a potential methodology to achieve generative inverse design, it is still hindered by the scarcity of high-quality datasets that align nanocrystal synthesis routes with their properties. Here, we present the construction of a large-scale, aligned Nanocrystal Synthesis-Property (NSP) database and demonstrate its capability for generative inverse design. To extract structured synthesis routes and their corresponding product properties from literature, we develop NanoExtractor, a large language model (LLM) enhanced by well-designed augmentation strategies. NanoExtractor is validated against human experts, achieving a weighted average score of 88% on the test set, significantly outperforming chemistry-specialized (3%) and general-purpose LLMs (38%). The resulting NSP database contains nearly 160,000 aligned entries and serves as training data for our NanoDesigner, an LLM for inverse synthesis design. The generative capability of NanoDesigner is validated through the successful design of viable synthesis routes for both well-established PbSe nanocrystals and rarely reported MgF2 nanocrystals. Notably, the model recommends a counter-intuitive, non-stoichiometric precursor ratio (1:1) for MgF2 nanocrystals, which is experimentally confirmed as critical for suppressing byproducts. Our work bridges the gap between unstructured literature and data-driven synthesis, and also establishes a powerful human-AI collaborative paradigm for accelerating nanocrystal discovery.
https://arxiv.org/abs/2601.02424
Academic Papers
svg
c90405c328bcb11fcb1c5fc16ffd2efc34881b42e19e93313ce0c877c0fa6a8f
2026-01-07T00:00:00-05:00
Formal Modeling and Verification of Grover's Algorithm
arXiv:2601.02435v1 Announce Type: cross Abstract: Grover's algorithm relies on the superposition and interference of quantum mechanics, which is more efficient than classical computing in specific tasks such as searching an unsorted database. Due to the high complexity of quantum mechanics, the correctness of quantum algorithms is difficult to guarantee through traditional simulation methods. By contrast, the fundamental concepts and mathematical structure of Grover's algorithm can be formalized into logical expressions and verified by higher-order logical reasoning. In this paper, we formally model and verify Grover's algorithm in the HOL Light theorem prover. We focus on proving key properties such as the unitarity of its oracle and diffusion operators, the monotonicity of the success probability with respect to the number of iterations, and an exact expression for the optimal iteration count. By analyzing a concrete application to integer factorization, we demonstrate the practicality and prospects of our work.
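The quantities the paper verifies formally have simple closed forms that can be checked numerically; for N items with M marked, the success probability after k Grover iterations and the optimal iteration count are:

```python
import math

def grover_success_probability(N: int, M: int, k: int) -> float:
    """Standard closed form: sin^2((2k+1)*theta) with theta = asin(sqrt(M/N)).
    These are the quantities behind the monotonicity and optimal-iteration
    properties the paper proves in HOL Light (assuming 1 <= M < N)."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * k + 1) * theta) ** 2

def optimal_iterations(N: int, M: int) -> int:
    theta = math.asin(math.sqrt(M / N))
    return math.floor(math.pi / (4 * theta))    # k* ~ (pi/4) * sqrt(N/M)

# N = 4, M = 1: one iteration finds the marked item with certainty.
assert abs(grover_success_probability(4, 1, optimal_iterations(4, 1)) - 1.0) < 1e-9
```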
https://arxiv.org/abs/2601.02435
Academic Papers
svg
d3f58de0db35628ae8dcaa22bb81eda88edb9660af5c7a487bb75c9d7a3a67c4
2026-01-07T00:00:00-05:00
Deep Learning Superresolution for 7T Knee MR Imaging: Impact on Image Quality and Diagnostic Performance
arXiv:2601.02436v1 Announce Type: cross Abstract: Background: Deep learning superresolution (SR) may enhance musculoskeletal MR image quality, but its diagnostic value in knee imaging at 7T is unclear. Objectives: To compare image quality and diagnostic performance of SR, low-resolution (LR), and high-resolution (HR) 7T knee MRI. Methods: In this prospective study, 42 participants underwent 7T knee MRI with LR (0.8 × 0.8 × 2 mm³) and HR (0.4 × 0.4 × 2 mm³) sequences. SR images were generated from LR data using a Hybrid Attention Transformer model. Three radiologists assessed image quality, anatomic conspicuity, and detection of knee pathologies. Arthroscopy served as reference in 10 cases. Results: SR images showed higher overall quality than LR (median score 5 vs 4, P=.095). Conclusions: Deep learning superresolution improved subjective image quality in 7T knee MRI but did not increase diagnostic accuracy compared with standard LR imaging.
https://arxiv.org/abs/2601.02436
Academic Papers
svg
47acc33b14977e8745c6587970e77eab33ef89aca6b663ced2975dac25fda71a
2026-01-07T00:00:00-05:00
Mitigating Long-Tailed Anomaly Score Distributions with Importance-Weighted Loss
arXiv:2601.02440v1 Announce Type: cross Abstract: Anomaly detection is crucial in industrial applications for identifying rare and unseen patterns to ensure system reliability. Traditional models, trained on a single class of normal data, struggle with real-world distributions where normal data exhibit diverse patterns, leading to class imbalance and long-tailed anomaly score distributions (LTD). This imbalance skews model training and degrades detection performance, especially for minority instances. To address this issue, we propose a novel importance-weighted loss designed specifically for anomaly detection. Compared to the previous method for LTD in classification, our method does not require prior knowledge of normal data classes. Instead, we introduce a weighted loss function that incorporates importance sampling to align the distribution of anomaly scores with a target Gaussian, ensuring a balanced representation of normal data. Extensive experiments on three benchmark image datasets and three real-world hyperspectral imaging datasets demonstrate the robustness of our approach in mitigating LTD-induced bias. Our method improves anomaly detection performance by 0.043, highlighting its effectiveness in real-world applications.
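A hedged sketch of the reweighting idea: estimate the current anomaly-score density with a KDE and weight each sample by the ratio of a target Gaussian density to it (the density estimator and normalization are our choices, not necessarily the paper's):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def gaussian_importance_weights(scores: np.ndarray, mu: float = 0.0,
                                sigma: float = 1.0) -> np.ndarray:
    """Importance weights that pull a long-tailed anomaly-score distribution
    toward a target Gaussian, without any knowledge of normal-data classes:
    w(s) = target_density(s) / current_density(s), normalized to mean 1.
    Samples in over-represented score regions are down-weighted."""
    current = gaussian_kde(scores)(scores)            # estimated density at each score
    target = norm.pdf(scores, loc=mu, scale=sigma)    # target Gaussian density
    w = target / np.maximum(current, 1e-12)
    return w / w.mean()                               # normalize to mean 1
```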
https://arxiv.org/abs/2601.02440
Academic Papers
svg
fd6876f5b54a4707e4fc710e308766241cc6b21c41e08c3f8d2ed717e1afebab
2026-01-07T00:00:00-05:00
Star Formation in Galaxy Collisions: Dependence on Impact Velocity and Gas Mass of Galaxies in GADGET-4 Simulations
arXiv:2601.02506v1 Announce Type: cross Abstract: This work investigates variations in the star formation rate during galaxy collisions when the initial conditions of velocity and gas mass are altered. For this purpose, hydrodynamic simulations were performed using the GADGET-4 code, with initial conditions generated by the Galstep and SnapshotJoiner programs. Systems of two galaxies on a head-on collision course were modeled with relative initial velocities ranging from 100 km/s to 1000 km/s, considering two scenarios: the first with identical galaxies, and the second with galaxies of different sizes. In simulations with higher initial relative velocities, both scenarios showed more intense peaks in the star formation rate, triggered by the first contact of the collision and followed by a strong decline caused by gas dispersion. In contrast, for systems with lower initial velocities, mergers between galaxies were observed, leading to multiple peaks in the star formation rate. A greater initial distance between galaxies was also linked to whether or not the galaxy system merges, since it implies longer timescales for gravitational action, which leads to higher relative velocities at the moment of collision. Furthermore, the star formation rate in galaxies was found to have a clear dependence on the initial gas content. Overall, our results show that the relative impact velocity, the initial distance between the galaxies, and the gas content are important parameters for analyzing the star formation rate in colliding galaxies.
https://arxiv.org/abs/2601.02506
Academic Papers
svg
4f8333c1957eb86c7c87d91cddabe40ccdd5881b3ee7a4b6c1c5737672d3cfb3
2026-01-07T00:00:00-05:00
Compressed Qubit Noise Spectroscopy: Piecewise-Linear Modeling and Rademacher Measurements
arXiv:2601.02516v1 Announce Type: cross Abstract: Random pulse sequences are a powerful method for qubit noise spectroscopy, enabling efficient reconstruction of sparse noise spectra. Here, we advance this method in two complementary directions. First, we extend the method using a regularizer based on the total generalized variation (TGV) norm, in order to reconstruct a larger class of noise spectra, namely piecewise-linear noise spectra, which more realistically model many physical systems. We show through numerical simulations that the new method resolves finer spectral features, while maintaining an order-of-magnitude speedup over conventional approaches to noise spectroscopy. Second, we simplify the experimental implementation of the method, by introducing Rademacher measurements for reconstructing sparse noise spectra. These measurements use pseudorandom pulse sequences that can be generated in real time from a short random seed, reducing experimental complexity without compromising reconstruction accuracy. Together, these developments broaden the reach of random pulse sequences for accurate and efficient noise characterization in realistic quantum systems.
https://arxiv.org/abs/2601.02516
Academic Papers
svg
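Two ingredients of the entry above (arXiv:2601.02516) admit compact illustrations: seed-reproducible Rademacher measurement patterns, and sparse reconstruction under a piecewise-linear prior. The sketch below (requires numpy and cvxpy) uses an L1 penalty on second differences as a simplified stand-in for the TGV regularizer, and a hypothetical sensing matrix A in place of the actual filter-function physics.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
n, m = 128, 40                            # spectrum length, number of measurements

# (1) Rademacher +/-1 patterns, fully reproducible from the short seed above
A = rng.integers(0, 2, size=(m, n)) * 2.0 - 1.0

# ground truth: a piecewise-linear (hat-shaped) noise spectrum
x_true = np.maximum(0.0, 1.0 - np.abs(np.arange(n) - 64) / 20.0)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# (2) reconstruction: least squares plus an L1 penalty on second differences,
# which promotes piecewise-linear solutions (a simplified stand-in for TGV)
x = cp.Variable(n)
objective = cp.sum_squares(A @ x - y) + 0.5 * cp.norm1(cp.diff(x, 2))
cp.Problem(cp.Minimize(objective)).solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))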
4aa9728eeb8bdfc43e37fac3f20447a2595390c2fba7867dd5348dc8fcb53c4e
2026-01-07T00:00:00-05:00
Diffusion Computation versus Quantum Computation: A Comparative Model for Order Finding and Factoring
arXiv:2601.02518v1 Announce Type: cross Abstract: We study a hybrid computational model for integer factorization in which the only non-classical resource is access to an \emph{iterated diffusion process} on a finite graph. Concretely, a \emph{diffusion step} is defined to be one application of a symmetric stochastic matrix (the half-lazy walk operator) to an $\ell^{1}$--normalized state vector, followed by an optional readout of selected coordinates. Let $N\ge 3$ be an odd integer which is neither prime nor a prime power, and let $b\in(\mathbb{Z}/N\mathbb{Z})^\ast$ have odd multiplicative order $r={\rm ord}_N(b)$. We construct, without knowing $r$ in advance, a weighted Cayley graph whose vertex set is the cyclic subgroup $\langle b\rangle$ and whose edges correspond to the powers $b^{\pm 2^t}$ for $t\le \lfloor \log_2 N\rfloor+1$. Using an explicit spectral decomposition together with an elementary doubling lemma, we show that $r$ can be recovered from a single heat-kernel value after at most $O((\log_2 N)^2)$ diffusion steps, with an effective bound. We then combine this order-finding model with the standard reduction from factoring to order finding (in the spirit of Shor's framework) to obtain a randomized factorization procedure whose success probability depends only on the number $m$ of distinct prime factors of $N$. Our comparison with Shor's algorithm is \emph{conceptual and model-based}. We replace unitary $\ell^2$ evolution by Markovian $\ell^1$ evolution, and we report complexity in two cost measures: digital steps and diffusion steps. Finally, we include illustrative examples and discussion of practical implementations.
https://arxiv.org/abs/2601.02518
Academic Papers
svg
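A toy rendering of the diffusion primitive in the entry above (arXiv:2601.02518): one diffusion step applies the half-lazy walk operator P = (I + A/d)/2 of the (regular, weighted) Cayley graph on the subgroup <b> of (Z/NZ)* with generators b^{+/-2^t} to an l1-normalized state vector. This shows the computational model only, not the order-finding procedure or its heat-kernel analysis.

import numpy as np

def subgroup(N, b):
    """Enumerate the cyclic subgroup <b> of (Z/NZ)*."""
    elems, x = [1], b % N
    while x != 1:
        elems.append(x)
        x = (x * b) % N
    return elems

def half_lazy_walk(N, b):
    """P = (I + A/d)/2 for the Cayley graph on <b> with generators b^{+/-2^t}."""
    elems = subgroup(N, b)
    idx = {e: i for i, e in enumerate(elems)}
    r = len(elems)
    A = np.zeros((r, r))
    for t in range(N.bit_length() + 1):
        g = pow(b, 1 << t, N)
        g_inv = pow(g, -1, N)            # modular inverse (Python 3.8+)
        for e, i in idx.items():
            A[i, idx[(e * g) % N]] += 1.0
            A[i, idx[(e * g_inv) % N]] += 1.0
    d = A.sum(axis=1)[0]                 # Cayley graphs are regular
    return 0.5 * (np.eye(r) + A / d), r

P, r = half_lazy_walk(91, 16)            # ord_91(16) = 3, an odd order
state = np.zeros(r); state[0] = 1.0      # l1-normalized start at the identity
for _ in range(10):                      # ten diffusion steps, readout at the end
    state = P @ state
print("order r =", r, " heat at identity:", state[0])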
3e4c6443c8b96d981a10fa2678dea76fd2eceec7caae11730721ad5f96d8c47c
2026-01-07T00:00:00-05:00
First Provably Optimal Asynchronous SGD for Homogeneous and Heterogeneous Data
arXiv:2601.02523v1 Announce Type: cross Abstract: Artificial intelligence has advanced rapidly through large neural networks trained on massive datasets using thousands of GPUs or TPUs. Such training can occupy entire data centers for weeks and requires enormous computational and energy resources. Yet the optimization algorithms behind these runs have not kept pace. Most large scale training still relies on synchronous methods, where workers must wait for the slowest device, wasting compute and amplifying the effects of hardware and network variability. Removing synchronization seems like a simple fix, but asynchrony introduces staleness, meaning updates computed on outdated models. This makes analysis difficult, especially when delays arise from system level randomness rather than algorithmic choices. As a result, the time complexity of asynchronous methods remains poorly understood. This dissertation develops a rigorous framework for asynchronous first order stochastic optimization, focusing on the core challenge of heterogeneous worker speeds. Within this framework, we show that with proper design, asynchronous SGD can achieve optimal time complexity, matching guarantees previously known only for synchronous methods. Our first contribution, Ringmaster ASGD, attains optimal time complexity in the homogeneous data setting by selectively discarding stale updates. The second, Ringleader ASGD, extends optimality to heterogeneous data, common in federated learning, using a structured gradient table mechanism. Finally, ATA improves resource efficiency by learning worker compute time distributions and allocating tasks adaptively, achieving near optimal wall clock time with less computation. Together, these results establish asynchronous optimization as a theoretically sound and practically efficient foundation for distributed learning, showing that coordination without synchronization can be both feasible and optimal.
https://arxiv.org/abs/2601.02523
Academic Papers
svg
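A toy, sequential simulation of the staleness-discarding idea attributed above to Ringmaster ASGD (arXiv:2601.02523). The threshold rule, all constants, and the deterministic quadratic objective are illustrative assumptions, not the dissertation's algorithm.

import numpy as np

rng = np.random.default_rng(0)
dim, n_workers, max_staleness, lr = 10, 4, 3, 0.1
x_star = rng.standard_normal(dim)           # optimum of f(x) = 0.5*||x - x_star||^2
grad = lambda x: x - x_star                  # deterministic gradient for clarity

x, version = np.zeros(dim), 0
# each worker holds (model_snapshot, snapshot_version); finishing order is random
inbox = [(x.copy(), 0) for _ in range(n_workers)]

for step in range(200):
    w = rng.integers(n_workers)              # a random worker finishes next
    snapshot, snap_version = inbox[w]
    staleness = version - snap_version
    if staleness <= max_staleness:           # apply only sufficiently fresh updates
        x = x - lr * grad(snapshot)
        version += 1
    # stale updates are discarded; either way the worker grabs the current model
    inbox[w] = (x.copy(), version)

print("distance to optimum:", np.linalg.norm(x - x_star))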
bf2428cc58451882bfe24f0fecbfd2058c42b55bdcbf47f4c5e65d478fe4c577
2026-01-07T00:00:00-05:00
A Green Solution for Breast Region Segmentation Using Deep Active Learning
arXiv:2601.02538v1 Announce Type: cross Abstract: Purpose: Annotation of medical breast images is an essential step toward better diagnostics but a time-consuming task. This research focuses on different sample selection strategies within deep active learning for Breast Region Segmentation (BRS) to lessen the computational cost of training and make effective use of resources. Methods: The Stavanger breast MRI dataset containing 59 patients was used in this study, with FCN-ResNet50 adopted as a sustainable deep learning (DL) model. A novel sample selection approach based on Breast Anatomy Geometry (BAG) analysis was introduced to group data with similar informative features for DL. Patient positioning and breast size were considered the key selection criteria in this process. Four selection strategies (Random Selection, Nearest Point, Breast Size, and a hybrid of all three) were evaluated using an active learning framework. Four training data proportions of 10%, 20%, 30%, and 40% were used for model training, with the remaining data reserved for testing. Model performance was assessed using Dice score, Intersection over Union, precision, and recall, along with 5-fold cross-validation to enhance generalizability. Results: Increasing the training data proportion from 10% to 40% improved segmentation performance for nearly all strategies, except for Random Selection. The Nearest Point strategy consistently achieved the lowest carbon footprint at 30% and 40% data proportions. Overall, combining the Nearest Point strategy with 30% of the training data provided the best balance between segmentation performance, efficiency, and environmental sustainability. Keywords: Deep Active Learning, Breast Region Segmentation, Human-centered analysis
https://arxiv.org/abs/2601.02538
Academic Papers
svg
73a7118c77d81cae1e781775018443c31344bc2b3285646f897ead7b782dc066
2026-01-07T00:00:00-05:00
AI-exposed jobs deteriorated before ChatGPT
arXiv:2601.02554v1 Announce Type: cross Abstract: Public debate links worsening job prospects for AI-exposed occupations to the release of ChatGPT in late 2022. Using monthly U.S. unemployment insurance records, we measure occupation- and location-specific unemployment risk and find that risk rose in AI-exposed occupations beginning in early 2022, months before ChatGPT. Analyzing millions of LinkedIn profiles, we show that graduate cohorts from 2021 onward entered AI-exposed jobs at lower rates than earlier cohorts, with gaps opening before late 2022. Finally, from millions of university syllabi, we find that graduates taking more AI-exposed curricula had higher first-job pay and shorter job searches after ChatGPT. Together, these results point to forces pre-dating generative AI and to the ongoing value of LLM-relevant education.
https://arxiv.org/abs/2601.02554
Academic Papers
svg
c8265555aa10183fbddaeeafebf1285c58cef0a4d84e6616bab4066da79178e2
2026-01-07T00:00:00-05:00
Comparative Analysis of Binarization Methods for Medical Image Hashing on the ODIR Dataset
arXiv:2601.02564v1 Announce Type: cross Abstract: In this study, we evaluated four binarization methods on the ODIR dataset using deep feature embeddings: Locality-Sensitive Hashing (LSH), Iterative Quantization (ITQ), Kernel-based Supervised Hashing (KSH), and Supervised Discrete Hashing (SDH). Experimental results show that SDH achieved the best performance, with an mAP@100 of 0.9184 using only 32-bit codes, outperforming LSH, ITQ, and KSH. Compared with prior studies, our method proved highly competitive: Fang et al. reported 0.7528 (Fundus-iSee, 48 bits) and 0.8856 (ASOCT-Cataract, 48 bits), while Wijesinghe et al. achieved 94.01 (KVASIR, 256 bits). Despite using significantly fewer bits, our SDH-based framework reached retrieval accuracy close to the state-of-the-art. These findings demonstrate that SDH is the most effective approach among those tested, offering a practical balance of accuracy, storage, and efficiency for medical image retrieval and device inventory management.
https://arxiv.org/abs/2601.02564
Academic Papers
svg
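A reference sketch of how mAP@100 over binary hash codes is commonly computed (Hamming-distance ranking, with one common average-precision convention). The random codes and labels below stand in for the ODIR embeddings of arXiv:2601.02564.

import numpy as np

def map_at_k(codes_q, labels_q, codes_db, labels_db, k=100):
    """Mean average precision at k under Hamming-distance ranking of {0,1} codes."""
    aps = []
    for q, lab in zip(codes_q, labels_q):
        dist = np.count_nonzero(codes_db != q, axis=1)   # Hamming distances
        order = np.argsort(dist, kind="stable")[:k]
        rel = (labels_db[order] == lab).astype(float)
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        precision_at_i = np.cumsum(rel) / (np.arange(k) + 1)
        aps.append((precision_at_i * rel).sum() / rel.sum())
    return float(np.mean(aps))

rng = np.random.default_rng(0)
db = rng.integers(0, 2, (1000, 32)); db_y = rng.integers(0, 8, 1000)
q = rng.integers(0, 2, (50, 32));    q_y = rng.integers(0, 8, 50)
print(map_at_k(q, q_y, db, db_y))    # near chance (1/8) for random codes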
7a96904d432c2d0d2f15cb6113bc52d8637482f943eef590e4c0e15fc5dd44d2
2026-01-07T00:00:00-05:00
Annealed Langevin Posterior Sampling (ALPS): A Rapid Algorithm for Image Restoration with Multiscale Energy Models
arXiv:2601.02594v1 Announce Type: cross Abstract: Solving inverse problems in imaging requires models that support efficient inference, uncertainty quantification, and principled probabilistic reasoning. Energy-Based Models (EBMs), with their interpretable energy landscapes and compositional structure, are well-suited for this task but have historically suffered from high computational costs and training instability. To overcome the historical shortcomings of EBMs, we introduce a fast distillation strategy to transfer the strengths of pre-trained diffusion models into multi-scale EBMs. These distilled EBMs enable efficient sampling and preserve the interpretability and compositionality inherent to potential-based frameworks. Leveraging EBM compositionality, we propose the Annealed Langevin Posterior Sampling (ALPS) algorithm for Maximum-A-Posteriori (MAP), Minimum Mean Square Error (MMSE), and uncertainty estimates for inverse problems in imaging. Unlike diffusion models that use complex guidance strategies for latent variables, we perform annealing on static posterior distributions that are well-defined and composable. Experiments on image inpainting and MRI reconstruction demonstrate that our method matches or surpasses diffusion-based baselines in both accuracy and efficiency, while also supporting MAP recovery. Overall, our framework offers a scalable and principled solution for inverse problems in imaging, with potential for practical deployment in scientific and clinical settings. ALPS code is available at https://github.com/JyoChand/ALPS.
https://arxiv.org/abs/2601.02594
Academic Papers
svg
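A schematic of annealed Langevin dynamics over a sequence of progressively sharper energies, the generic mechanism that ALPS (arXiv:2601.02594) builds on. The quadratic energies below are stand-ins for the distilled multi-scale EBMs plus a data-fidelity term; all constants are illustrative.

import numpy as np

rng = np.random.default_rng(1)
y, A = np.array([1.0, -2.0]), np.eye(2)        # toy measurement y = A x + noise

def grad_energy(x, sigma):
    prior_grad = x / (1.0 + sigma**2)          # smoothed Gaussian prior at scale sigma
    data_grad = A.T @ (A @ x - y) / 0.1**2     # data-fidelity gradient
    return prior_grad + data_grad

x = rng.standard_normal(2)
for sigma in [2.0, 1.0, 0.5, 0.25, 0.1]:       # annealing schedule, coarse to fine
    step = 1e-4 * sigma**2                      # smaller steps at finer scales
    for _ in range(200):                        # Langevin updates at this scale
        noise = rng.standard_normal(2)
        x = x - step * grad_energy(x, sigma) + np.sqrt(2 * step) * noise
print("posterior sample:", x)                   # concentrates near y for this toy model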
03a86cecb8faed50a471905c71028d308d13bda215e286a9da8bde22d734c699
2026-01-07T00:00:00-05:00
Structural reducibility of hypergraphs
arXiv:2601.02603v1 Announce Type: cross Abstract: Higher-order interactions provide a nuanced understanding of the relational structure of complex systems beyond traditional pairwise interactions. However, higher-order network analyses also incur more cumbersome interpretations and greater computational demands than their pairwise counterparts. Here we present an information-theoretic framework for determining the extent to which a hypergraph representation of a networked system is structurally redundant, and for identifying its most critical higher orders of interaction that allow us to remove these redundancies while preserving essential higher-order structure.
https://arxiv.org/abs/2601.02603
Academic Papers
svg
3c097f06dfa8f7423621bfa326a76165f1b121d69db4fb15342bbc2a9903ed32
2026-01-07T00:00:00-05:00
Extremum Seeking Control for Wave-PDE Actuation with Distributed Effects
arXiv:2601.02607v1 Announce Type: cross Abstract: This paper deals with the gradient-based extremum seeking control (ESC) with actuation dynamics governed by distributed wave partial differential equations (PDEs). To achieve the control objective of real-time optimization for this class of infinite-dimensional systems, we first solve the trajectory generation problem to re-design the additive perturbation signal of the ESC system. Then, we develop a boundary control law through the backstepping method to compensate for the wave PDE with distributed effects, which ensures the exponential stability of the average closed-loop system by means of a Lyapunov-based analysis. At last, by employing the averaging theory for infinite-dimensional systems, we prove that the closed-loop trajectories converge to a small neighborhood surrounding the optimal point. Numerical simulations are presented to illustrate the effectiveness of the proposed method.
https://arxiv.org/abs/2601.02607
Academic Papers
svg
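The classical gradient-based extremum seeking loop on a static map, which arXiv:2601.02607 extends to wave-PDE actuation dynamics (not modeled here). Gains, dither frequency, and the toy objective are illustrative.

import numpy as np

J = lambda theta: 5.0 - (theta - 2.0) ** 2      # unknown map, maximum at theta* = 2

dt, a, omega, k = 1e-3, 0.1, 50.0, 2.0          # step, dither amplitude/frequency, gain
theta_hat = 0.0
for i in range(int(20.0 / dt)):                 # 20 seconds of simulated time
    t = i * dt
    dither = a * np.sin(omega * t)
    y = J(theta_hat + dither)                   # probe the map with a perturbed input
    # demodulation: sin(wt) * y averages to (a/2) * J'(theta_hat),
    # so this Euler step climbs the gradient of the unknown map
    theta_hat += dt * k * np.sin(omega * t) * y
print("estimated maximizer:", theta_hat)        # approaches 2.0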
8f61b8901182490365eb38946ddcd7c3d85b17e2a2042589e2d0376edc9d269c
2026-01-07T00:00:00-05:00
Hierarchical temporal receptive windows and zero-shot timescale generalization in biologically constrained scale-invariant deep networks
arXiv:2601.02618v1 Announce Type: cross Abstract: Human cognition integrates information across nested timescales. While the cortex exhibits hierarchical Temporal Receptive Windows (TRWs), local circuits often display heterogeneous time constants. To reconcile this, we trained biologically constrained deep networks, based on scale-invariant hippocampal time cells, on a language classification task mimicking the hierarchical structure of language (e.g., 'letters' forming 'words'). First, using a feedforward model (SITHCon), we found that a hierarchy of TRWs emerged naturally across layers, despite the network having an identical spectrum of time constants within layers. We then distilled these inductive priors into a biologically plausible recurrent architecture, SITH-RNN. Training a sequence of architectures ranging from generic RNNs to this restricted subset showed that the scale-invariant SITH-RNN learned faster with orders-of-magnitude fewer parameters, and generalized zero-shot to out-of-distribution timescales. These results suggest the brain employs scale-invariant, sequential priors - coding "what" happened "when" - making recurrent networks with such priors particularly well-suited to describe human cognition.
https://arxiv.org/abs/2601.02618
Academic Papers
svg
7d7bc31fb66e64dc6d9393f0a82251e1b9a208ca44e84431d9ee6b26c3a02690
2026-01-07T00:00:00-05:00
Statistical Inference for Fuzzy Clustering
arXiv:2601.02656v1 Announce Type: cross Abstract: Clustering is a central tool in biomedical research for discovering heterogeneous patient subpopulations, where group boundaries are often diffuse rather than sharply separated. Traditional methods produce hard partitions, whereas soft clustering methods such as fuzzy $c$-means (FCM) allow mixed memberships and better capture uncertainty and gradual transitions. Despite the widespread use of FCM, principled statistical inference for fuzzy clustering remains limited. We develop a new framework for weighted fuzzy $c$-means (WFCM) for settings with potential cluster size imbalance. Cluster-specific weights rebalance the classical FCM criterion so that smaller clusters are not overwhelmed by dominant groups, and the weighted objective induces a normalized density model with scale parameter $\sigma$ and fuzziness parameter $m$. Estimation is performed via a blockwise majorize--minimize (MM) procedure that alternates closed-form membership and centroid updates with likelihood-based updates of $(\sigma,\mathbf{w})$. The intractable normalizing constant is approximated by importance sampling using a data-adaptive Gaussian mixture proposal. We further provide likelihood ratio tests for comparing cluster centers and bootstrap-based confidence intervals. We establish consistency and asymptotic normality of the maximum likelihood estimator, validate the method through simulations, and illustrate it using single-cell RNA-seq and Alzheimer's Disease Neuroimaging Initiative (ADNI) data. These applications demonstrate stable uncertainty quantification and biologically meaningful soft memberships, ranging from well-separated cell populations under imbalance to a graded AD versus non-AD continuum consistent with disease progression.
https://arxiv.org/abs/2601.02656
Academic Papers
svg
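A compact sketch of fuzzy c-means updates under per-cluster weights w_k in the objective sum_{i,k} w_k * u_{ik}^m * ||x_i - c_k||^2, one natural weighted variant for illustration. The paper's WFCM estimation (MM updates of (sigma, w), importance-sampled normalizing constant, inference) is richer than this sketch.

import numpy as np

def wfcm(X, c=2, m=2.0, w=None, iters=50, eps=1e-12):
    n = len(X)
    w = np.ones(c) if w is None else np.asarray(w)
    rng = np.random.default_rng(0)
    C = X[rng.choice(n, c, replace=False)]              # init centroids from data
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) + eps   # (n, c)
        inv = (w * d2) ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)        # membership update
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]        # centroid update (w cancels)
    return U, C

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (20, 2))])  # imbalanced
U, C = wfcm(X, c=2, w=[1.0, 3.0])                       # up-weight the small cluster
print(C)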
9107c8dfddc1ce147632af7415f2a1d6e0ffd12c5696ef0115cc2565027966e8
2026-01-07T00:00:00-05:00
Branching $k$-path vertex cover of forests
arXiv:2601.02685v1 Announce Type: cross Abstract: We define a set $P$ to be a branching $k$-path vertex cover of an undirected forest $F$ if all leaves and isolated vertices (vertices of degree at most $1$) of $F$ belong to $P$ and every path on $k$ vertices (of length $k-1$) contains either a branching vertex (a vertex of degree at least $3$) or a vertex belonging to $P$. We define the branching $k$-path vertex cover number of an undirected forest $F$, denoted by $\psi_b(F,k)$, to be the number of vertices in the smallest branching $k$-path vertex cover of $F$. These notions for a rooted directed forest are defined similarly, with natural adjustments. We prove the lower bound $\psi_b(F,k) \geq \frac{n+3k-1}{2k}$ for undirected forests, the lower bound $\psi_b(F,k) \geq \frac{n+k}{2k}$ for rooted directed forests, and that both of them are tight.
https://arxiv.org/abs/2601.02685
Academic Papers
svg
033164ba54904145b6b0dca2f77e300f78b0e152bb233dde7fcd20bb0637eb3a
2026-01-07T00:00:00-05:00
Transform and Entropy Coding in AV2
arXiv:2601.02712v1 Announce Type: cross Abstract: AV2 is the successor to the AV1 royalty-free video coding standard developed by the Alliance for Open Media (AOMedia). Its primary objective is to deliver substantial compression gains and subjective quality improvements while maintaining low-complexity encoder and decoder operations. This paper describes the transform, quantization and entropy coding design in AV2, including redesigned transform kernels and data-driven transforms, expanded transform partitioning, and mode- and coefficient-dependent transform signaling. AV2 introduces several new coding tools, including Intra/Inter Secondary Transforms (IST), Trellis Coded Quantization (TCQ), Adaptive Transform Coding (ATC), Probability Adaptation Rate Adjustment (PARA), Forward Skip Coding (FSC), Cross Chroma Component Transforms (CCTX), Parity Hiding (PH) tools and improved lossless coding. These advances enable AV2 to deliver the highest quality video experience for video applications at a significantly reduced bitrate.
https://arxiv.org/abs/2601.02712
Academic Papers
svg
16d82987201716d4c7211d4059628c084b8d78ac53c7b010fef2c82079afdd0c
2026-01-07T00:00:00-05:00
Fast Conformal Prediction using Conditional Interquantile Intervals
arXiv:2601.02769v1 Announce Type: cross Abstract: We introduce Conformal Interquantile Regression (CIR), a conformal regression method that efficiently constructs near-minimal prediction intervals with guaranteed coverage. CIR leverages black-box machine learning models to estimate outcome distributions through interquantile ranges, transforming these estimates into compact prediction intervals while achieving approximate conditional coverage. We further propose CIR+ (Conditional Interquantile Regression with More Comparison), which enhances CIR by incorporating a width-based selection rule for interquantile intervals. This refinement yields narrower prediction intervals while maintaining comparable coverage, though at the cost of slightly increased computational time. Both methods address key limitations of existing distributional conformal prediction approaches: they handle skewed distributions more effectively than Conformalized Quantile Regression, and they achieve substantially higher computational efficiency than Conformal Histogram Regression by eliminating the need for histogram construction. Extensive experiments on synthetic and real-world datasets demonstrate that our methods optimally balance predictive accuracy and computational efficiency compared to existing approaches.
https://arxiv.org/abs/2601.02769
Academic Papers
svg
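For context, a split-conformal sketch built on an estimated quantile pair, essentially conformalized quantile regression. CIR/CIR+ (arXiv:2601.02769) replace this fixed quantile pair with a search over conditional interquantile intervals, which is not reproduced here.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (2000, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(2000)
tr, cal, te = np.split(rng.permutation(2000), [1000, 1500])

alpha = 0.1
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X[tr], y[tr])
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X[tr], y[tr])

# conformity score on the calibration split: how far y falls outside [lo, hi]
s = np.maximum(lo.predict(X[cal]) - y[cal], y[cal] - hi.predict(X[cal]))
q = np.quantile(s, np.ceil((1 - alpha) * (len(cal) + 1)) / len(cal))

covered = (y[te] >= lo.predict(X[te]) - q) & (y[te] <= hi.predict(X[te]) + q)
print("empirical coverage:", covered.mean())   # should be about 1 - alpha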
0e658870d04aaa72126e5bab24375d783022b11387cc2e314a7155bd05768ee7
2026-01-07T00:00:00-05:00
The Sequence Reconstruction of Permutations under Hamming Metric with Small Errors
arXiv:2601.02844v1 Announce Type: cross Abstract: The sequence reconstruction problem asks for the recovery of a sequence from multiple noisy copies, where each copy may contain up to $r$ errors. In the case of permutations on \(n\) letters under the Hamming metric, this problem is closely related to the parameter $N(n,r)$, the maximum intersection size of two Hamming balls of radius $r$. While previous work has resolved \(N(n,r)\) for small radii (\(r \leq 4\)) and established asymptotic bounds for larger \(r\), we present new exact formulas for \(r \in \{5,6,7\}\) using group action techniques. In addition, we develop a formula for \(N(n,r)\) based on the irreducible characters of the symmetric group \(S_n\), along with an algorithm that enables computation of \(N(n,r)\) for larger parameters, including cases such as \(N(43,8)\) and \(N(24,14)\).
https://arxiv.org/abs/2601.02844
Academic Papers
svg
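A brute-force check of N(n, r), the maximum intersection of two distinct radius-r Hamming balls in S_n, feasible only for tiny n; it can serve as a sanity check against closed-form values such as those derived in arXiv:2601.02844.

from itertools import permutations

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

def N(n, r):
    perms = list(permutations(range(n)))
    ident = tuple(range(n))                   # by symmetry, fix one center here
    best = 0
    for center in perms:
        if center == ident:                   # balls must have distinct centers
            continue
        size = sum(1 for p in perms
                   if hamming(p, ident) <= r and hamming(p, center) <= r)
        best = max(best, size)
    return best

print(N(5, 3))                                # tiny instance; cost grows like (n!)^2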
753db3ac069be3dc6ca2ef5e867a5120c267f6d3ec3f5ccb069a70cbf5b60022
2026-01-07T00:00:00-05:00
Lesion Segmentation in FDG-PET/CT Using Swin Transformer U-Net 3D: A Robust Deep Learning Framework
arXiv:2601.02864v1 Announce Type: cross Abstract: Accurate and automated lesion segmentation in Positron Emission Tomography / Computed Tomography (PET/CT) imaging is essential for cancer diagnosis and therapy planning. This paper presents a Swin Transformer UNet 3D (SwinUNet3D) framework for lesion segmentation in Fluorodeoxyglucose Positron Emission Tomography / Computed Tomography (FDG-PET/CT) scans. By combining shifted window self-attention with U-Net style skip connections, the model captures both global context and fine anatomical detail. We evaluate SwinUNet3D on the AutoPET III FDG dataset and compare it against a baseline 3D U-Net. Results show that SwinUNet3D achieves a Dice score of 0.88 and IoU of 0.78, surpassing 3D U-Net (Dice 0.48, IoU 0.32) while also delivering faster inference times. Qualitative analysis demonstrates improved detection of small and irregular lesions, reduced false positives, and more accurate PET/CT fusion. While the framework is currently limited to FDG scans and trained under modest GPU resources, it establishes a strong foundation for future multi-tracer, multi-center evaluations and benchmarking against other transformer-based architectures. Overall, SwinUNet3D represents an efficient and robust approach to PET/CT lesion segmentation, advancing the integration of transformer-based models into oncology imaging workflows.
https://arxiv.org/abs/2601.02864
Academic Papers
svg
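Generic reference implementations of the Dice score and IoU reported above (arXiv:2601.02864), not the paper's evaluation code. Note the identity IoU = Dice / (2 - Dice), under which the reported pair (0.88, 0.78) is self-consistent up to rounding: 0.88 / 1.12 ~= 0.786.

import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice = 2*|P & G| / (|P| + |G|) over boolean masks of any dimensionality."""
    inter = np.logical_and(pred, gt).sum()
    return float((2 * inter + eps) / (pred.sum() + gt.sum() + eps))

def iou(pred, gt, eps=1e-8):
    """IoU = |P & G| / |P | G|; related to Dice via IoU = D / (2 - D)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float((inter + eps) / (union + eps))

p = np.zeros((4, 4, 4), bool); p[:2] = True   # toy 3-D prediction mask
g = np.zeros((4, 4, 4), bool); g[:3] = True   # toy 3-D ground-truth mask
print(dice(p, g), iou(p, g))                  # 0.8 and 0.8/(2-0.8) ~= 0.667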
298c7ed2145fd7b360b0f5cbe8a8f380e5d139cd351ee0bca4c053b7d2664ebd
2026-01-07T00:00:00-05:00
STIPP: Space-time in situ postprocessing over the French Alps using proper scoring rules
arXiv:2601.02882v1 Announce Type: cross Abstract: We propose Space-time in situ postprocessing (STIPP), a machine learning model that generates spatio-temporally consistent weather forecasts for a network of station locations. Gridded forecasts from classical numerical weather prediction or data-driven models often lack the necessary precision due to unresolved local effects. Typical statistical postprocessing methods correct these biases, but often degrade spatio-temporal correlation structures in doing so. Recent works based on generative modeling successfully improve spatial correlation structures but have to forecast every lead time independently. In contrast, STIPP makes joint spatio-temporal forecasts which have increased accuracy for surface temperature, wind, relative humidity and precipitation when compared to baseline methods. It makes hourly ensemble predictions given only a six-hourly deterministic forecast, blurring the boundary between postprocessing and temporal interpolation. By leveraging a multivariate proper scoring rule for training, STIPP contributes to ongoing work on data-driven atmospheric models supervised only with distribution marginals.
https://arxiv.org/abs/2601.02882
Academic Papers
svg
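The energy score, a standard multivariate proper scoring rule of the kind used to train joint space-time ensembles like STIPP's (arXiv:2601.02882). Whether STIPP uses exactly this rule is not stated in the abstract, so treat this as the generic ensemble estimator.

import numpy as np

def energy_score(ensemble, obs):
    """ES(F, y) = E||X - y|| - 0.5 * E||X - X'||, with X, X' ~ F independent.
    `ensemble` has shape (m, d); `obs` has shape (d,)."""
    m = len(ensemble)
    term1 = np.linalg.norm(ensemble - obs, axis=1).mean()
    diffs = ensemble[:, None, :] - ensemble[None, :, :]
    term2 = np.linalg.norm(diffs, axis=-1).sum() / (m * m)
    return float(term1 - 0.5 * term2)

rng = np.random.default_rng(0)
y = rng.standard_normal(24)                      # e.g. 24 hourly values at a station
sharp = y + 0.1 * rng.standard_normal((50, 24))  # calibrated, sharp ensemble
blurry = 3.0 * rng.standard_normal((50, 24))     # poorly calibrated ensemble
print(energy_score(sharp, y), "<", energy_score(blurry, y))  # lower is better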
66120439eb4fcfe7b345d7e797f4f93cb23f159bbbfa7fb97cb605ba74a910a5
2026-01-07T00:00:00-05:00
Enhanced 3D Gravity Inversion Using ResU-Net with Density Logging Constraints: A Dual-Phase Training Approach
arXiv:2601.02890v1 Announce Type: cross Abstract: Gravity exploration has become an important geophysical method due to its low cost and high efficiency. With the rise of artificial intelligence, data-driven gravity inversion methods based on deep learning (DL) possess physical property recovery capabilities that conventional regularization methods lack. However, existing DL methods suffer from insufficient prior information constraints, which leads to inversion models with large data fitting errors and unreliable results. Moreover, the inversion results lack constraints and matching from other exploration methods, leading to results that may contradict known geological conditions. In this study, we propose a novel approach that integrates prior density well logging information to address the above issues. First, we introduce a depth weighting function to the neural network (NN) and train it in the weighted density parameter domain. The NN, under the constraint of the weighted forward operator, demonstrates improved inversion performance, with the resulting inversion model exhibiting smaller data fitting errors. Next, we divide the entire network training into two phases: first training a large pre-trained network Net-I, and then using the density logging information as a constraint to obtain the fine-tuned network Net-II. Through testing and comparison on synthetic models and the Bishop model, the inversion quality of our method is significantly improved compared to the unconstrained data-driven DL inversion method. Additionally, we also conduct a comparison and discussion between our method and both the conventional focusing inversion (FI) method and its well logging constrained variant. Finally, we apply this method to the measured data from the San Nicolas mining area in Mexico, comparing and analyzing it with two recent gravity inversion methods based on DL.
https://arxiv.org/abs/2601.02890
Academic Papers
svg
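A sketch of the standard depth-weighting idea (in the style of Li and Oldenburg) that arXiv:2601.02890 introduces into network training. The exponent and reference depth below are conventional illustrative choices, not values from the paper.

import numpy as np

def depth_weighting(z, z0=10.0, beta=2.0):
    """w(z) = (z + z0)^(-beta/2): boosts the sensitivity of deep cells so the
    inversion does not pile all density contrast at the surface."""
    return (z + z0) ** (-beta / 2.0)

# Working in the weighted density domain amounts to substituting m_w = w * m
# and replacing the forward kernel G by G @ diag(1/w), so that deep and
# shallow cells contribute more evenly to the data misfit.
z = np.linspace(0.0, 500.0, 6)            # toy cell depths in meters
print(np.round(depth_weighting(z), 4))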
17613e90e61baac531bc3004dc3c6c4393bbb1631992de57e4893f624d099cf2
2026-01-07T00:00:00-05:00
Transducing Linear Decompositions of Tournaments
arXiv:2601.02999v1 Announce Type: cross Abstract: Boja\'nczyk, Pilipczuk, and Grohe [LICS '18] proved that for graphs of bounded linear clique-width, clique-decompositions of bounded width can be produced by a CMSO transduction. We show that in the case of tournaments, a first-order transduction suffices. This implies that the logics CMSO and existential MSO are equivalent over bounded linear clique-width tournaments.
https://arxiv.org/abs/2601.02999
Academic Papers
svg
d5f796293acf03853d296b4cdac073f4cd17045c269c975107a293d1aa528f9b
2026-01-07T00:00:00-05:00
DNACHUNKER: Learnable Tokenization for DNA Language Models
arXiv:2601.03019v1 Announce Type: cross Abstract: DNA language models have emerged as powerful tools for decoding the complex language of DNA sequences. However, the performance of these models is heavily affected by their tokenization strategy, i.e., a method used to parse DNA sequences into a shorter sequence of chunks. In this work, we propose DNACHUNKER, which integrates a learnable dynamic DNA tokenization mechanism and is trained as a masked language model. Adopting the dynamic chunking procedure proposed by H-Net, our model learns to segment sequences into variable-length chunks. This dynamic chunking offers two key advantages: it is resilient to shifts and mutations in the DNA, and it allocates more detail to important functional areas. We demonstrate the performance of DNACHUNKER by training it on the human reference genome (HG38) and testing it on the Nucleotide Transformer and Genomic benchmarks. Further ablative experiments reveal that DNACHUNKER learns tokenization that grasps biological grammar and uses smaller chunks to preserve detail in important functional elements such as promoters and exons, while using larger chunks for repetitive, redundant regions.
https://arxiv.org/abs/2601.03019
Academic Papers
svg
69bafaffd8a414b5d1df7f7a2af742f8b1ab1d9eb719d40564a8e04e972e0f10
2026-01-07T00:00:00-05:00
Similarity-Sensitive Entropy: Induced Kernels and Data-Processing Inequalities
arXiv:2601.03064v1 Announce Type: cross Abstract: We study an entropy functional $H_K$ that is sensitive to a prescribed similarity structure on a state space. For finite spaces, $H_K$ coincides with the order-1 similarity-sensitive entropy of Leinster and Cobbold. We work in the general measure-theoretic setting of kernelled probability spaces $(\Omega,\mu,K)$ introduced by Leinster and Roff, and develop basic structural properties of $H_K$. Our main results concern the behavior of $H_K$ under coarse-graining. For a measurable map $f:\Omega\to Y$ and input law $\mu$, we define a law-induced kernel on $Y$ whose pullback minimally dominates $K$, and show that it yields a coarse-graining inequality and a data-processing inequality for $H_K$, for both deterministic maps and general Markov kernels. We also introduce conditional similarity-sensitive entropy and an associated mutual information, and compare their behavior to the classical Shannon case.
https://arxiv.org/abs/2601.03064
Academic Papers
svg
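On a finite space, the order-1 similarity-sensitive entropy of Leinster and Cobbold, which the entry above generalizes, is H_K(p) = -sum_i p_i * log((Kp)_i); with K = I it reduces to Shannon entropy. A short numerical illustration:

import numpy as np

def similarity_entropy(p, K):
    """Order-1 similarity-sensitive entropy: -sum_i p_i * log((K p)_i)."""
    Kp = K @ p
    return float(-(p * np.log(Kp)).sum())

p = np.array([0.25, 0.25, 0.5])
print(similarity_entropy(p, np.eye(3)))       # Shannon entropy, ~1.0397 nats

# if states 0 and 1 are highly similar, they partially count as the same
# outcome, so the effective entropy drops below the Shannon value
K = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(similarity_entropy(p, K))               # ~0.719 nats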
3e21ba937bc695f648f434246756daf9e5ee82d50b3bf07761a23c363c297ef1
2026-01-07T00:00:00-05:00
Computationally Efficient Estimation of Localized Treatment Effects in High-Dimensional Design Spaces using Gaussian Process Regression
arXiv:2601.03105v1 Announce Type: cross Abstract: Population-scale agent-based simulations of the opioid epidemic help evaluate intervention strategies and overdose outcomes in heterogeneous communities and provide estimates of localized treatment effects, which support the design of locally-tailored policies for precision public health. However, it is prohibitively costly to run simulations of all treatment conditions in all communities because the number of possible treatments grows exponentially with the number of interventions and levels at which they are applied. To address this need efficiently, we develop a metamodel framework, whereby treatment outcomes are modeled using a response function whose coefficients are learned through Gaussian process regression (GPR) on locally-contextualized covariates. We apply this framework to efficiently estimate treatment effects on overdose deaths in Pennsylvania counties. In contrast to classical designs such as fractional factorial design or Latin hypercube sampling, our approach leverages spatial correlations and posterior uncertainty to sequentially sample the most informative counties and treatment conditions. Using a calibrated agent-based opioid epidemic model, informed by county-level overdose mortality and baseline dispensing rate data for different treatments, we obtained county-level estimates of treatment effects on overdose deaths per 100,000 population for all treatment conditions in Pennsylvania, achieving approximately 5% average relative error using one-tenth the number of simulation runs required for exhaustive evaluation. Our bi-level framework provides a computationally efficient approach to decision support for policy makers, enabling rapid evaluation of alternative resource-allocation strategies to mitigate the opioid epidemic in local communities. The same analytical framework can be applied to guide precision public health interventions in other epidemic settings.
https://arxiv.org/abs/2601.03105
Academic Papers
svg
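A generic sketch of the metamodel loop described above (arXiv:2601.03105): fit a Gaussian process to simulated outcomes and sequentially query the design point with the largest posterior standard deviation. The closed-form `simulate` function is a hypothetical stand-in for the agent-based model, and the kernel and noise settings are illustrative.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def simulate(x):
    # hypothetical cheap stand-in for the expensive agent-based simulator
    return np.sin(3 * x[..., 0]) * x[..., 1] + 0.05 * rng.standard_normal(x.shape[:-1])

grid = rng.uniform(0, 1, (400, 2))                 # all (community, treatment) points
picked = list(rng.choice(400, 10, replace=False))  # small initial design
y = simulate(grid[picked])

for _ in range(30):                                # sequential uncertainty sampling
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3).fit(grid[picked], y)
    _, sd = gp.predict(grid, return_std=True)
    nxt = int(np.argmax(sd))                       # most uncertain condition next
    picked.append(nxt)
    y = np.append(y, simulate(grid[[nxt]]))

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3).fit(grid[picked], y)
err = np.abs(gp.predict(grid) - np.sin(3 * grid[:, 0]) * grid[:, 1]).mean()
print("mean abs surrogate error over the full design space:", err)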
a8f16da8fca90292eeb494da725884a44003c5154f76d5ca7006defe0649cc7b
2026-01-07T00:00:00-05:00
DiT-JSCC: Rethinking Deep JSCC with Diffusion Transformers and Semantic Representations
arXiv:2601.03112v1 Announce Type: cross Abstract: Generative joint source-channel coding (GJSCC) has emerged as a new Deep JSCC paradigm for achieving high-fidelity and robust image transmission under extreme wireless channel conditions, such as ultra-low bandwidth and low signal-to-noise ratio. Recent studies commonly adopt diffusion models as generative decoders, but they frequently produce visually realistic results with limited semantic consistency. This limitation stems from a fundamental mismatch between reconstruction-oriented JSCC encoders and generative decoders, as the former lack explicit semantic discriminability and fail to provide reliable conditional cues. In this paper, we propose DiT-JSCC, a novel GJSCC backbone that can jointly learn a semantics-prioritized representation encoder and a diffusion transformer (DiT) based generative decoder; our open-source project aims to promote future research in GJSCC. Specifically, we design a semantics-detail dual-branch encoder that aligns naturally with a coarse-to-fine conditional DiT decoder, prioritizing semantic consistency under extreme channel conditions. Moreover, a training-free adaptive bandwidth allocation strategy inspired by Kolmogorov complexity is introduced to further improve the transmission efficiency, thereby redefining the notion of information value in the era of generative decoding. Extensive experiments demonstrate that DiT-JSCC consistently outperforms existing JSCC methods in both semantic consistency and visual quality, particularly in extreme regimes.
https://arxiv.org/abs/2601.03112
Academic Papers
svg
5ffcfbc0e7e994c18ae89e439e3300d4f7f2f33cadf376648d12774b2ef9dcf6
2026-01-07T00:00:00-05:00
Transformers self-organize like newborn visual systems when trained in prenatal worlds
arXiv:2601.03117v1 Announce Type: cross Abstract: Do transformers learn like brains? A key challenge in addressing this question is that transformers and brains are trained on fundamentally different data. Brains are initially "trained" on prenatal sensory experiences (e.g., retinal waves), whereas transformers are typically trained on large datasets that are not biologically plausible. We reasoned that if transformers learn like brains, then they should develop the same structure as newborn brains when exposed to the same prenatal data. To test this prediction, we simulated prenatal visual input using a retinal wave generator. Then, using self-supervised temporal learning, we trained transformers to adapt to those retinal waves. During training, the transformers spontaneously developed the same structure as newborn visual systems: (1) early layers became sensitive to edges, (2) later layers became sensitive to shapes, and (3) the models developed larger receptive fields across layers. The organization of newborn visual systems emerges spontaneously when transformers adapt to a prenatal visual world. This developmental convergence suggests that brains and transformers learn in common ways and follow the same general fitting principles.
https://arxiv.org/abs/2601.03117
Academic Papers
svg
9dfd2ffa952d7fe2532d7a52a5bba6ae7cac215f5341e6b4ddfb96d8ec7d1663
2026-01-07T00:00:00-05:00
Gradient descent reliably finds depth- and gate-optimal circuits for generic unitaries
arXiv:2601.03123v1 Announce Type: cross Abstract: When the gate set has continuous parameters, synthesizing a unitary operator as a quantum circuit is always possible using exact methods, but finding minimal circuits efficiently remains a challenging problem. The landscape is very different for compiled unitaries, which arise from programming and typically have short circuits, as compared with generic unitaries, which use all parameters and typically require circuits of maximal size. We show that simple gradient descent reliably finds depth- and gate-optimal circuits for generic unitaries, including in the presence of restricted chip connectivity. This runs counter to earlier evidence that optimal synthesis required combinatorial search, and we show that this discrepancy can be explained by avoiding the random selection of certain parameter-deficient circuit skeletons.
https://arxiv.org/abs/2601.03123
Academic Papers
svg
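A toy illustration of gradient-descent unitary synthesis in the spirit of arXiv:2601.03123: fit the parameters of a fixed circuit skeleton (here a single-qubit Rz-Ry-Rz decomposition) by descending the infidelity 1 - |tr(V(theta)^dagger U)| / 2. Real instances use multi-qubit skeletons with entangling gates and connectivity constraints; this shows only the optimization principle.

import numpy as np

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def circuit(p):                                 # Rz-Ry-Rz covers SU(2) up to phase
    return rz(p[0]) @ ry(p[1]) @ rz(p[2])

def infidelity(p, U):                           # phase-insensitive distance to target
    return 1.0 - abs(np.trace(circuit(p).conj().T @ U)) / 2.0

rng = np.random.default_rng(3)
# random target unitary via QR decomposition of a complex Gaussian matrix
Q, R = np.linalg.qr(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
U = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

p, lr, h = rng.uniform(0, 2 * np.pi, 3), 0.5, 1e-6
for _ in range(500):                            # finite-difference gradient descent
    g = np.array([(infidelity(p + h * e, U) - infidelity(p, U)) / h
                  for e in np.eye(3)])
    p -= lr * g
print("final infidelity:", infidelity(p, U))    # typically driven near zero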
30b6a3ea2cf6281428eacf1a14fa7fb9395ca5da65b7ece5f3e10c6d025c2046
2026-01-07T00:00:00-05:00
A short proof of a bound on the size of finite irreducible semigroups of rational matrices
arXiv:2601.03206v1 Announce Type: cross Abstract: I give a short proof of a recent result due to Kiefer and Ryzhikov showing that a finite irreducible semigroup of $n\times n$ matrices has cardinality at most $3^{n^2}$.
https://arxiv.org/abs/2601.03206
Academic Papers
svg
63e0ab2ecf3d8b5d2a24fbe254090a907d1a5b5af57d6dbb0c71191290be954b
2026-01-07T00:00:00-05:00
Shallow-circuit Supervised Learning on a Quantum Processor
arXiv:2601.03235v1 Announce Type: cross Abstract: Quantum computing has long promised transformative advances in data analysis, yet practical quantum machine learning has remained elusive due to fundamental obstacles such as a steep quantum cost for the loading of classical data and poor trainability of many quantum machine learning algorithms designed for near-term quantum hardware. In this work, we show that one can overcome these obstacles by using a linear Hamiltonian-based machine learning method which provides a compact quantum representation of classical data via ground state problems for k-local Hamiltonians. We use the recent sample-based Krylov quantum diagonalization method to compute low-energy states of the data Hamiltonians, whose parameters are trained to express classical datasets through local gradients. We demonstrate the efficacy and scalability of the methods by performing experiments on benchmark datasets using up to 50 qubits of an IBM Heron quantum processor.
https://arxiv.org/abs/2601.03235
Academic Papers
svg