| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
2d6288b61ad5e8148f836197a8745b628fd4bf8c2c55aaede7365ecc64f45acd
|
2026-01-16T00:00:00-05:00
|
Are Language Models Efficient Reasoners? A Perspective from Logic Programming
|
arXiv:2510.25626v2 Announce Type: replace Abstract: Modern language models (LMs) exhibit strong deductive reasoning capabilities, yet standard evaluations emphasize correctness while overlooking a key aspect of reasoning: efficiency. In real-world reasoning scenarios, much of the available information is irrelevant, and effective deductive inference requires identifying and ignoring such distractions. We propose a framework for assessing LM reasoning efficiency through the lens of logic programming, introducing a simple method to align proofs written in natural language -- as generated by an LM -- with shortest proofs found by executing the logic program. Efficiency is quantified by measuring how well a model avoids unnecessary inference. Empirically, we construct a dataset of math word problems injected with various numbers of irrelevant axioms that vary in semantic overlap with the goal theorem. We find that current LMs show marked accuracy declines under such conditions -- even with minimal, domain-consistent distractions -- and the proofs they generate frequently exhibit detours through irrelevant inferences.
|
https://arxiv.org/abs/2510.25626
|
Academic Papers
|
svg
|
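As a rough, hypothetical illustration of the efficiency notion in the abstract above (not the paper's natural-language proof alignment): a naive forward-chaining prover counts the inferences it fires, and efficiency can be read as the ratio of steps needed without distractors to steps taken when irrelevant axioms are present. All facts and rules below are made up.

```python
# Toy forward-chaining prover: efficiency = steps without distractors / steps with them.
# Hypothetical facts and rules; not the paper's proof-alignment method.
def forward_chain(facts, rules, goal):
    """Saturation-style forward chaining; returns the number of rule firings."""
    known, steps, changed = set(facts), 0, True
    while changed and goal not in known:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                steps += 1
                changed = True
    return steps if goal in known else None

facts = {"a", "b"}
core_rules = [({"a"}, "c"), ({"b", "c"}, "goal")]             # shortest proof: 2 steps
distractors = [({"a"}, "x1"), ({"x1"}, "x2"), ({"b"}, "x3")]  # irrelevant axioms

clean = forward_chain(facts, core_rules, "goal")
noisy = forward_chain(facts, core_rules + distractors, "goal")
print(clean, noisy, round(clean / noisy, 2))   # e.g. 2, 5, 0.4 -> detours were taken
```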
2699c59324baa314baf6af9f1165169a9e976f3c82cae178ca190712c3114402
|
2026-01-16T00:00:00-05:00
|
ZK-SenseLM: Verifiable Large-Model Wireless Sensing with Selective Abstention and Zero-Knowledge Attestation
|
arXiv:2510.25677v3 Announce Type: replace Abstract: ZK-SenseLM is a secure and auditable wireless sensing framework that pairs a large-model encoder for Wi-Fi channel state information (and optionally mmWave radar or RFID) with a policy-grounded decision layer and end-to-end zero-knowledge proofs of inference. The encoder uses masked spectral pretraining with phase-consistency regularization, plus a light cross-modal alignment that ties RF features to compact, human-interpretable policy tokens. To reduce unsafe actions under distribution shift, we add a calibrated selective-abstention head; the chosen risk-coverage operating point is registered and bound into the proof. We implement a four-stage proving pipeline: (C1) feature sanity and commitment, (C2) threshold and version binding, (C3) time-window binding, and (C4) PLONK-style proofs that the quantized network, given the committed window, produced the logged action and confidence. Micro-batched proving amortizes cost across adjacent windows, and a gateway option offloads proofs from low-power devices. The system integrates with differentially private federated learning and on-device personalization without weakening verifiability: model hashes and the registered threshold are part of each public statement. Across activity, presence or intrusion, respiratory proxy, and RF fingerprinting tasks, ZK-SenseLM improves macro-F1 and calibration, yields favorable coverage-risk curves under perturbations, and rejects tamper and replay with compact proofs and fast verification.
|
https://arxiv.org/abs/2510.25677
|
Academic Papers
|
svg
|
7c75f1da51b1c580f0929abe6ffc3e99940a18db4cb08632e42f2cb885d16edc
|
2026-01-16T00:00:00-05:00
|
JOGS: Joint Optimization of Pose Estimation and 3D Gaussian Splatting
|
arXiv:2510.26117v2 Announce Type: replace Abstract: Traditional novel view synthesis methods heavily rely on external camera pose estimation tools such as COLMAP, which often introduce computational bottlenecks and propagate errors. To address these challenges, we propose a unified framework that jointly optimizes 3D Gaussian points and camera poses without requiring pre-calibrated inputs. Our approach iteratively refines 3D Gaussian parameters and updates camera poses through a novel co-optimization strategy, ensuring simultaneous improvements in scene reconstruction fidelity and pose estimation accuracy. The key innovation lies in decoupling the joint optimization into two interleaved phases: first, updating 3D Gaussian parameters via differentiable rendering with fixed poses, and second, refining camera poses using a customized 3D optical flow algorithm that incorporates geometric and photometric constraints. This formulation progressively reduces projection errors, particularly in challenging scenarios with large viewpoint variations and sparse feature distributions, where traditional methods struggle. Extensive evaluations on multiple datasets demonstrate that our approach significantly outperforms existing COLMAP-free techniques in reconstruction quality, and also surpasses the standard COLMAP-based baseline in general.
|
https://arxiv.org/abs/2510.26117
|
Academic Papers
|
svg
|
36d2ad211115bc5f9e3eef61641efdc9432c21e5392f611436cba4e9cf986079
|
2026-01-16T00:00:00-05:00
|
PlotCraft: Pushing the Limits of LLMs for Complex and Interactive Data Visualization
|
arXiv:2511.00010v2 Announce Type: replace Abstract: Recent Large Language Models (LLMs) have demonstrated remarkable proficiency in code generation. However, their ability to create complex visualizations for scaled and structured data remains largely unevaluated and underdeveloped. To address this gap, we introduce PlotCraft, a new benchmark featuring 1k challenging visualization tasks that cover a wide range of topics, such as finance, scientific research, and sociology. The benchmark is structured around seven high-level visualization tasks and encompasses 48 distinct chart types. Crucially, it is the first to systematically evaluate both single-turn generation and multi-turn refinement across a diverse spectrum of task complexities. Our comprehensive evaluation of 23 leading LLMs on PlotCraft reveals obvious performance deficiencies in handling sophisticated visualization tasks. To bridge this performance gap, we develop SynthVis-30K, a large-scale, high-quality dataset of complex visualization code synthesized via a collaborative agent framework. Building upon this dataset, we develop PlotCraftor, a novel code generation model that achieves strong capabilities in complex data visualization with a remarkably small size. Across VisEval, PandasPlotBench, and our proposed PlotCraft, PlotCraftor shows performance comparable to that of leading proprietary approaches. In particular, on hard tasks, our model achieves over a 50% performance improvement. We will release the benchmark, dataset, and code at https://github.com/Speakn0w/PlotCraft-Benchmark.
|
https://arxiv.org/abs/2511.00010
|
Academic Papers
|
svg
|
80d2aaaa828baf63a73c476c4115a938c49d923855734617a065c19f46a84fa4
|
2026-01-16T00:00:00-05:00
|
Bootstrap Off-policy with World Model
|
arXiv:2511.00423v3 Announce Type: replace Abstract: Online planning has proven effective in reinforcement learning (RL) for improving sample efficiency and final performance. However, using planning for environment interaction inevitably introduces a divergence between the collected data and the policy's actual behaviors, degrading both model learning and policy improvement. To address this, we propose BOOM (Bootstrap Off-policy with WOrld Model), a framework that tightly integrates planning and off-policy learning through a bootstrap loop: the policy initializes the planner, and the planner refines actions to bootstrap the policy through behavior alignment. This loop is supported by a jointly learned world model, which enables the planner to simulate future trajectories and provides value targets to facilitate policy improvement. The core of BOOM is a likelihood-free alignment loss that bootstraps the policy using the planner's non-parametric action distribution, combined with a soft value-weighted mechanism that prioritizes high-return behaviors and mitigates variability in the planner's action quality within the replay buffer. Experiments on the high-dimensional DeepMind Control Suite and Humanoid-Bench show that BOOM achieves state-of-the-art results in both training stability and final performance. The code is accessible at https://github.com/molumitu/BOOM_MBRL.
|
https://arxiv.org/abs/2511.00423
|
Academic Papers
|
svg
|
cf1d3ebff7341746a96d4cd540fd624e5fcc64e4e8e8d30820e51229fcc6d77d
|
2026-01-16T00:00:00-05:00
|
Lightweight Diffusion-based Framework for Online Imagined Speech Decoding in Aphasia
|
arXiv:2511.07920v3 Announce Type: replace Abstract: Individuals with aphasia experience severe difficulty in real-time verbal communication, while most imagined speech decoding approaches remain limited to offline analysis or computationally demanding models. To address this limitation, we propose a two-session experimental framework consisting of an offline data acquisition phase and a subsequent online feedback phase for real-time imagined speech decoding. The paradigm employed a four-class Korean-language task, including three imagined speech targets selected according to the participant's daily communicative needs and a resting-state condition, and was evaluated in a single individual with chronic anomic aphasia. Within this framework, we introduce a lightweight diffusion-based neural decoding model explicitly optimized for real-time inference, achieved through architectural simplifications such as dimensionality reduction, temporal kernel optimization, group normalization with regularization, and dual early-stopping criteria. In real-time evaluation, the proposed system achieved 65\% top-1 and 70\% top-2 accuracy, with the Water class reaching 80\% top-1 and 100\% top-2 accuracy. These results demonstrate that real-time-optimized diffusion-based architectures, combined with clinically grounded task design, can support feasible online imagined speech decoding for communication-oriented BCI applications in aphasia.
|
https://arxiv.org/abs/2511.07920
|
Academic Papers
|
svg
|
dc1ab2b53a27599ad6292fa5ab24e57c87ae22919a8ce931bdbcbc4af365eb8c
|
2026-01-16T00:00:00-05:00
|
Classification in Equilibrium: Structure of Optimal Decision Rules
|
arXiv:2511.08347v3 Announce Type: replace Abstract: This paper characterizes optimal classification when individuals adjust their behavior in response to the classification rule. We model the interaction between a designer and a population as a Stackelberg game: the designer selects a classification rule anticipating how individuals will comply, cheat, or abstain in order to obtain a favorable classification. Under standard monotone likelihood ratio assumptions, and for a general set of classification objectives, optimal rules belong to a small and interpretable family--single-threshold and two-cut rules--that encompass both conventional and counterintuitive designs. Our results depart sharply from prior findings that optimal classifiers reward higher signals. In equilibrium, global accuracy can be maximized by rewarding those with lower likelihood ratios or by concentrating rewards or penalties in a middle band to improve informational quality. We further characterize classification objectives that rule out socially harmful equilibria that disincentivize compliance for some populations.
|
https://arxiv.org/abs/2511.08347
|
Academic Papers
|
svg
|
8a8a8b5f1b30f71eda82cdf7ac85d8c06382ae6e58f9164931f5321156c8343f
|
2026-01-16T00:00:00-05:00
|
Bid Farewell to Seesaw: Towards Accurate Long-tail Session-based Recommendation via Dual Constraints of Hybrid Intents
|
arXiv:2511.08378v2 Announce Type: replace Abstract: Session-based recommendation (SBR) aims to predict anonymous users' next interaction based on their interaction sessions. In the practical recommendation scenario, low-exposure items constitute the majority of interactions, creating a long-tail distribution that severely compromises recommendation diversity. Existing approaches attempt to address this issue by promoting tail items but incur accuracy degradation, exhibiting a "see-saw" effect between long-tail and accuracy performance. We attribute such conflict to session-irrelevant noise within the tail items, which existing long-tail approaches fail to identify and constrain effectively. To resolve this fundamental conflict, we propose \textbf{HID} (\textbf{H}ybrid \textbf{I}ntent-based \textbf{D}ual Constraint Framework), a plug-and-play framework that transforms the conventional "see-saw" into "win-win" through introducing the hybrid intent-based dual constraints for both long-tail and accuracy. Two key innovations are incorporated in this framework: (i) \textit{Hybrid Intent Learning}, where we reformulate the intent extraction strategies by employing attribute-aware spectral clustering to reconstruct the item-to-intent mapping. Furthermore, discrimination of session-irrelevant noise is achieved through the assignment of the target and noise intents to each session. (ii) \textit{Intent Constraint Loss}, which incorporates two novel constraint paradigms regarding the \textit{diversity} and \textit{accuracy} to regulate the representation learning process of both items and sessions. These two objectives are unified into a single training loss through rigorous theoretical derivation. Extensive experiments across multiple SBR models and datasets demonstrate that HID can enhance both long-tail performance and recommendation accuracy, establishing new state-of-the-art performance in long-tail recommender systems.
|
https://arxiv.org/abs/2511.08378
|
Academic Papers
|
svg
|
a5212cab5f5ebc4d5fabfd4d07d5dc36423336d68e8a54df39d9680f396a4664
|
2026-01-16T00:00:00-05:00
|
Formal Verification of a Generic Algorithm for TDM Communication Over Inter Satellite Links
|
arXiv:2511.09485v2 Announce Type: replace Abstract: The Python Testbed for Federated Learning Algorithms is a simple FL framework targeting edge systems, which provides the three generic algorithms: the centralized federated learning, the decentralized federated learning, and the universal TDM communication in the current time slot. The first two were formally verified in a previous paper using the CSP process algebra, and in this paper, we use the same approach to formally verify the third one, in two phases. In the first phase, we construct the CSP model as a faithful representation of the real Python code. In the second phase, the model checker PAT automatically proves correctness of the third generic algorithm by proving its deadlock freeness (safety property) and successful termination (liveness property).
|
https://arxiv.org/abs/2511.09485
|
Academic Papers
|
svg
|
bc50ec9508e775455e7994f2422c1fc1ee3dab7ea3c12e21ad05cd18e73077a7
|
2026-01-16T00:00:00-05:00
|
Fine-grained MoE Load Balancing with Linear Programming
|
arXiv:2511.16947v2 Announce Type: replace Abstract: Mixture-of-Experts (MoE) has emerged as a promising approach to scale up deep learning models due to its significant reduction in computational resources. However, the dynamic nature of MoE leads to load imbalance among experts, severely impacting training efficiency. While previous research has attempted to address the load balancing challenge, existing solutions either compromise model accuracy or introduce additional system overhead. As a result, they fail to achieve fine-grained load balancing, which is crucial to optimizing training efficiency. We propose a novel parallelization strategy to achieve fine-grained load balancing in MoE systems. Our system is capable of achieving optimal load balancing in every micro-batch through efficient token scheduling across GPUs. Our experimental results demonstrate that MicroMoE improves the end-to-end training throughput by up to 47.6% compared with the state-of-the-art system, and almost consistently achieves optimal load balance among GPUs.
|
https://arxiv.org/abs/2511.16947
|
Academic Papers
|
svg
|
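The balancing problem sketched in the abstract above can be illustrated with a small linear program: assign each expert's token load (fractionally) across GPUs so that the maximum per-GPU load is minimized. This is a generic formulation under assumed expert loads and GPU count, not the MicroMoE scheduler itself.

```python
# Sketch: balance per-expert token loads across GPUs with a small LP
# (minimize the maximum GPU load, allowing fractional placement). Illustrative only.
import numpy as np
from scipy.optimize import linprog

loads = np.array([900.0, 420.0, 310.0, 250.0, 180.0, 90.0])  # tokens per expert (made up)
E, G = len(loads), 3                                          # experts, GPUs
n = E * G                                                     # x[e, g] flattened, plus t

c = np.zeros(n + 1)
c[-1] = 1.0                                   # objective: minimize t (the max load)

# For each GPU g: sum_e loads[e] * x[e, g] - t <= 0.
A_ub = np.zeros((G, n + 1))
for g in range(G):
    for e in range(E):
        A_ub[g, e * G + g] = loads[e]
    A_ub[g, -1] = -1.0
b_ub = np.zeros(G)

# For each expert e: sum_g x[e, g] = 1 (its tokens are fully assigned).
A_eq = np.zeros((E, n + 1))
for e in range(E):
    A_eq[e, e * G : (e + 1) * G] = 1.0
b_eq = np.ones(E)

bounds = [(0, 1)] * n + [(0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
x = res.x[:n].reshape(E, G)
print("max GPU load:", res.x[-1])             # ideally close to loads.sum() / G
print("per-GPU load:", loads @ x)
```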
5f8a42e73bc02f555e2632b482516704ff8d4af3f8077bc2fb75b3df48be204c
|
2026-01-16T00:00:00-05:00
|
Modality-Balanced Collaborative Distillation for Multi-Modal Domain Generalization
|
arXiv:2511.20258v2 Announce Type: replace Abstract: Weight Averaging (WA) has emerged as a powerful technique for enhancing generalization by promoting convergence to a flat loss landscape, which correlates with stronger out-of-distribution performance. However, applying WA directly to multi-modal domain generalization (MMDG) is challenging: differences in optimization speed across modalities lead WA to overfit to faster-converging ones in early stages, suppressing the contribution of slower yet complementary modalities, thereby hindering effective modality fusion and skewing the loss surface toward sharper, less generalizable minima. To address this issue, we propose MBCD, a unified collaborative distillation framework that retains WA's flatness-inducing advantages while overcoming its shortcomings in multi-modal contexts. MBCD begins with adaptive modality dropout in the student model to curb early-stage bias toward dominant modalities. A gradient consistency constraint then aligns learning signals between uni-modal branches and the fused representation, encouraging coordinated and smoother optimization. Finally, a WA-based teacher conducts cross-modal distillation by transferring fused knowledge to each uni-modal branch, which strengthens cross-modal interactions and steers convergence toward flatter solutions. Extensive experiments on MMDG benchmarks show that MBCD consistently outperforms existing methods, achieving superior accuracy and robustness across diverse unseen domains.
|
https://arxiv.org/abs/2511.20258
|
Academic Papers
|
svg
|
f69bf445d0da904d9bb02f3dcfdf91901cc49ac2849ab3ea8d8a4d9a07019d51
|
2026-01-16T00:00:00-05:00
|
Prototype-Guided Non-Exemplar Continual Learning for Cross-subject EEG Decoding
|
arXiv:2511.20696v2 Announce Type: replace Abstract: Due to the significant variability in electroencephalogram (EEG) signals across individuals, knowledge acquired from previous subjects is often overwritten as new subjects are introduced in continual EEG decoding tasks. Existing methods mainly rely on storing historical data from seen subjects as replay buffers to mitigate forgetting, which is impractical under privacy or memory constraints. To address this issue, we propose a Prototype-guided Non-Exemplar Continual Learning (ProNECL) framework that preserves prior knowledge without accessing historical EEG samples. ProNECL summarizes subject-specific discriminative representations into class-level prototypes and incrementally aligns new subject representations with a global prototype memory through prototype-based feature regularization and cross-subject alignment. Experiments on the BCI Competition IV 2a and 2b datasets demonstrate that ProNECL effectively balances knowledge retention and adaptability, achieving superior performance in cross-subject continual EEG decoding tasks.
|
https://arxiv.org/abs/2511.20696
|
Academic Papers
|
svg
|
4c1987683f0931402d20256abec7bda4265f0046a779f183f9e43261e8979784
|
2026-01-16T00:00:00-05:00
|
Sneak Path Current Modeling in Memristor Crossbar Arrays for Analog In-Memory Computing
|
arXiv:2511.21796v3 Announce Type: replace Abstract: Memristor crossbar arrays have emerged as a key component for next-generation non-volatile memories, artificial neural networks, and analog in-memory computing (IMC) systems. By minimizing data transfer between the processor and memory, they offer substantial energy savings. However, a major design challenge in memristor crossbar arrays is the presence of sneak path currents, which degrade electrical performance, reduce noise margins, and limit reliable operations. This work presents a closed-form analytical framework based on 1.4nm technology for accurately estimating sneak path currents in memristor crossbar arrays. The proposed model captures the interdependence of key design parameters in memristor crossbar arrays, including array size, ON/OFF ratio of memristors, read voltage, and interconnect conditions, through mathematically derived relationships. It supports various practical configurations, such as different data patterns and connection strategies, enabling rapid and comprehensive sneak path current modeling. The sensitivity analysis includes how design parameters influence sneak path current and noise margin loss, underscoring the trade-offs involved in scaling crossbar arrays. Validation through SPICE simulations shows that the model achieves an error of less than 10.9% while being up to 4784 times faster than full circuit simulations. This analytical framework offers a powerful tool for quantitative assessment and pre-design/real-time optimization of memristor-based analog in-memory computing (IMC) architectures.
|
https://arxiv.org/abs/2511.21796
|
Academic Papers
|
svg
|
770bcbc7c79c60af66d758831fc2b8b5180a047a080b511d5350a6c306864a41
|
2026-01-16T00:00:00-05:00
|
Generative Adversarial Gumbel MCTS for Abstract Visual Composition Generation
|
arXiv:2512.01242v2 Announce Type: replace Abstract: We study abstract visual composition, in which identity is primarily determined by the spatial configuration and relations among a small set of geometric primitives (e.g., parts, symmetry, topology). They are invariant primarily to texture and photorealistic detail. Composing such structures from fixed components under geometric constraints and vague goal specification (such as text) is non-trivial due to combinatorial placement choices, limited data, and discrete feasibility (overlap-free, allowable orientations), which create a sparse solution manifold ill-suited to purely statistical pixel-space generators. We propose a constraint-guided framework that combines explicit geometric reasoning with neural semantics. An AlphaGo-style search enforces feasibility, while a fine-tuned vision-language model scores semantic alignment as reward signals. Our algorithm uses a policy network as a heuristic in Monte-Carlo Tree Search and fine-tunes the network via search-generated plans. Inspired by the Generative Adversarial Network, we use the generated instances for adversarial reward refinement. Over time, the generation should approach the actual data more closely when the reward model cannot distinguish between generated instances and ground-truth. In the Tangram Assembly task, our approach yields higher validity and semantic fidelity than diffusion and auto-regressive baselines, especially as constraints tighten.
|
https://arxiv.org/abs/2512.01242
|
Academic Papers
|
svg
|
9368fc3c206a6780081b197f48d7cc47d0916c910a768b029c213b0936f1fa7e
|
2026-01-16T00:00:00-05:00
|
PrivCode: When Code Generation Meets Differential Privacy
|
arXiv:2512.05459v3 Announce Type: replace Abstract: Large language models (LLMs) have presented outstanding performance in code generation and completion. However, fine-tuning these models on private datasets can raise privacy and proprietary concerns, such as the leakage of sensitive personal information. Differentially private (DP) code generation provides theoretical guarantees for protecting sensitive code by generating synthetic datasets that preserve statistical properties while reducing privacy leakage concerns. However, DP code generation faces significant challenges due to the strict syntactic dependencies and the privacy-utility trade-off. We propose PrivCode, the first DP synthesizer specifically designed for code datasets. It incorporates a two-stage framework to improve both privacy and utility. In the first stage, termed "privacy-sanitizing", PrivCode generates DP-compliant synthetic code by training models using DP-SGD while introducing syntactic information to preserve code structure. The second stage, termed "utility-boosting", fine-tunes a larger pre-trained LLM on the synthetic privacy-free code to mitigate the utility loss caused by DP, enhancing the utility of the generated code. Extensive experiments on four LLMs show that PrivCode generates higher-utility code across various testing tasks under four benchmarks. The experiments also confirm its ability to protect sensitive data under varying privacy budgets. We provide the replication package at the anonymous link.
|
https://arxiv.org/abs/2512.05459
|
Academic Papers
|
svg
|
dfe43f33710d4c38fefd7f89f8d7e7a5ebd099d07f0d08dd761aefd4ff263279
|
2026-01-16T00:00:00-05:00
|
Mistake Notebook Learning: Batch-Clustered Failures for Training-Free Agent Adaptation
|
arXiv:2512.11485v2 Announce Type: replace Abstract: With the growing adoption of Large Language Model (LLM) agents in persistent, real-world roles, they naturally encounter continuous streams of tasks and inevitable failures. A key limitation, however, is their inability to systematically learn from these mistakes, forcing them to repeat identical errors in similar contexts. Unlike prior training-free methods that primarily store raw instance-level experience or focus on retrieving successful trajectories, we propose Mistake Notebook Learning (MNL), a novel memory framework that enables agents to self-curate generalizable guidance from batch-clustered failures. This mechanism allows agents to distill shared error patterns into structured ``mistake notes,'' updating an external memory only when batch performance improves to ensure stability. To further amplify adaptability, we integrate MNL with test-time scaling, leveraging aggregated failure patterns to actively steer the search process away from known pitfalls. Experiments on mathematical reasoning, Text-to-SQL, and interactive agent benchmarks show that MNL achieves competitive performance compared to existing memory mechanisms and in-context methods in both effectiveness and efficiency. These findings position structured mistake abstraction as a critical lever for robust agent evolution, enabling continuous improvement without the cost of parameter updates.
|
https://arxiv.org/abs/2512.11485
|
Academic Papers
|
svg
|
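A bare-bones sketch of the kind of memory loop the abstract above describes: failures are grouped by a task signature, one note is distilled per group, and the notebook update is committed only when batch performance improves. The signature scheme, note text, and evaluation hook are placeholders, not MNL's actual components.

```python
# Minimal mistake-notebook memory sketch. Signatures, notes, and evaluate() are
# placeholders; MNL's clustering and distillation are more involved.
from collections import defaultdict
from typing import Callable

class MistakeNotebook:
    def __init__(self):
        self.notes: dict[str, str] = {}           # task signature -> distilled guidance

    def guidance_for(self, signature: str) -> str:
        return self.notes.get(signature, "")

    def propose_update(self, failures: list[tuple[str, str]]) -> dict[str, str]:
        """Cluster failures by signature and distill one note per cluster."""
        clusters = defaultdict(list)
        for signature, error_msg in failures:
            clusters[signature].append(error_msg)
        return {sig: f"Avoid repeating: {msgs[0]}" for sig, msgs in clusters.items()}

    def commit_if_better(self, candidate: dict[str, str],
                         evaluate: Callable[[dict[str, str]], float]) -> bool:
        """Keep the candidate notes only if batch performance improves."""
        if evaluate({**self.notes, **candidate}) > evaluate(self.notes):
            self.notes.update(candidate)
            return True
        return False
```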
b94029c48fbcc8afe12a0599977855d95d1568a1c68cde2f56d55b04fa583ae6
|
2026-01-16T00:00:00-05:00
|
Eventually LIL Regret: Almost Sure $\ln\ln T$ Regret for a sub-Gaussian Mixture on Unbounded Data
|
arXiv:2512.12325v2 Announce Type: replace Abstract: We prove that a classic sub-Gaussian mixture proposed by Robbins in a stochastic setting actually satisfies a path-wise (deterministic) regret bound. For every path in a natural ``Ville event'' $E_\alpha$, this regret till time $T$ is bounded by $\ln^2(1/\alpha)/V_T + \ln (1/\alpha) + \ln \ln V_T$ up to universal constants, where $V_T$ is a nonnegative, nondecreasing, cumulative variance process. (The bound reduces to $\ln(1/\alpha) + \ln \ln V_T$ if $V_T \geq \ln(1/\alpha)$.) If the data were stochastic, then one can show that $E_\alpha$ has probability at least $1-\alpha$ under a wide class of distributions (eg: sub-Gaussian, symmetric, variance-bounded, etc.). In fact, we show that on the Ville event $E_0$ of probability one, the regret on every path in $E_0$ is eventually bounded by $\ln \ln V_T$ (up to constants). We explain how this work helps bridge the world of adversarial online learning (which usually deals with regret bounds for bounded data), with game-theoretic statistics (which can handle unbounded data, albeit using stochastic assumptions). In short, conditional regret bounds serve as a bridge between stochastic and adversarial betting.
|
https://arxiv.org/abs/2512.12325
|
Academic Papers
|
svg
|
802f2a15e37cfb92b76221c5ba05f40a9ede6299c556802e7fd1c593e4e11376
|
2026-01-16T00:00:00-05:00
|
Sharpen the Spec, Cut the Code: A Case for Generative File System with SYSSPEC
|
arXiv:2512.13047v3 Announce Type: replace Abstract: File systems are critical OS components that require constant evolution to support new hardware and emerging application needs. However, the traditional paradigm of developing features, fixing bugs, and maintaining the system incurs significant overhead, especially as systems grow in complexity. This paper proposes a new paradigm, generative file systems, which leverages Large Language Models (LLMs) to generate and evolve a file system from prompts, effectively addressing the need for robust evolution. Despite the widespread success of LLMs in code generation, attempts to create a functional file system have thus far been unsuccessful, mainly due to the ambiguity of natural language prompts. This paper introduces SYSSPEC, a framework for developing generative file systems. Its key insight is to replace ambiguous natural language with principles adapted from formal methods. Instead of imprecise prompts, SYSSPEC employs a multi-part specification that accurately describes a file system's functionality, modularity, and concurrency. The specification acts as an unambiguous blueprint, guiding LLMs to generate expected code flexibly. To manage evolution, we develop a DAG-structured patch that operates on the specification itself, enabling new features to be added without violating existing invariants. Moreover, the SYSSPEC toolchain features a set of LLM-based agents with mechanisms to mitigate hallucination during construction and evolution. We demonstrate our approach by generating SPECFS, a concurrent file system. SPECFS demonstrates equivalent level of correctness to that of a manually-coded baseline across hundreds of regression tests. We further confirm its evolvability by seamlessly integrating 10 real-world features from Ext4. Our work shows that a specification-guided approach makes generating and evolving complex systems not only feasible but also highly effective.
|
https://arxiv.org/abs/2512.13047
|
Academic Papers
|
svg
|
d78d550dee3ba1f7c63a03431c013137beb5a668bfa29a75c27e88e2318a758c
|
2026-01-16T00:00:00-05:00
|
Can LLMs Understand What We Cannot Say? Measuring Multilevel Alignment Through Abortion Stigma Across Cognitive, Interpersonal, and Structural Levels
|
arXiv:2512.13142v4 Announce Type: replace Abstract: As Large Language Models (LLMs) increasingly mediate stigmatized health decisions, their capacity to understand complex psychological phenomena remains inadequately assessed. Can LLMs understand what we cannot say? We investigate whether LLMs coherently represent abortion stigma across cognitive, interpersonal, and structural levels. We systematically tested 627 demographically diverse personas across five leading LLMs using the validated Individual Level Abortion Stigma Scale (ILAS), examining representation at cognitive (self-judgment), interpersonal (worries about judgment and isolation), and structural (community condemnation and disclosure patterns) levels. Models fail tests of genuine understanding across all dimensions. They underestimate cognitive stigma while overestimating interpersonal stigma, introduce demographic biases assigning higher stigma to younger, less educated, and non-White personas, and treat secrecy as universal despite 36% of humans reporting openness. Most critically, models produce internal contradictions: they overestimate isolation yet predict isolated individuals are less secretive, revealing incoherent representations. These patterns show current alignment approaches ensure appropriate language but not coherent understanding across levels. This work provides empirical evidence that LLMs lack coherent understanding of psychological constructs operating across multiple dimensions. AI safety in high-stakes contexts demands new approaches to design (multilevel coherence), evaluation (continuous auditing), governance and regulation (mandatory audits, accountability, deployment restrictions), and AI literacy in domains where understanding what people cannot say determines whether support helps or harms.
|
https://arxiv.org/abs/2512.13142
|
Academic Papers
|
svg
|
7db64e6515585925cf56ad34df236ab99c46022764c8d565e6f61399c8705eca
|
2026-01-16T00:00:00-05:00
|
Cost-Free Neutrality for the River Method
|
arXiv:2512.14409v2 Announce Type: replace Abstract: Recently, the River Method was introduced as a novel refinement of the Split Cycle voting rule. The decision-making process of River is closely related to the well-established Ranked Pairs Method. Both methods consider a margin graph computed from the voters' preferences and eliminate majority cycles in that graph to choose a winner. As ties can occur in the margin graph, a tiebreaker is required along with the preferences. While such a tiebreaker makes the computation efficient, it compromises the fundamental property of neutrality: the voting rule should not favor alternatives in advance. One way to reintroduce neutrality is to use Parallel-Universe Tiebreaking (PUT), where each alternative is a winner if it wins according to any possible tiebreaker. Unfortunately, computing the winners selected by Ranked Pairs with PUT is NP-complete. Given the similarity of River to Ranked Pairs, one might expect River to suffer from the same complexity. Surprisingly, we show the opposite: We present a polynomial-time algorithm for computing River winners with PUT, highlighting significant structural advantages of River over Ranked Pairs. Our Fused-Universe (FUN) algorithm simulates River for every possible tiebreaking in one pass. From the resulting FUN diagram one can then directly read off both the set of winners and, for each winner, a certificate that explains how this alternative dominates the others.
|
https://arxiv.org/abs/2512.14409
|
Academic Papers
|
svg
|
9dd8288c85a29217e9cb23203ac5c6e9932b0c732de0dabec00381946b3058fa
|
2026-01-16T00:00:00-05:00
|
UAV-enabled Computing Power Networks: Task Completion Probability Analysis
|
arXiv:2512.15173v2 Announce Type: replace Abstract: This paper presents an innovative framework that synergistically enhances computing performance through ubiquitous computing power distribution and dynamic computing node accessibility control via adaptive unmanned aerial vehicle (UAV) positioning, establishing UAV-enabled Computing Power Networks (UAV-CPNs). In UAV-CPNs, UAVs function as dynamic aerial relays, outsourcing tasks generated in the request zone to an expanded service zone, consisting of a diverse range of computing devices, from vehicles with onboard computational capabilities and edge servers to dedicated computing nodes. This approach has the potential to alleviate communication bottlenecks in traditional computing power networks and overcome the "island effect" observed in multi-access edge computing. However, how to quantify the network performance under the complex spatio-temporal dynamics of both communication and computing power is a significant challenge, which introduces intricacies beyond those found in conventional networks. To address this, in this paper, we introduce task completion probability as the primary performance metric for evaluating the ability of UAV-CPNs to complete ground users' tasks within specified end-to-end latency requirements. Utilizing theories from stochastic processes and stochastic geometry, we derive analytical expressions that facilitate the assessment of this metric. Our numerical results emphasize that striking a delicate balance between communication and computational capabilities is essential for enhancing the performance of UAV-CPNs. Moreover, our findings show significant performance gains from the widespread distribution of computing nodes.
|
https://arxiv.org/abs/2512.15173
|
Academic Papers
|
svg
|
f80b854eb95fa71aa9e1f8f1a1363281728ad85d01a4c6f2ee5ab409966f0b45
|
2026-01-16T00:00:00-05:00
|
TBC: A Target-Background Contrast Metric for Low-Altitude Infrared and Visible Image Fusion
|
arXiv:2512.15211v2 Announce Type: replace Abstract: Infrared and visible image fusion (IVIF) is a pivotal technology in low-altitude Unmanned Aerial Vehicle (UAV) reconnaissance missions, enabling robust target detection and tracking by integrating thermal saliency with environmental textures. However, traditional no-reference metrics (Statistics-based metrics and Gradient-based metrics) fail in complex low-light environments, a failure mode we term the ``Noise Trap''. This paper mathematically proves that these metrics are positively correlated with high-frequency sensor noise, paradoxically assigning higher scores to degraded images and misguiding algorithm optimization. To address this, we propose the Target-Background Contrast (TBC) metric. Inspired by Weber's Law, TBC focuses on the relative contrast of salient targets rather than global statistics. Unlike traditional metrics, TBC penalizes background noise and rewards target visibility. Extensive experiments on the DroneVehicle dataset demonstrate the superiority of TBC. Results show that TBC exhibits high ``Semantic Discriminability'' in distinguishing thermal targets from background clutter. Furthermore, TBC achieves remarkable computational efficiency, making it a reliable and real-time standard for intelligent UAV systems.
|
https://arxiv.org/abs/2512.15211
|
Academic Papers
|
svg
|
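The abstract above does not give TBC's exact formula, so the sketch below uses the classical Weber contrast (difference of mean target and mean background intensity, relative to the background mean) restricted to a target mask. It matches the stated intuition but is not the paper's definition; the image and mask are synthetic.

```python
# Weber-style target-background contrast on a grayscale image with a binary target
# mask. Illustrative stand-in for the TBC idea, not the paper's exact metric.
import numpy as np

def target_background_contrast(image: np.ndarray, target_mask: np.ndarray,
                               eps: float = 1e-8) -> float:
    """Higher is better: salient targets against a clean, dim background."""
    mu_t = float(image[target_mask].mean())
    mu_b = float(image[~target_mask].mean())
    return abs(mu_t - mu_b) / (mu_b + eps)     # Weber's-law style relative contrast

# Hypothetical example: a bright 10x10 target on a darker, noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, size=(128, 128)).clip(0, 1)
mask = np.zeros_like(img, dtype=bool)
mask[60:70, 60:70] = True
img[mask] = 0.8
print(round(target_background_contrast(img, mask), 3))
```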
2c8308c62a8f521089eeab44be055932ee7020fa742044f0ac094d1504703b58
|
2026-01-16T00:00:00-05:00
|
Image Complexity-Aware Adaptive Retrieval for Efficient Vision-Language Models
|
arXiv:2512.15372v2 Announce Type: replace Abstract: Vision transformers in vision-language models typically use the same amount of compute for every image, regardless of whether it is simple or complex. We propose ICAR (Image Complexity-Aware Retrieval), an adaptive computation approach that enables vision transformers to use less compute for simple images whilst processing complex images through their full network depth. The key challenge is maintaining cross-modal alignment: embeddings from different processing depths must remain compatible for text matching. ICAR solves this through dual-path training that produces compatible embeddings from both the early-exit and full-depth paths. This maintains compatibility between image representations and text embeddings in the same semantic space, whether an image exits early or processes fully. Unlike existing two-stage approaches that require expensive reranking, ICAR enables direct image-text matching without additional overhead. To determine how much compute to use, we develop ConvNeXt-IC, which treats image complexity assessment as a classification task. By applying modern classifier backbones rather than specialised architectures, ConvNeXt-IC achieves state-of-the-art performance, attaining a Pearson correlation coefficient of 0.959 with human labelling whilst delivering 4.4x faster complexity prediction. Evaluated on standard benchmarks augmented with real-world web data, ICAR achieves 20% faster image encoding while maintaining category-level performance and 95% of instance-level performance, enabling sustainable scaling of vision-language systems.
|
https://arxiv.org/abs/2512.15372
|
Academic Papers
|
svg
|
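The routing idea in the abstract above can be shown schematically: a cheap complexity score decides whether an image takes the early-exit path or the full encoder, with both paths emitting embeddings in the same space. The scorer and encoders below are trivial placeholders, not ICAR's or ConvNeXt-IC's components.

```python
# Schematic complexity-gated routing: simple images exit early, complex ones run
# the full encoder. complexity(), encode_shallow(), encode_full() are placeholders.
import numpy as np

def complexity(image: np.ndarray) -> float:
    # Cheap proxy: mean gradient magnitude (stand-in for a learned complexity scorer).
    gy, gx = np.gradient(image.astype(float))
    return float(np.hypot(gx, gy).mean())

def encode(image: np.ndarray, threshold: float, encode_shallow, encode_full) -> np.ndarray:
    path = encode_shallow if complexity(image) < threshold else encode_full
    return path(image)   # both paths must map into the same embedding space

# Example with trivial placeholder encoders:
emb = encode(np.random.default_rng(0).random((64, 64)), threshold=0.05,
             encode_shallow=lambda im: np.zeros(8),
             encode_full=lambda im: np.ones(8))
print(emb)
```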
b5a757784d0b8bf38ec32691a2fb2a7c2b8ef171d32527c6804cd9add48951f0
|
2026-01-16T00:00:00-05:00
|
Granular Ball Guided Masking: Structure-aware Data Augmentation
|
arXiv:2512.21011v2 Announce Type: replace Abstract: Deep learning models have achieved remarkable success in computer vision but still rely heavily on large-scale labeled data and tend to overfit when data is limited or distributions shift. Data augmentation -- particularly mask-based information dropping -- can enhance robustness by forcing models to explore complementary cues; however, existing approaches often lack structural awareness and risk discarding essential semantics. We propose Granular Ball Guided Masking (GBGM), a structure-aware augmentation strategy guided by Granular Ball Computing (GBC). GBGM adaptively preserves semantically rich, structurally important regions while suppressing redundant areas through a coarse-to-fine hierarchical masking process, producing augmentations that are both representative and discriminative. Extensive experiments on multiple benchmarks demonstrate consistent improvements not only in image classification and masked image reconstruction, but also in image tampering detection, validating the effectiveness and generalization of GBGM across both recognition and forensic scenarios. Simple and model-agnostic, GBGM integrates seamlessly into CNNs and Vision Transformers, offering a practical paradigm for structure-aware data augmentation.
|
https://arxiv.org/abs/2512.21011
|
Academic Papers
|
svg
|
8aec66275acb28cc243d885bca0292bf5bfdc612a982d0d6e0f5f55bbec406d2
|
2026-01-16T00:00:00-05:00
|
Five Years of SciCap: What We Learned and Future Directions for Scientific Figure Captioning
|
arXiv:2512.21789v2 Announce Type: replace Abstract: Between 2021 and 2025, the SciCap project grew from a small seed-funded idea at The Pennsylvania State University (Penn State) into one of the central efforts shaping the scientific figure-captioning landscape. Supported by a Penn State seed grant, Adobe, and the Alfred P. Sloan Foundation, what began as our attempt to test whether domain-specific training, which was successful in text models like SciBERT, could also work for figure captions expanded into a multi-institution collaboration. Over these five years, we curated, released, and continually updated a large collection of figure-caption pairs from arXiv papers, conducted extensive automatic and human evaluations on both generated and author-written captions, navigated the rapid rise of large language models (LLMs), launched annual challenges, and built interactive systems that help scientists write better captions. In this piece, we look back at the first five years of SciCap and summarize the key technical and methodological lessons we learned. We then outline five major unsolved challenges and propose directions for the next phase of research in scientific figure captioning.
|
https://arxiv.org/abs/2512.21789
|
Academic Papers
|
svg
|
3f19f28d3411039600a1fa566ff9caa071f52acdead420fad407e7825ad55f71
|
2026-01-16T00:00:00-05:00
|
Beg to Differ: Understanding Reasoning-Answer Misalignment Across Languages
|
arXiv:2512.22712v2 Announce Type: replace Abstract: Large language models demonstrate strong reasoning capabilities through chain-of-thought prompting, but whether this reasoning quality transfers across languages remains underexplored. We introduce a human-validated framework to evaluate whether model-generated reasoning traces logically support their conclusions across languages. Analyzing 65k reasoning traces from GlobalMMLU questions across 6 languages and 6 frontier models, we uncover a critical blind spot: while models achieve high task accuracy, their reasoning can fail to support their conclusions. Reasoning traces in non-Latin scripts show at least twice as much misalignment between their reasoning and conclusions than those in Latin scripts. We develop an error taxonomy through human annotation to characterize these failures, finding they stem primarily from evidential errors (unsupported claims, ambiguous facts) followed by illogical reasoning steps. Our findings demonstrate that current multilingual evaluation practices provide an incomplete picture of model reasoning capabilities and highlight the need for reasoning-aware evaluation frameworks.
|
https://arxiv.org/abs/2512.22712
|
Academic Papers
|
svg
|
6a9295a335b2120d044e65f6772f9da14a2025b05d369f8adeb996a9e6adfe69
|
2026-01-16T00:00:00-05:00
|
Wavelet-based Multi-View Fusion of 4D Radar Tensor and Camera for Robust 3D Object Detection
|
arXiv:2512.22972v2 Announce Type: replace Abstract: 4D millimeter-wave (mmWave) radar has been widely adopted in autonomous driving and robot perception due to its low cost and all-weather robustness. However, point-cloud-based radar representations suffer from information loss due to multi-stage signal processing, while directly utilizing raw 4D radar tensors incurs prohibitive computational costs. To address these challenges, we propose WRCFormer, a novel 3D object detection framework that efficiently fuses raw 4D radar cubes with camera images via decoupled multi-view radar representations. Our approach introduces two key components: (1) A Wavelet Attention Module embedded in a wavelet-based Feature Pyramid Network (FPN), which enhances the representation of sparse radar signals and image data by capturing joint spatial-frequency features, thereby mitigating information loss while maintaining computational efficiency. (2) A Geometry-guided Progressive Fusion mechanism, a two-stage query-based fusion strategy that progressively aligns multi-view radar and visual features through geometric priors, enabling modality-agnostic and efficient integration without overwhelming computational overhead. Extensive experiments on the K-Radar benchmark show that WRCFormer achieves state-of-the-art performance, surpassing the best existing model by approximately 2.4% in all scenarios and 1.6% in sleet conditions, demonstrating strong robustness in adverse weather.
|
https://arxiv.org/abs/2512.22972
|
Academic Papers
|
svg
|
65155ce27015fe0893065b2cc487e15f19c02b85e5c04c47f478af8b70c04a99
|
2026-01-16T00:00:00-05:00
|
RealCamo: Boosting Real Camouflage Synthesis with Layout Controls and Textual-Visual Guidance
|
arXiv:2512.22974v3 Announce Type: replace Abstract: Camouflaged image generation (CIG) has recently emerged as an efficient alternative for acquiring high-quality training data for camouflaged object detection (COD). However, existing CIG methods still suffer from a substantial gap to real camouflaged imagery: generated images either lack sufficient camouflage due to weak visual similarity, or exhibit cluttered backgrounds that are semantically inconsistent with foreground targets. To address these limitations, we propose RealCamo, a novel out-painting-based framework for controllable realistic camouflaged image generation. RealCamo explicitly introduces additional layout controls to regulate global image structure, thereby improving semantic coherence between foreground objects and generated backgrounds. Moreover, we construct a multimodal textual-visual condition by combining a unified fine-grained textual task description with texture-oriented background retrieval, which jointly guides the generation process to enhance visual fidelity and realism. To quantitatively assess camouflage quality, we further introduce a background-foreground distribution divergence metric that measures the effectiveness of camouflage in generated images. Extensive experiments and visualizations demonstrate the effectiveness of our proposed framework.
|
https://arxiv.org/abs/2512.22974
|
Academic Papers
|
svg
|
1ac7d9ca15ab5b4d0d5a483d4fef77cb1269946bd12da3934d6318513e447f64
|
2026-01-16T00:00:00-05:00
|
Explicit Abstention Knobs for Predictable Reliability in Video Question Answering
|
arXiv:2601.00138v2 Announce Type: replace Abstract: High-stakes deployment of vision-language models (VLMs) requires selective prediction, where systems abstain when uncertain rather than risk costly errors. We investigate whether confidence-based abstention provides reliable control over error rates in video question answering, and whether that control remains robust under distribution shift. Using NExT-QA and Gemini 2.0 Flash, we establish two findings. First, confidence thresholding provides mechanistic control in-distribution. Sweeping threshold epsilon produces smooth risk-coverage tradeoffs, reducing error rates f
|
https://arxiv.org/abs/2601.00138
|
Academic Papers
|
svg
|
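The first finding in the abstract above (confidence thresholding yields a smooth risk-coverage tradeoff) is easy to reproduce generically: abstain whenever confidence falls below a threshold epsilon, then report coverage (fraction answered) and risk (error rate among answered items). The sketch uses synthetic confidences and labels, not NExT-QA or Gemini outputs.

```python
# Risk-coverage sweep for confidence-based abstention on synthetic predictions.
import numpy as np

rng = np.random.default_rng(1)
confidence = rng.uniform(0.3, 1.0, size=2000)
# Synthetic setup: higher confidence -> more likely correct (roughly calibrated).
correct = rng.uniform(size=2000) < confidence

for eps in (0.5, 0.7, 0.9):
    answered = confidence >= eps                  # abstain below the threshold
    coverage = answered.mean()
    risk = 1.0 - correct[answered].mean()         # error rate on answered items
    print(f"eps={eps:.1f}  coverage={coverage:.2f}  risk={risk:.3f}")
```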
7eb77d46b11465477c01abde12061a6c6081cf3bd0c50335875eb0a19aa88a3e
|
2026-01-16T00:00:00-05:00
|
Entropy Production in Machine Learning Under Fokker-Planck Probability Flow
|
arXiv:2601.00554v2 Announce Type: replace Abstract: Machine learning models deployed in nonstationary environments inevitably experience performance degradation due to data drift. While numerous drift detection heuristics exist, most lack a dynamical interpretation and provide limited guidance on how retraining decisions should be balanced against operational cost. In this work, we propose an entropy-based retraining framework grounded in nonequilibrium statistical physics. Interpreting drift as probability flow governed by a Fokker-Planck equation, we quantify model-data mismatch using relative entropy and show that its time derivative admits an entropy-balance decomposition featuring a nonnegative entropy production term driven by probability currents. Guided by this theory, we implement an entropy-triggered retraining policy using an exponentially weighted moving-average (EWMA) control statistic applied to a streaming kernel density estimator of the Kullback-Leibler divergence. We evaluate this approach across multiple nonstationary data streams. In synthetic, financial, and web-traffic domains, entropy-based retraining achieves predictive performance comparable to frequent retraining while reducing retraining frequency by one to two orders of magnitude. However, in a challenging biomedical ECG setting, the entropy-based trigger underperforms the maximum-frequency baseline, highlighting limitations of feature-space entropy monitoring under complex label-conditional drift.
|
https://arxiv.org/abs/2601.00554
|
Academic Papers
|
svg
|
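A simplified version of the monitoring loop described above: estimate the KL divergence between a reference window and the current window (here with histograms rather than the paper's streaming kernel density estimator), smooth it with an EWMA, and trigger retraining when the statistic crosses a threshold. Window sizes, bin counts, and the threshold are illustrative choices.

```python
# EWMA-smoothed KL drift monitor with histogram density estimates (a simplified
# stand-in for the paper's streaming KDE). All constants are illustrative.
import numpy as np

def kl_divergence(p_samples, q_samples, bins=30, eps=1e-9):
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 2000)        # data the current model was trained on
ewma, lam, threshold = 0.0, 0.3, 0.3

for t in range(20):
    shift = 0.0 if t < 10 else 1.5            # drift starts at t = 10
    window = rng.normal(shift, 1.0, 500)
    ewma = lam * kl_divergence(window, reference) + (1 - lam) * ewma
    if ewma > threshold:
        print(f"t={t}: retrain triggered (EWMA KL = {ewma:.3f})")
        reference, ewma = window, 0.0         # retrain on recent data, reset monitor
```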
83b5a4eb11681caf5e5737513dce935611b5943e2ef7ec2d28e1c9cea649d914
|
2026-01-16T00:00:00-05:00
|
RGS-SLAM: Robust Gaussian Splatting SLAM with One-Shot Dense Initialization
|
arXiv:2601.00705v3 Announce Type: replace Abstract: We introduce RGS-SLAM, a robust Gaussian-splatting SLAM framework that replaces the residual-driven densification stage of GS-SLAM with a training-free correspondence-to-Gaussian initialization. Instead of progressively adding Gaussians as residuals reveal missing geometry, RGS-SLAM performs a one-shot triangulation of dense multi-view correspondences derived from DINOv3 descriptors refined through a confidence-aware inlier classifier, generating a well-distributed and structure-aware Gaussian seed prior to optimization. This initialization stabilizes early mapping and accelerates convergence by roughly 20\%, yielding higher rendering fidelity in texture-rich and cluttered scenes while remaining fully compatible with existing GS-SLAM pipelines. Evaluated on the TUM RGB-D and Replica datasets, RGS-SLAM achieves competitive or superior localization and reconstruction accuracy compared with state-of-the-art Gaussian and point-based SLAM systems, sustaining real-time mapping performance at up to 925 FPS. Additional details and resources are available at this URL: https://breeze1124.github.io/rgs-slam-project-page/
|
https://arxiv.org/abs/2601.00705
|
Academic Papers
|
svg
|
fa138e7bb8c4a31931800609c3f84a5d883352e6e70a618b9273e4b415fb0586
|
2026-01-16T00:00:00-05:00
|
Robust and Efficient Zeroth-Order LLM Fine-Tuning via Adaptive Bayesian Subspace Optimizer
|
arXiv:2601.01452v3 Announce Type: replace Abstract: Fine-tuning large language models (LLMs) with zeroth-order (ZO) optimization reduces memory by approximating gradients through function evaluations. However, existing methods essentially perform updates in a one-dimensional space, and suffer from collapse or substantial performance degradation under low-precision training. We introduce BSZO, an adaptive \textbf{B}ayesian \textbf{S}ubspace \textbf{Z}eroth-Order \textbf{O}ptimizer, which applies Kalman filtering to combine finite-difference information across multiple perturbation directions within a subspace. By treating each finite-difference measurement as a noisy observation, BSZO builds a posterior distribution over the subspace-projected gradient and updates it through Bayesian inference, with a residual-based adaptive mechanism to adapt to noise variations. Theoretical analysis shows that BSZO improves the convergence rate by a factor of $k/\gamma$ compared to standard ZO methods. Experiments on RoBERTa, Mistral, and OPT models show that BSZO outperforms the baselines across various tasks, achieving up to 6.67\% absolute average improvement on OPT-13B while remaining robust under fp16/bf16 precision and keeping memory usage close to inference-only baselines (1.00$\times$--1.08$\times$ of MeZO).
|
https://arxiv.org/abs/2601.01452
|
Academic Papers
|
svg
|
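For context on what "approximating gradients through function evaluations" means in the abstract above, here is the standard two-point zeroth-order estimator that MeZO-style baselines use along a single random direction; it is the one-dimensional update BSZO builds on, not BSZO's Bayesian subspace method. The loss, dimensions, and step size are made up.

```python
# Standard two-point zeroth-order gradient estimate: perturb along a random
# direction and use the function-value difference. Baseline sketch only.
import numpy as np

def zo_gradient(loss_fn, theta, rng, mu=1e-3):
    z = rng.standard_normal(theta.shape)                   # random direction
    g = (loss_fn(theta + mu * z) - loss_fn(theta - mu * z)) / (2 * mu)
    return g * z                                           # directional estimate

# Hypothetical quadratic loss; a few ZO-SGD steps should shrink it toward 0.
loss = lambda w: float(np.sum((w - 3.0) ** 2))
w = np.zeros(4)
for step in range(200):
    w -= 0.05 * zo_gradient(loss, w, np.random.default_rng(step))
print(round(loss(w), 4))
```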
7285d2aec0cb9fdb9022bf4ebde8a329d6c827de87229f942175e45e65c007da
|
2026-01-16T00:00:00-05:00
|
Bridging the gap: A comparative exploration of Speech-LLM and end-to-end architecture for multilingual conversational ASR
|
arXiv:2601.01461v2 Announce Type: replace Abstract: The INTERSPEECH 2025 Challenge on Multilingual Conversational Speech Language Models (MLC-SLM) promotes multilingual conversational ASR with large language models (LLMs). Our previous SHNU-mASR system adopted a competitive parallel-speech-encoder architecture that integrated Whisper and mHuBERT with an LLM. However, it faced two challenges: simple feature concatenation may not fully exploit complementary information, and the performance gap between LLM-based ASR and end-to-end(E2E) encoder-decoder ASR remained unexplored. In this work, we present an enhanced LLM-based ASR framework that combines fine-tuned Whisper and mHuBERT encoders with an LLM to enrich speech representations. We first evaluate E2E Whisper models with LoRA and full fine-tuning on the MLC-SLM ASR task, and then propose cross-attention-based fusion mechanisms for the parallel-speech-encoder. On the official evaluation set of the MLC-SLM Challenge, our system achieves a CER/WER of 10.69%, ranking on par with the top-ranked Track 1 systems, even though it uses only 1,500 hours of baseline training data compared with their large-scale training sets. Nonetheless, we find that our final LLM-based ASR still does not match the performance of a fine-tuned E2E Whisper model, providing valuable empirical guidance for future Speech-LLM design. Our code is publicly available at https://github.com/1535176727/MLC-SLM.
|
https://arxiv.org/abs/2601.01461
|
Academic Papers
|
svg
|
5f8fb7787dfbb808498b2d6c98a3d657c26924c22edc11856fcb1a9a1590ed1d
|
2026-01-16T00:00:00-05:00
|
Physics-Constrained Learning of Energy-Preserving Stencils for Maxwell's Equations
|
arXiv:2601.01902v3 Announce Type: replace Abstract: We study data-driven construction of spatial discretizations for the one-dimensional Maxwell system. Using high-fidelity training data from a spectral discretization, we learn a \emph{linear convolution stencil} that approximates the spatial derivative operator in Maxwell's equations. We formulate a convex quadratic program for the stencil coefficients with linear constraints that enforce skew-adjointness of the discrete derivative; these constraints guarantee a semi-discrete electromagnetic energy identity and yield a CFL condition expressed directly in terms of the stencil's Fourier symbol. We compare several convex solvers for the resulting quadratic program -- projected gradient, Nesterov-accelerated gradient, ADMM, and an interior-point reference implemented in CVXPY -- and evaluate the learned operators in time-dependent Maxwell simulations using a Crank--Nicolson (CN) discretization. Numerical experiments, including cases with nonstandard target operators and noisy training data, show that (i) energy-constrained learned stencils achieve accuracy comparable to standard central differences while exactly preserving the discrete electromagnetic energy under CN time-stepping, and (ii) ADMM and interior-point solvers produce nearly identical operators, with ADMM offering a favorable tradeoff between accuracy, constraint satisfaction, and runtime.
|
https://arxiv.org/abs/2601.01902
|
Academic Papers
|
svg
|
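A toy version of the constrained learning problem above: fit a centered convolution stencil to reproduce derivatives of sinusoidal training fields, with skew-adjointness enforced by construction (antisymmetric coefficients, s[-j] = -s[j], which makes the induced circulant operator skew-symmetric on a periodic grid). The paper instead solves a constrained QP against spectral training data; the grid, wavenumbers, and half-width here are arbitrary.

```python
# Learn an energy-preserving (skew-adjoint) derivative stencil from data by
# least squares. Toy setup, not the paper's QP formulation or solver comparison.
import numpy as np

N, h = 64, 2 * np.pi / 64
x = np.arange(N) * h
half_width = 2                                   # unknown coefficients: s[1], s[2]

# Training data: fields u = sin(kx) with exact derivative du = k cos(kx).
rows, targets = [], []
for k in (1, 2, 3):
    u, du = np.sin(k * x), k * np.cos(k * x)
    # Antisymmetric stencil applied to u: sum_j s[j] * (u[i+j] - u[i-j]) / h.
    feats = np.stack([(np.roll(u, -j) - np.roll(u, j)) / h
                      for j in range(1, half_width + 1)], axis=1)
    rows.append(feats)
    targets.append(du)

A, b = np.vstack(rows), np.concatenate(targets)
s, *_ = np.linalg.lstsq(A, b, rcond=None)        # least-squares stencil coefficients
print("learned s[1], s[2]:", np.round(s, 4))     # compare 4th-order central diff: 2/3, -1/12
```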
75822135dc2b44982964712dd11efbd1a0be2b1d062036e804837d3c7b818f5d
|
2026-01-16T00:00:00-05:00
|
Bayesian Monocular Depth Refinement via Neural Radiance Fields
|
arXiv:2601.03869v2 Announce Type: replace Abstract: Monocular depth estimation has applications in many fields, such as autonomous navigation and extended reality, making it an essential computer vision task. However, current methods often produce smooth depth maps that lack the fine geometric detail needed for accurate scene understanding. We propose MDENeRF, an iterative framework that refines monocular depth estimates using depth information from Neural Radiance Fields (NeRFs). MDENeRF consists of three components: (1) an initial monocular estimate for global structure, (2) a NeRF trained on perturbed viewpoints, with per-pixel uncertainty, and (3) Bayesian fusion of the noisy monocular and NeRF depths. We derive NeRF uncertainty from the volume rendering process to iteratively inject high-frequency fine details. Meanwhile, our monocular prior maintains global structure. We demonstrate improvements on key metrics and experiments using indoor scenes from the SUN RGB-D dataset.
|
https://arxiv.org/abs/2601.03869
|
Academic Papers
|
svg
|
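Under independent per-pixel Gaussian noise assumptions, the Bayesian fusion step described above reduces to inverse-variance (precision-weighted) averaging of the monocular and NeRF depths. The sketch shows only that closed-form step on synthetic inputs, not the full iterative MDENeRF pipeline or its NeRF-derived uncertainty.

```python
# Per-pixel inverse-variance fusion of two depth maps: the closed form for
# Bayesian fusion of independent Gaussian estimates. Synthetic inputs only.
import numpy as np

def fuse_depths(d_mono, var_mono, d_nerf, var_nerf):
    w_mono, w_nerf = 1.0 / var_mono, 1.0 / var_nerf
    fused = (w_mono * d_mono + w_nerf * d_nerf) / (w_mono + w_nerf)
    fused_var = 1.0 / (w_mono + w_nerf)           # posterior variance per pixel
    return fused, fused_var

rng = np.random.default_rng(0)
true_depth = np.full((4, 4), 2.0)
d_mono = true_depth + rng.normal(0, 0.30, (4, 4))   # smooth but coarse prior
d_nerf = true_depth + rng.normal(0, 0.05, (4, 4))   # detailed but locally noisy
fused, _ = fuse_depths(d_mono, 0.30 ** 2, d_nerf, 0.05 ** 2)
print(np.abs(fused - true_depth).mean() <= np.abs(d_mono - true_depth).mean())
```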
7bb99b0f415043c15d0da211d1f9eea9ae11f030da9f5530ac2b2b877c76c8b2
|
2026-01-16T00:00:00-05:00
|
Constrained dynamics for searching saddle points on general Riemannian manifolds
|
arXiv:2601.03931v2 Announce Type: replace Abstract: Finding constrained saddle points on Riemannian manifolds is significant for analyzing energy landscapes arising in physics and chemistry. Existing works have been limited to special manifolds that admit global regular level-set representations, excluding applications such as electronic excited-state calculations. In this paper, we develop a constrained saddle dynamics applicable to smooth functions on general Riemannian manifolds. Our dynamics is formulated compactly over the Grassmann bundle of the tangent bundle. By analyzing the Grassmann bundle geometry, we achieve universality via incorporating the second fundamental form, which captures variations of tangent spaces along the trajectory. We rigorously establish the local linear stability of the dynamics and the local linear convergence of the resulting algorithms. Remarkably, our analysis provides the first convergence guarantees for discretized saddle-search algorithms in manifold settings. Moreover, by respecting the intrinsic quotient structure, we remove unnecessary nondegeneracy assumptions on the eigenvalues of the Riemannian Hessian that are present in existing works. We also point out that locating saddle points can be more ill-conditioned than finding local minimizers, and requires using nonredundant parametrizations. Finally, numerical experiments on linear eigenvalue problems and electronic excited-state calculations showcase the effectiveness of the proposed algorithms and corroborate the established local theory.
|
https://arxiv.org/abs/2601.03931
|
Academic Papers
|
svg
|
3f6f097c07466a59c9e05f2d702bdc1c8a6c64398ec34e80d2916aa49856ee36
|
2026-01-16T00:00:00-05:00
|
Disco-RAG: Discourse-Aware Retrieval-Augmented Generation
|
arXiv:2601.04377v3 Announce Type: replace Abstract: Retrieval-Augmented Generation (RAG) has emerged as an important means of enhancing the performance of large language models (LLMs) in knowledge-intensive tasks. However, most existing RAG strategies treat retrieved passages in a flat and unstructured way, which prevents the model from capturing structural cues and constrains its ability to synthesize knowledge from dispersed evidence across documents. To overcome these limitations, we propose Disco-RAG, a discourse-aware framework that explicitly injects discourse signals into the generation process. Our method constructs intra-chunk discourse trees to capture local hierarchies and builds inter-chunk rhetorical graphs to model cross-passage coherence. These structures are jointly integrated into a planning blueprint that conditions the generation. Experiments on question answering and long-document summarization benchmarks show the efficacy of our approach. Disco-RAG achieves state-of-the-art results on the benchmarks without fine-tuning. These findings underscore the important role of discourse structure in advancing RAG systems.
|
https://arxiv.org/abs/2601.04377
|
Academic Papers
|
svg
|
d2bab7349d25c377bf1b7387f671cfe6816f6b84689ff27201c60e69ad63cae1
|
2026-01-16T00:00:00-05:00
|
Fast Mining and Dynamic Time-to-Event Prediction over Multi-sensor Data Streams
|
arXiv:2601.04741v2 Announce Type: replace Abstract: Given real-time sensor data streams obtained from machines, how can we continuously predict when a machine failure will occur? This work aims to continuously forecast the timing of future events by analyzing multi-sensor data streams. A key characteristic of real-world data streams is their dynamic nature, where the underlying patterns evolve over time. To address this, we present TimeCast, a dynamic prediction framework designed to adapt to these changes and provide accurate, real-time predictions of future event time. Our proposed method has the following properties: (a) Dynamic: it identifies the distinct time-evolving patterns (i.e., stages) and learns individual models for each, enabling us to make adaptive predictions based on pattern shifts. (b) Practical: it finds meaningful stages that capture time-varying interdependencies between multiple sensors and improve prediction performance; (c) Scalable: our algorithm scales linearly with the input size and enables online model updates on data streams. Extensive experiments on real datasets demonstrate that TimeCast provides higher prediction accuracy than state-of-the-art methods while finding dynamic changes in data streams with a great reduction in computational time.
|
https://arxiv.org/abs/2601.04741
|
Academic Papers
|
svg
|
678c333ef8468e27c548a1177b825a0dcf56a49abc0cf3d8b5c93ddaf3baa9ab
|
2026-01-16T00:00:00-05:00
|
Challenges and Research Directions for Large Language Model Inference Hardware
|
arXiv:2601.05047v2 Announce Type: replace Abstract: Large Language Model (LLM) inference is hard. The autoregressive Decode phase of the underlying Transformer model makes LLM inference fundamentally different from training. Exacerbated by recent AI trends, the primary challenges are memory and interconnect rather than compute. To address these challenges, we highlight four architecture research opportunities: High Bandwidth Flash for 10X memory capacity with HBM-like bandwidth; Processing-Near-Memory and 3D memory-logic stacking for high memory bandwidth; and low-latency interconnect to speed up communication. While our focus is datacenter AI, we also review their applicability for mobile devices.
|
https://arxiv.org/abs/2601.05047
|
Academic Papers
|
svg
|
e232b73771cf6857c669416562260e27b0a1536e0467a7f1099cafefea2ef65c
|
2026-01-16T00:00:00-05:00
|
DAVOS: An Autonomous Vehicle Operating System in the Vehicle Computing Era
|
arXiv:2601.05072v2 Announce Type: replace Abstract: Vehicle computing represents a fundamental shift in how autonomous vehicles are designed and deployed, transforming them from isolated transportation systems into mobile computing platforms that support both safety-critical, real-time driving and data-centric services. In this setting, vehicles simultaneously support real-time driving pipelines and a growing set of data-driven applications, placing increased responsibility on the vehicle operating system to coordinate computation, data movement, storage, and access. These demands highlight recurring system considerations related to predictable execution, data and execution protection, efficient handling of high-rate sensor data, and long-term system evolvability, commonly summarized as Safety, Security, Efficiency, and Extensibility (SSEE). Existing vehicle operating systems and runtimes address these concerns in isolation, resulting in fragmented software stacks that limit coordination between autonomy workloads and vehicle data services. This paper presents DAVOS, the Delaware Autonomous Vehicle Operating System, a unified vehicle operating system architecture designed for the vehicle computing context. DAVOS provides a cohesive operating system foundation that supports both real-time autonomy and extensible vehicle computing within a single system framework.
|
https://arxiv.org/abs/2601.05072
|
Academic Papers
|
svg
|
2ac709c01daf234d502187acf2b22b754817923f57e0a47dd145c08032e62719
|
2026-01-16T00:00:00-05:00
|
$PC^2$: Politically Controversial Content Generation via Jailbreaking Attacks on GPT-based Text-to-Image Models
|
arXiv:2601.05150v2 Announce Type: replace Abstract: The rapid evolution of text-to-image (T2I) models has enabled high-fidelity visual synthesis on a global scale. However, these advancements have introduced significant security risks, particularly regarding the generation of harmful content. Politically harmful content, such as fabricated depictions of public figures, poses severe threats when weaponized for fake news or propaganda. Despite its criticality, the robustness of current T2I safety filters against such politically motivated adversarial prompting remains underexplored. In response, we propose $PC^2$, the first black-box political jailbreaking framework for T2I models. It exploits a novel vulnerability where safety filters evaluate political sensitivity based on linguistic context. $PC^2$ operates through: (1) Identity-Preserving Descriptive Mapping to obfuscate sensitive keywords into neutral descriptions, and (2) Geopolitically Distal Translation to map these descriptions into fragmented, low-sensitivity languages. This strategy prevents filters from constructing toxic relationships between political entities within prompts, effectively bypassing detection. We construct a benchmark of 240 politically sensitive prompts involving 36 public figures. Evaluation on commercial T2I models, specifically GPT-series, shows that while all original prompts are blocked, $PC^2$ achieves attack success rates of up to 86%.
|
https://arxiv.org/abs/2601.05150
|
Academic Papers
|
svg
|
dfab402f9354d39fdfbf1708ffa2218f553607a298d5aa066b8a7abb8d5b7447
|
2026-01-16T00:00:00-05:00
|
Stock Market Price Prediction using Neural Prophet with Deep Neural Network
|
arXiv:2601.05202v2 Announce Type: replace Abstract: Stock market price prediction is a significant interdisciplinary research domain that lies at the intersection of finance, statistics, and economics. Accurately forecasting stock prices has always been a focal point for researchers. However, existing statistical approaches for time-series prediction often fail to effectively forecast the probability range of future stock prices. Hence, to solve this problem, the Neural Prophet with a Deep Neural Network (NP-DNN) is proposed to predict stock market prices. The preprocessing technique used in this research is Z-score normalization, which normalizes stock price data by removing scale differences, making patterns easier to detect. Missing-value imputation fills gaps in historical data, enhancing the model's use of complete information for more accurate predictions. The Multi-Layer Perceptron (MLP) learns complex nonlinear relationships among stock market prices and extracts hidden patterns from the input data, thereby creating meaningful feature representations for better prediction accuracy. The proposed NP-DNN model achieved an accuracy of 99.21% compared with other approaches using the Fused Large Language Model. Keywords: deep neural network, forecasting stock prices, multi-layer perceptron, neural prophet, stock market price prediction.
|
https://arxiv.org/abs/2601.05202
|
Academic Papers
|
svg
|
3286395af2bb96ae1ef22fb37aa53572d43319f2926ce67341c682623f91ff62
|
2026-01-16T00:00:00-05:00
|
STELP: Secure Transpilation and Execution of LLM-Generated Programs
|
arXiv:2601.05467v3 Announce Type: replace Abstract: The rapid evolution of Large Language Models (LLMs) has brought major advances in reasoning, planning, and function-calling capabilities. Multi-agentic collaborative frameworks using such LLMs place them at the center of solving software development-related tasks such as code generation. However, direct use of LLM-generated code in production software development systems is problematic. The code could be unstable or erroneous and contain vulnerabilities such as data poisoning, malicious attacks, and hallucinations that could lead to widespread system malfunctions. This prohibits the adoption of LLM-generated code in production AI systems where human code reviews and traditional secure testing tools are impractical or untrustworthy. In this paper, we discuss safety and reliability problems with the execution of LLM-generated code and propose a Secure Transpiler and Executor of LLM-Generated Program (STELP), capable of executing LLM-generated code in a controlled and safe manner. STELP secures autonomous production AI systems involving code generation, filling the critical void left by the impracticality or limitations of traditional secure testing methodologies and human oversight. This includes applications such as headless code generation-execution and LLMs that produce executable code snippets as an action plan to be executed in real time. We contribute a human-validated dataset of insecure code snippets and benchmark our approach on publicly available datasets for correctness, safety, and latency. Our results demonstrate that our approach outperforms an existing method by a significant margin, particularly in its ability to safely execute risky code snippets. Warning: This paper contains malicious code snippets that should be run with caution.
|
https://arxiv.org/abs/2601.05467
|
Academic Papers
|
svg
|
952a955f6effd090ebc79d138c763a50008e7a46522e00fac5c3422b581f45d0
|
2026-01-16T00:00:00-05:00
|
Safety Not Found (404): Hidden Risks of LLM-Based Robotics Decision Making
|
arXiv:2601.05529v2 Announce Type: replace Abstract: One mistake by an AI system in a safety-critical setting can cost lives. As Large Language Models (LLMs) become integral to robotics decision-making, the physical dimension of risk grows; a single wrong instruction can directly endanger human safety. This paper addresses the urgent need to systematically evaluate LLM performance in scenarios where even minor errors are catastrophic. Through a qualitative evaluation of a fire evacuation scenario, we identified critical failure cases in LLM-based decision-making. Based on these, we designed seven tasks for quantitative assessment, categorized into: Complete Information, Incomplete Information, and Safety-Oriented Spatial Reasoning (SOSR). Complete information tasks utilize ASCII maps to minimize interpretation ambiguity and isolate spatial reasoning from visual processing. Incomplete information tasks require models to infer missing context, testing for spatial continuity versus hallucinations. SOSR tasks use natural language to evaluate safe decision-making in life-threatening contexts. We benchmark various LLMs and Vision-Language Models (VLMs) across these tasks. Beyond aggregate performance, we analyze the implications of a 1% failure rate, highlighting how "rare" errors escalate into catastrophic outcomes. Results reveal serious vulnerabilities: several models achieved a 0% success rate in ASCII navigation, while in a simulated fire drill, models instructed robots to move toward hazardous areas instead of emergency exits. Our findings lead to a sobering conclusion: current LLMs are not ready for direct deployment in safety-critical systems. A 99% accuracy rate is dangerously misleading in robotics, as it implies one out of every hundred executions could result in catastrophic harm. We demonstrate that even state-of-the-art models cannot guarantee safety, and absolute reliance on them creates unacceptable risks.
|
https://arxiv.org/abs/2601.05529
|
Academic Papers
|
svg
|
7adf919fad3d8d219964072782bdeccb69a5b28cee8a6cc20d85d49eeacf207d
|
2026-01-16T00:00:00-05:00
|
HAG: Hierarchical Demographic Tree-based Agent Generation for Topic-Adaptive Simulation
|
arXiv:2601.05656v2 Announce Type: replace Abstract: High-fidelity agent initialization is crucial for credible Agent-Based Modeling across diverse domains. A robust framework should be Topic-Adaptive, capturing macro-level joint distributions while ensuring micro-level individual rationality. Existing approaches fall into two categories: static data-based retrieval methods that fail to adapt to unseen topics absent from the data, and LLM-based generation methods that lack macro-level distribution awareness, resulting in inconsistencies between micro-level persona attributes and reality. To address these problems, we propose HAG, a Hierarchical Agent Generation framework that formalizes population generation as a two-stage decision process. First, a World Knowledge Model is used to infer hierarchical conditional probabilities and construct the Topic-Adaptive Tree, achieving macro-level distribution alignment. Then, grounded in real-world data, instantiation and agentic augmentation are carried out to ensure micro-level consistency. Given the lack of specialized evaluation, we establish a multi-domain benchmark and a comprehensive PACE evaluation framework. Extensive experiments show that HAG significantly outperforms representative baselines, reducing population alignment errors by an average of 37.7% and enhancing sociological consistency by 18.8%.
|
https://arxiv.org/abs/2601.05656
|
Academic Papers
|
svg
|
0a22418046ef10c1818a1d1339f0333707e99d0bc386bd62ed54a2e46da6d75a
|
2026-01-16T00:00:00-05:00
|
Moonworks Lunara Aesthetic Dataset
|
arXiv:2601.07941v2 Announce Type: replace Abstract: The dataset spans diverse artistic styles, including regionally grounded aesthetics from the Middle East, Northern Europe, East Asia, and South Asia, alongside general categories such as sketch and oil painting. All images are generated using the Moonworks Lunara model and intentionally crafted to embody distinct, high-quality aesthetic styles, yielding a first-of-its-kind dataset whose aesthetic scores substantially exceed those of even aesthetics-focused datasets, and exceed general-purpose datasets by a still larger margin. Each image is accompanied by a human-refined prompt and structured annotations that jointly describe salient objects, attributes, relationships, and stylistic cues. Unlike large-scale web-derived datasets that emphasize breadth over precision, the Lunara Aesthetic Dataset prioritizes aesthetic quality, stylistic diversity, and licensing transparency, and is released under the Apache 2.0 license to support research and unrestricted academic and commercial use.
|
https://arxiv.org/abs/2601.07941
|
Academic Papers
|
svg
|
658d7b25746328d2bd9afdc26429272019f153b41a524b24a03d5cff87fbbdd6
|
2026-01-16T00:00:00-05:00
|
Cost and accuracy of long-term memory in Distributed Multi-Agent Systems based on Large Language Models
|
arXiv:2601.07978v2 Announce Type: replace Abstract: Distributed multi-agent systems (DMAS) based on large language models (LLMs) enable collaborative intelligence while preserving data privacy. However, systematic evaluations of long-term memory under network constraints are limited. This study introduces a flexible testbed to compare mem0, a vector-based memory framework, and Graphiti, a graph-based knowledge graph, using the LoCoMo long-context benchmark. Experiments were conducted under unconstrained and constrained network conditions, measuring computational, financial, and accuracy metrics. Results indicate mem0 significantly outperforms Graphiti in efficiency, featuring faster loading times, lower resource consumption, and minimal network overhead. Crucially, accuracy differences were not statistically significant. Applying a statistical Pareto efficiency framework, mem0 is identified as the optimal choice, balancing cost and accuracy in DMAS.
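The conclusion rests on a Pareto-efficiency comparison over cost and accuracy. The helper below (with made-up numbers, not the paper's measurements) shows the underlying computation: a system stays on the front only if no alternative is both cheaper and at least as accurate.

```python
def pareto_front(systems):
    """Return the cost/accuracy Pareto-efficient subset.

    systems: dict mapping name -> (cost, accuracy); lower cost and higher
    accuracy are better.
    """
    front = {}
    for name, (cost, acc) in systems.items():
        dominated = any(
            c2 <= cost and a2 >= acc and (c2 < cost or a2 > acc)
            for other, (c2, a2) in systems.items() if other != name
        )
        if not dominated:
            front[name] = (cost, acc)
    return front

# Illustrative numbers only: if mem0 is markedly cheaper and its accuracy is
# statistically indistinguishable (here: not lower), it dominates.
print(pareto_front({"mem0": (1.0, 0.72), "Graphiti": (4.5, 0.71)}))
```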
|
https://arxiv.org/abs/2601.07978
|
Academic Papers
|
svg
|
c78974e52e58e316bf9bb6fe851e42153620543aa3a46b9425213a4df16418d7
|
2026-01-16T00:00:00-05:00
|
Human-inspired Global-to-Parallel Multi-scale Encoding for Lightweight Vision Models
|
arXiv:2601.08190v2 Announce Type: replace Abstract: Lightweight vision networks have witnessed remarkable progress in recent years, yet achieving a satisfactory balance among parameter scale, computational overhead, and task performance remains difficult. Although many existing lightweight models manage to reduce computation considerably, they often do so at the expense of a substantial increase in parameter count (e.g., LSNet, MobileMamba), which still poses obstacles for deployment on resource-limited devices. In parallel, some studies attempt to draw inspiration from human visual perception, but their modeling tends to oversimplify the visual process, making it hard to reflect how perception truly operates. Revisiting the cooperative mechanism of the human visual system, we propose GPM (Global-to-Parallel Multi-scale Encoding). GPM first employs a Global Insight Generator (GIG) to extract holistic cues, and subsequently processes features of different scales through parallel branches: LSAE emphasizes mid-/large-scale semantic relations, while IRB (Inverted Residual Block) preserves fine-grained texture information, jointly enabling coherent representation of global and local features. As such, GPM conforms to two characteristic behaviors of human vision: perceiving the whole before focusing on details, and maintaining broad contextual awareness even during local attention. Built upon GPM, we further develop the lightweight H-GPE network. Experiments on image classification, object detection, and semantic segmentation show that H-GPE achieves strong performance while maintaining a balanced footprint in both FLOPs and parameters, delivering a more favorable accuracy-efficiency trade-off compared with recent state-of-the-art lightweight models.
|
https://arxiv.org/abs/2601.08190
|
Academic Papers
|
svg
|
dfa526a5ea9740f0d6e45151444ef087c93c91fbcbfe1d4b8a21a2b0c2fd9ab8
|
2026-01-16T00:00:00-05:00
|
Semantic Misalignment in Vision-Language Models under Perceptual Degradation
|
arXiv:2601.08355v2 Announce Type: replace Abstract: Vision-Language Models (VLMs) are increasingly deployed in autonomous driving and embodied AI systems, where reliable perception is critical for safe semantic reasoning and decision-making. While recent VLMs demonstrate strong performance on multimodal benchmarks, their robustness to realistic perception degradation remains poorly understood. In this work, we systematically study semantic misalignment in VLMs under controlled degradation of upstream visual perception, using semantic segmentation on the Cityscapes dataset as a representative perception module. We introduce perception-realistic corruptions that induce only moderate drops in conventional segmentation metrics, yet observe severe failures in downstream VLM behavior, including hallucinated object mentions, omission of safety-critical entities, and inconsistent safety judgments. To quantify these effects, we propose a set of language-level misalignment metrics that capture hallucination, critical omission, and safety misinterpretation, and analyze their relationship with segmentation quality across multiple contrastive and generative VLMs. Our results reveal a clear disconnect between pixel-level robustness and multimodal semantic reliability, highlighting a critical limitation of current VLM-based systems and motivating the need for evaluation frameworks that explicitly account for perception uncertainty in safety-critical applications.
|
https://arxiv.org/abs/2601.08355
|
Academic Papers
|
svg
|
3bf46539c1a62f660d4669e8cf945d0b93aac56ba15b37fd8c9a0609089f089d
|
2026-01-16T00:00:00-05:00
|
SPARK: Scalable Real-Time Point Cloud Aggregation with Multi-View Self-Calibration
|
arXiv:2601.08414v2 Announce Type: replace Abstract: Real-time multi-camera 3D reconstruction is crucial for 3D perception, immersive interaction, and robotics. Existing methods struggle with multi-view fusion, camera extrinsic uncertainty, and scalability for large camera setups. We propose SPARK, a self-calibrating real-time multi-camera point cloud reconstruction framework that jointly handles point cloud fusion and extrinsic uncertainty. SPARK consists of: (1) a geometry-aware online extrinsic estimation module leveraging multi-view priors and enforcing cross-view and temporal consistency for stable self-calibration, and (2) a confidence-driven point cloud fusion strategy modeling depth reliability and visibility at pixel and point levels to suppress noise and view-dependent inconsistencies. By performing frame-wise fusion without accumulation, SPARK produces stable point clouds in dynamic scenes while scaling linearly with the number of cameras. Extensive experiments on real-world multi-camera systems show that SPARK outperforms existing approaches in extrinsic accuracy, geometric consistency, temporal stability, and real-time performance, demonstrating its effectiveness and scalability for large-scale multi-camera 3D reconstruction.
|
https://arxiv.org/abs/2601.08414
|
Academic Papers
|
svg
|
d4319a88610153d6c8dc5957a03a060b4d179455e99e013ab55b04d01cdc0a40
|
2026-01-16T00:00:00-05:00
|
EfficientFSL: Enhancing Few-Shot Classification via Query-Only Tuning in Vision Transformers
|
arXiv:2601.08499v2 Announce Type: replace Abstract: Large models such as Vision Transformers (ViTs) have demonstrated remarkable superiority over smaller architectures like ResNet in few-shot classification, owing to their powerful representational capacity. However, fine-tuning such large models demands extensive GPU memory and prolonged training time, making them impractical for many real-world low-resource scenarios. To bridge this gap, we propose EfficientFSL, a query-only fine-tuning framework tailored specifically for few-shot classification with ViT, which achieves competitive performance while significantly reducing computational overhead. EfficientFSL fully leverages the knowledge embedded in the pre-trained model and its strong comprehension ability, achieving high classification accuracy with an extremely small number of tunable parameters. Specifically, we introduce a lightweight trainable Forward Block to synthesize task-specific queries that extract informative features from the intermediate representations of the pre-trained model in a query-only manner. We further propose a Combine Block to fuse multi-layer outputs, enhancing the depth and robustness of feature representations. Finally, a Support-Query Attention Block mitigates distribution shift by adjusting prototypes to align with the query set distribution. With minimal trainable parameters, EfficientFSL achieves state-of-the-art performance on four in-domain few-shot datasets and six cross-domain datasets, demonstrating its effectiveness in real-world applications.
|
https://arxiv.org/abs/2601.08499
|
Academic Papers
|
svg
|
43df37e7b97f3e01a93300d000979470464365bc71218741db6849821fa31a5b
|
2026-01-16T00:00:00-05:00
|
Efficient Maintenance of Leiden Communities in Large Dynamic Graphs
|
arXiv:2601.08554v2 Announce Type: replace Abstract: As a well-known community detection algorithm, Leiden has been widely used in various scenarios such as large language model generation (e.g., Graph-RAG), anomaly detection, and biological analysis. In these scenarios, the graphs are often large and dynamic, where vertices and edges are inserted and deleted frequently, so it is costly to obtain the updated communities by Leiden from scratch when the graph has changed. Recently, one work has attempted to study how to maintain Leiden communities in the dynamic graph, but it lacks a detailed theoretical analysis, and its algorithms are inefficient for large graphs. To address these issues, in this paper, we first theoretically show that the existing algorithms are relatively unbounded via the boundedness analysis (a powerful tool for analyzing incremental algorithms on dynamic graphs), and also analyze the memberships of vertices in communities when the graph changes. Based on theoretical analysis, we develop a novel efficient maintenance algorithm, called Hierarchical Incremental Tree Leiden (HIT-Leiden), which effectively reduces the range of affected vertices by maintaining the connected components and hierarchical community structures. Comprehensive experiments in various datasets demonstrate the superior performance of HIT-Leiden. In particular, it achieves speedups of up to five orders of magnitude over existing methods.
|
https://arxiv.org/abs/2601.08554
|
Academic Papers
|
svg
|
26062f187d961de9df95ff99cb04f3c5443b85f25ffdf99c8f5620a1adebb287
|
2026-01-16T00:00:00-05:00
|
Provably Safe Reinforcement Learning for Stochastic Reach-Avoid Problems with Entropy Regularization
|
arXiv:2601.08646v2 Announce Type: replace Abstract: We consider the problem of learning the optimal policy for Markov decision processes with safety constraints. We formulate the problem in a reach-avoid setup. Our goal is to design online reinforcement learning algorithms that ensure safety constraints with arbitrarily high probability during the learning phase. To this end, we first propose an algorithm based on the optimism in the face of uncertainty (OFU) principle. Based on the first algorithm, we propose our main algorithm, which utilizes entropy regularization. We investigate the finite-sample analysis of both algorithms and derive their regret bounds. We demonstrate that the inclusion of entropy regularization improves the regret and drastically controls the episode-to-episode variability that is inherent in OFU-based safe RL algorithms.
|
https://arxiv.org/abs/2601.08646
|
Academic Papers
|
svg
|
801b5426da65b0901cbb6df297fb32c01aec8dede4e912724ce7714e32a1a247
|
2026-01-16T00:00:00-05:00
|
RMBRec: Robust Multi-Behavior Recommendation towards Target Behaviors
|
arXiv:2601.08705v2 Announce Type: replace Abstract: Multi-behavior recommendation faces a critical challenge in practice: auxiliary behaviors (e.g., clicks, carts) are often noisy, weakly correlated, or semantically misaligned with the target behavior (e.g., purchase), which leads to biased preference learning and suboptimal performance. While existing methods attempt to fuse these heterogeneous signals, they inherently lack a principled mechanism to ensure robustness against such behavioral inconsistency. In this work, we propose Robust Multi-Behavior Recommendation towards Target Behaviors (RMBRec), a robust multi-behavior recommendation framework grounded in an information-theoretic robustness principle. We interpret robustness as a joint process of maximizing predictive information while minimizing its variance across heterogeneous behavioral environments. Under this perspective, the Representation Robustness Module (RRM) enhances local semantic consistency by maximizing the mutual information between users' auxiliary and target representations, whereas the Optimization Robustness Module (ORM) enforces global stability by minimizing the variance of predictive risks across behaviors, which is an efficient approximation to invariant risk minimization. This local-global collaboration bridges representation purification and optimization invariance in a theoretically coherent way. Extensive experiments on three real-world datasets demonstrate that RMBRec not only outperforms state-of-the-art methods in accuracy but also maintains remarkable stability under various noise perturbations. For reproducibility, our code is available at https://github.com/miaomiao-cai2/RMBRec/.
|
https://arxiv.org/abs/2601.08705
|
Academic Papers
|
svg
|
c0d6dd7537378d216e0b912fcddfc89b8f188f7ed43de6a34e4b4167a6701931
|
2026-01-16T00:00:00-05:00
|
Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs
|
arXiv:2601.08763v2 Announce Type: replace Abstract: Reinforcement learning (RL) has become a central paradigm for post-training large language models (LLMs), particularly for complex reasoning tasks, yet it often suffers from exploration collapse: policies prematurely concentrate on a small set of dominant reasoning patterns, improving pass@1 while limiting rollout-level diversity and gains in pass@k. We argue that this failure stems from regularizing local token behavior rather than diversity over sets of solutions. To address this, we propose Uniqueness-Aware Reinforcement Learning, a rollout-level objective that explicitly rewards correct solutions that exhibit rare high-level strategies. Our method uses an LLM-based judge to cluster rollouts for the same problem according to their high-level solution strategies, ignoring superficial variations, and reweights policy advantages inversely with cluster size. As a result, correct but novel strategies receive higher rewards than redundant ones. Across mathematics, physics, and medical reasoning benchmarks, our approach consistently improves pass@$k$ across large sampling budgets and increases the area under the pass@$k$ curve (AUC@$K$) without sacrificing pass@1, while sustaining exploration and uncovering more diverse solution strategies at scale.
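As a sketch of the uniqueness-aware reweighting described above, the snippet below assumes the rollouts for one problem have already been clustered by high-level strategy (e.g., by an LLM judge) and scales each correct rollout's advantage inversely with its cluster size. The exact weighting function and the treatment of incorrect rollouts are assumptions, not the paper's specification.

```python
from collections import Counter

def uniqueness_weighted_advantages(advantages, cluster_ids, correct):
    """Reweight per-rollout advantages so that correct rollouts from rare
    strategy clusters receive proportionally larger credit.

    advantages : list[float] - one scalar advantage per rollout
    cluster_ids: list[int]   - strategy-cluster label per rollout
    correct    : list[bool]  - whether the rollout reached a correct answer
    """
    sizes = Counter(cluster_ids)
    weighted = []
    for adv, cid, ok in zip(advantages, cluster_ids, correct):
        w = 1.0 / sizes[cid] if ok else 1.0  # inverse-frequency bonus for correct, rare strategies
        weighted.append(adv * w)
    return weighted
```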
|
https://arxiv.org/abs/2601.08763
|
Academic Papers
|
svg
|
49bb9083f169f8c5c7329e8975298681494574af0aae02bb0acf78c033e5810a
|
2026-01-16T00:00:00-05:00
|
Scalable and Reliable Evaluation of AI Knowledge Retrieval Systems: RIKER and the Coherent Simulated Universe
|
arXiv:2601.08847v2 Announce Type: replace Abstract: Evaluating knowledge systems (LLMs, RAG, knowledge graphs, etc) faces fundamental challenges: static benchmarks are vulnerable to contamination, LLM-based judges exhibit systematic biases, and ground truth extraction requires expensive human annotation. We present RIKER (Retrieval Intelligence and Knowledge Extraction Rating), both a benchmark and a replicable methodology based on paradigm inversion - generating documents from known ground truth rather than extracting ground truth from documents. This approach enables deterministic scoring and scalable evaluation without human annotation or reference models, and contamination resistance through regenerable corpora. Our evaluation of 33 models using over 21 billion tokens reveals that context length claims frequently exceed usable capacity, with significant degradation beyond 32K tokens; cross-document aggregation proves substantially harder than single-document extraction; and grounding ability and hallucination resistance are distinct capabilities - models excelling at finding facts that exist may still fabricate facts that do not. Beyond the specific benchmark, we contribute a domain-agnostic methodology for constructing scalable and contamination-resistant evaluations wherever synthetic documents can be generated from structured ground truth.
|
https://arxiv.org/abs/2601.08847
|
Academic Papers
|
svg
|
bf2fa9e7441a9a693de4867f12b788c1492a40ebfe8e7519078dd3d5da0ea111
|
2026-01-16T00:00:00-05:00
|
TranslateGemma Technical Report
|
arXiv:2601.09012v2 Announce Type: replace Abstract: We present TranslateGemma, a suite of open machine translation models based on the Gemma 3 foundation models. To enhance the inherent multilingual capabilities of Gemma 3 for the translation task, we employ a two-stage fine-tuning process. First, supervised fine-tuning is performed using a rich mixture of high-quality large-scale synthetic parallel data generated via state-of-the-art models and human-translated parallel data. This is followed by a reinforcement learning phase, where we optimize translation quality using an ensemble of reward models, including MetricX-QE and AutoMQM. We demonstrate the effectiveness of TranslateGemma with human evaluation on the WMT25 test set across 10 language pairs and with automatic evaluation on the WMT24++ benchmark across 55 language pairs. Automatic metrics show consistent and substantial gains over the baseline Gemma 3 models across all sizes. Notably, smaller TranslateGemma models often achieve performance comparable to larger baseline models, offering improved efficiency. We also show that TranslateGemma models retain strong multimodal capabilities, with enhanced performance on the Vistra image translation benchmark. The release of the open TranslateGemma models aims to provide the research community with powerful and adaptable tools for machine translation.
|
https://arxiv.org/abs/2601.09012
|
Academic Papers
|
svg
|
b59c4473b3ce72ddbe52ac4298249b34c07283af6dc391902885270e368a92d7
|
2026-01-16T00:00:00-05:00
|
How Many Human Judgments Are Enough? Feasibility Limits of Human Preference Evaluation
|
arXiv:2601.09084v2 Announce Type: replace Abstract: Human preference evaluations are widely used to compare generative models, yet it remains unclear how many judgments are required to reliably detect small improvements. We show that when preference signal is diffuse across prompts (i.e., all prompt types are similarly informative), proportional allocation is minimax-optimal: no allocation strategy substantially improves detectability. Empirical analysis of large-scale human preference datasets shows that most comparisons fall into this diffuse regime, exhibiting small preference margins that require far more judgments than typically collected, even in well-sampled comparisons. These limits persist across evaluation protocols and modalities, including chat, image generation, and code generation with execution feedback. In contrast, curated benchmarks that reduce prompt induced variability systematically induce larger margins and improve detectability through a $1.5\times$ reduction in prompt-level variance. Our results show that inconclusive or negative human evaluation outcomes frequently reflect underpowered evaluation rather than model equivalence, underscoring the need to account explicitly for effect size, budget, and protocol design.
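To make the scale of the problem concrete, here is a standard two-sided binomial power calculation (an illustrative textbook computation, not the paper's minimax analysis) for the number of pairwise judgments needed to detect a small preference margin; a diffuse 52% vs. 48% preference already requires on the order of 4,900 judgments.

```python
from math import sqrt
from scipy.stats import norm

def judgments_needed(margin, alpha=0.05, power=0.8):
    """Pairwise judgments needed to detect a win rate of 0.5 + margin against
    the null of 0.5 in a two-sided binomial test at the given power."""
    p = 0.5 + margin
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = (z_a * 0.5 + z_b * sqrt(p * (1 - p))) ** 2 / margin ** 2
    return int(n) + 1

print(judgments_needed(0.02))  # roughly 4,900 judgments for a 52% vs. 48% margin
```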
|
https://arxiv.org/abs/2601.09084
|
Academic Papers
|
svg
|
3085fce2805a67c13c6ebcde2467f01a09ca2084296f19d12d6f7de6a817511e
|
2026-01-16T00:00:00-05:00
|
DScheLLM: Enabling Dynamic Scheduling through a Fine-Tuned Dual-System Large Language Model
|
arXiv:2601.09100v2 Announce Type: replace Abstract: Production scheduling is highly susceptible to dynamic disruptions, such as variations in processing times, machine availability, and unexpected task insertions. Conventional approaches typically rely on event-specific models and explicit analytical formulations, which limits their adaptability and generalization across previously unseen disturbances. To overcome these limitations, this paper proposes DScheLLM, a dynamic scheduling approach that leverages fine-tuned large language models within a dual-system (fast-slow) reasoning architecture to address disturbances of different scales. A unified large language model-based framework is constructed to handle dynamic events, where training datasets for both fast and slow reasoning modes are generated using exact schedules obtained from an operations research solver. The Huawei OpenPangu Embedded-7B model is subsequently fine-tuned under the hybrid reasoning paradigms using LoRA. Experimental evaluations on standard job shop scheduling benchmarks demonstrate that the fast-thinking mode can efficiently generate high-quality schedules and the slow-thinking mode can produce solver-compatible and well-formatted decision inputs. To the best of our knowledge, this work represents one of the earliest studies applying large language models to job shop scheduling in dynamic environments, highlighting their considerable potential for intelligent and adaptive scheduling optimization.
|
https://arxiv.org/abs/2601.09100
|
Academic Papers
|
svg
|
1fee52038c35d3daa76f23a0e52441c35c33d6af5970a66458e330da8b4ca0c4
|
2026-01-16T00:00:00-05:00
|
Discrete Solution Operator Learning for Geometry-Dependent PDEs
|
arXiv:2601.09143v2 Announce Type: replace Abstract: Neural operator learning accelerates PDE solution by approximating operators as mappings between continuous function spaces. Yet in many engineering settings, varying geometry induces discrete structural changes, including topological changes, abrupt changes in boundary conditions or boundary types, and changes in the computational domain, which break the smooth-variation premise. Here we introduce Discrete Solution Operator Learning (DiSOL), a complementary paradigm that learns discrete solution procedures rather than continuous function-space operators. DiSOL factorizes the solver into learnable stages that mirror classical discretizations: local contribution encoding, multiscale assembly, and implicit solution reconstruction on an embedded grid, thereby preserving procedure-level consistency while adapting to geometry-dependent discrete structures. Across geometry-dependent Poisson, advection-diffusion, linear elasticity, as well as spatiotemporal heat conduction problems, DiSOL produces stable and accurate predictions under both in-distribution and strongly out-of-distribution geometries, including discontinuous boundaries and topological changes. These results highlight the need for procedural operator representations in geometry-dominated problems and position discrete solution operator learning as a distinct, complementary direction in scientific machine learning.
|
https://arxiv.org/abs/2601.09143
|
Academic Papers
|
svg
|
829743cb70b2bf89ddc278564d2d3d34ce497cb0bfc073e0358e6e5ba0425c61
|
2026-01-16T00:00:00-05:00
|
A.X K1 Technical Report
|
arXiv:2601.09200v2 Announce Type: replace Abstract: We introduce A.X K1, a 519B-parameter Mixture-of-Experts (MoE) language model trained from scratch. Our design leverages scaling laws to optimize training configurations and vocabulary size under fixed computational budgets. A.X K1 is pre-trained on a corpus of approximately 10T tokens, curated by a multi-stage data processing pipeline. Designed to bridge the gap between reasoning capability and inference efficiency, A.X K1 supports explicitly controllable reasoning to facilitate scalable deployment across diverse real-world scenarios. We propose a simple yet effective Think-Fusion training recipe, enabling user-controlled switching between thinking and non-thinking modes within a single unified model. Extensive evaluations demonstrate that A.X K1 achieves performance competitive with leading open-source models, while establishing a distinctive advantage in Korean-language benchmarks.
|
https://arxiv.org/abs/2601.09200
|
Academic Papers
|
svg
|
49403c7b62e2236ac10f15d0efda287c53fc95f944a5cf7878fe9bc25b22bfb3
|
2026-01-16T00:00:00-05:00
|
Reward Learning through Ranking Mean Squared Error
|
arXiv:2601.09236v2 Announce Type: replace Abstract: Reward design remains a significant bottleneck in applying reinforcement learning (RL) to real-world problems. A popular alternative is reward learning, where reward functions are inferred from human feedback rather than manually specified. Recent work has proposed learning reward functions from human feedback in the form of ratings, rather than traditional binary preferences, enabling richer and potentially less cognitively demanding supervision. Building on this paradigm, we introduce a new rating-based RL method, Ranked Return Regression for RL (R4). At its core, R4 employs a novel ranking mean squared error (rMSE) loss, which treats teacher-provided ratings as ordinal targets. Our approach learns from a dataset of trajectory-rating pairs, where each trajectory is labeled with a discrete rating (e.g., "bad," "neutral," "good"). At each training step, we sample a set of trajectories, predict their returns, and rank them using a differentiable sorting operator (soft ranks). We then optimize a mean squared error loss between the resulting soft ranks and the teacher's ratings. Unlike prior rating-based approaches, R4 offers formal guarantees: its solution set is provably minimal and complete under mild assumptions. Empirically, using simulated human feedback, we demonstrate that R4 consistently matches or outperforms existing rating and preference-based RL methods on robotic locomotion benchmarks from OpenAI Gym and the DeepMind Control Suite, while requiring significantly less feedback.
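A minimal PyTorch sketch of the ranking mean squared error idea: predicted returns are converted to differentiable soft ranks through pairwise sigmoids (one simple choice of differentiable sorting surrogate) and regressed onto the ordinal teacher ratings. The temperature and the way ratings are placed on the rank scale are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def soft_ranks(pred_returns, tau=0.1):
    """Differentiable ascending ranks: each trajectory's soft rank is roughly
    one plus the smoothed count of trajectories with a smaller predicted return."""
    diffs = pred_returns.unsqueeze(1) - pred_returns.unsqueeze(0)  # (B, B)
    return 1.0 + torch.sigmoid(diffs / tau).sum(dim=1) - 0.5       # drop self-comparison

def ranking_mse_loss(pred_returns, ratings, num_levels=3):
    """Regress soft ranks onto ordinal ratings (0 = bad, ..., num_levels - 1),
    after spreading the ratings over the attainable rank range [1, B]."""
    b = pred_returns.shape[0]
    targets = 1.0 + ratings.float() / (num_levels - 1) * (b - 1)
    return F.mse_loss(soft_ranks(pred_returns), targets)
```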
|
https://arxiv.org/abs/2601.09236
|
Academic Papers
|
svg
|
438830a474434ea9c64962d1fec373c74cb48df4db451c7845d56c093eb39ff1
|
2026-01-16T00:00:00-05:00
|
DSA-Tokenizer: Disentangled Semantic-Acoustic Tokenization via Flow Matching-based Hierarchical Fusion
|
arXiv:2601.09239v2 Announce Type: replace Abstract: Speech tokenizers serve as the cornerstone of discrete Speech Large Language Models (Speech LLMs). Existing tokenizers either prioritize semantic encoding, fuse semantic content with acoustic style inseparably, or achieve incomplete semantic-acoustic disentanglement. To achieve better disentanglement, we propose DSA-Tokenizer, which explicitly disentangles speech into discrete semantic and acoustic tokens via distinct optimization constraints. Specifically, semantic tokens are supervised by ASR to capture linguistic content, while acoustic tokens focus on mel-spectrogram restoration to encode style. To eliminate rigid length constraints between the two sequences, we introduce a hierarchical Flow-Matching decoder that further improves speech generation quality. Furthermore, we employ a joint reconstruction-recombination training strategy to enforce this separation. DSA-Tokenizer enables high-fidelity reconstruction and flexible recombination through robust disentanglement, facilitating controllable generation in speech LLMs. Our analysis highlights disentangled tokenization as a pivotal paradigm for future speech modeling. Audio samples are available at https://anonymous.4open.science/w/DSA_Tokenizer_demo/. The code and model will be made publicly available after the paper has been accepted.
|
https://arxiv.org/abs/2601.09239
|
Academic Papers
|
svg
|
5408cacd731a16b845f704eec69a8a9b57a757ecae02ba27d6368c0998319788
|
2026-01-16T00:00:00-05:00
|
Bias Dynamics in BabyLMs: Towards a Compute-Efficient Sandbox for Democratising Pre-Training Debiasing
|
arXiv:2601.09421v2 Announce Type: replace Abstract: Pre-trained language models (LMs) have, over the last few years, grown substantially in both societal adoption and training costs. This rapid growth in size has constrained progress in understanding and mitigating their biases. Since re-training LMs is prohibitively expensive, most debiasing work has focused on post-hoc or masking-based strategies, which often fail to address the underlying causes of bias. In this work, we seek to democratise pre-model debiasing research by using low-cost proxy models. Specifically, we investigate BabyLMs, compact BERT-like models trained on small and mutable corpora that can approximate bias acquisition and learning dynamics of larger models. We show that BabyLMs display closely aligned patterns of intrinsic bias formation and performance development compared to standard BERT models, despite their drastically reduced size. Furthermore, correlations between BabyLMs and BERT hold across multiple intra-model and post-model debiasing methods. Leveraging these similarities, we conduct pre-model debiasing experiments with BabyLMs, replicating prior findings and presenting new insights regarding the influence of gender imbalance and toxicity on bias formation. Our results demonstrate that BabyLMs can serve as an effective sandbox for large-scale LMs, reducing pre-training costs from over 500 GPU-hours to under 30 GPU-hours. This provides a way to democratise pre-model debiasing research and enables faster, more accessible exploration of methods for building fairer LMs.
|
https://arxiv.org/abs/2601.09421
|
Academic Papers
|
svg
|
65857a399c1900101d009802f054f24918f09c8c421e9858df7aca23cbffe0d9
|
2026-01-16T00:00:00-05:00
|
Bridging Semantic Understanding and Popularity Bias with LLMs
|
arXiv:2601.09478v2 Announce Type: replace Abstract: Semantic understanding of popularity bias is a crucial yet underexplored challenge in recommender systems, where popular items are often favored at the expense of niche content. Most existing debiasing methods treat the semantic understanding of popularity bias as a matter of diversity enhancement or long-tail coverage, neglecting the deeper semantic layer that embodies the causal origins of the bias itself. Consequently, such shallow interpretations limit both their debiasing effectiveness and recommendation accuracy. In this paper, we propose FairLRM, a novel framework that bridges the gap in the semantic understanding of popularity bias with Recommendation via Large Language Model (RecLLM). FairLRM decomposes popularity bias into item-side and user-side components, using structured instruction-based prompts to enhance the model's comprehension of both global item distributions and individual user preferences. Unlike traditional methods that rely on surface-level features such as "diversity" or "debiasing", FairLRM improves the model's ability to semantically interpret and address the underlying bias. Through empirical evaluation, we show that FairLRM significantly enhances both fairness and recommendation accuracy, providing a more semantically aware and trustworthy approach to enhance the semantic understanding of popularity bias. The implementation is available at https://github.com/LuoRenqiang/FairLRM.
|
https://arxiv.org/abs/2601.09478
|
Academic Papers
|
svg
|
0e1d8ea13b6aaf4d0eb73e8b6b7a43fc88038438503f65a7d8ed04c6e4963a65
|
2026-01-16T00:00:00-05:00
|
UAV-enabled Computing Power Networks: Design and Performance Analysis under Energy Constraints
|
arXiv:2601.09493v2 Announce Type: replace Abstract: This paper presents an innovative framework that boosts computing power by utilizing ubiquitous computing power distribution and enabling higher computing node accessibility via adaptive UAV positioning, establishing a UAV-enabled Computing Power Network (UAV-CPN). In a UAV-CPN, a UAV functions as a dynamic relay, outsourcing computing tasks from the request zone to an expanded service zone with diverse computing nodes, including vehicle onboard units, edge servers, and dedicated powerful nodes. This approach has the potential to alleviate communication bottlenecks and overcome the "island effect" observed in multi-access edge computing. A significant challenge is to quantify computing power performance under complex dynamics of communication and computing. To address this challenge, we introduce task completion probability to capture the capability of UAV-CPNs for task computing. We further enhance UAV-CPN performance under a hybrid energy architecture by jointly optimizing UAV altitude and transmit power, where fuel cells and batteries collectively power both UAV propulsion and communication systems. Extensive evaluations show significant performance gains, highlighting the importance of balancing communication and computing capabilities, especially under dual-energy constraints. These findings underscore the potential of UAV-CPNs to significantly boost computing power.
|
https://arxiv.org/abs/2601.09493
|
Academic Papers
|
svg
|
9e55336b442c6d727c69f15297467901798369e9513fcb994d655dda25fd85b7
|
2026-01-16T00:00:00-05:00
|
SiliconHealth: A Complete Low-Cost Blockchain Healthcare Infrastructure for Resource-Constrained Regions Using Repurposed Bitcoin Mining ASICs
|
arXiv:2601.09557v2 Announce Type: replace Abstract: This paper presents SiliconHealth, a comprehensive blockchain-based healthcare infrastructure designed for resource-constrained regions, particularly sub-Saharan Africa. We demonstrate that obsolete Bitcoin mining Application-Specific Integrated Circuits (ASICs) can be repurposed to create a secure, low-cost, and energy-efficient medical records system. The proposed architecture employs a four-tier hierarchical network: regional hospitals using Antminer S19 Pro (90+ TH/s), urban health centers with Antminer S9 (14 TH/s), rural clinics equipped with Lucky Miner LV06 (500 GH/s, 13W), and mobile health points with portable ASIC devices. We introduce the Deterministic Hardware Fingerprinting (DHF) paradigm, which repurposes SHA-256 mining ASICs as cryptographic proof generators, achieving 100% verification rate across 23 test proofs during 300-second validation sessions. The system incorporates Reed-Solomon LSB watermarking for medical image authentication with 30-40% damage tolerance, semantic Retrieval-Augmented Generation (RAG) for intelligent medical record queries, and offline synchronization protocols for intermittent connectivity. Economic analysis demonstrates 96% cost reduction compared to GPU-based alternatives, with total deployment cost of $847 per rural clinic including 5-year solar power infrastructure. Validation experiments on Lucky Miner LV06 (BM1366 chip, 5nm) achieve 2.93 MH/W efficiency and confirm hardware universality. This work establishes a practical framework for deploying verifiable, tamper-proof electronic health records in regions where traditional healthcare IT infrastructure is economically unfeasible, potentially benefiting over 600 million people lacking access to basic health information systems.
|
https://arxiv.org/abs/2601.09557
|
Academic Papers
|
svg
|
9ac69903471c3d3d5a399b96814896aa9e97638d92b9297d0ef4e6ed82717ac7
|
2026-01-16T00:00:00-05:00
|
MM-BRIGHT: A Multi-Task Multimodal Benchmark for Reasoning-Intensive Retrieval
|
arXiv:2601.09562v2 Announce Type: replace Abstract: Existing retrieval benchmarks primarily consist of text-based queries where keyword or semantic matching is usually sufficient. Many real-world queries contain multimodal elements, particularly images such as diagrams, charts, and screenshots, which require intensive reasoning to identify relevant documents. To address this gap, we introduce MM-BRIGHT, the first multimodal benchmark for reasoning-intensive retrieval. Our dataset consists of 2,803 real-world queries spanning 29 diverse technical domains, with four tasks of increasing complexity: text-to-text, multimodal-to-text, multimodal-to-image, and multimodal-to-multimodal retrieval. Extensive evaluation reveals that state-of-the-art models struggle across all tasks: BM25 achieves only 8.5 nDCG@10 on text-only retrieval, while the best multimodal model, Nomic-Vision, reaches just 27.6 nDCG@10 on multimodal-to-text retrieval, actually underperforming the best text-only model (DiVeR: 32.2). These results highlight substantial headroom and position MM-BRIGHT as a testbed for next-generation retrieval models that better integrate visual reasoning. Our code and data are available at https://github.com/mm-bright/MM-BRIGHT. See also our official website: https://mm-bright.github.io/.
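For reference, the reported metric can be computed as in the short sketch below (binary relevance, the common log2 discount); the benchmark's exact gain/discount convention is not stated in the abstract, so treat this as the standard textbook form.

```python
import math

def ndcg_at_k(ranked_doc_ids, relevant_ids, k=10):
    """nDCG@k with binary relevance: DCG of the returned ranking divided by the
    DCG of an ideal ranking that places all relevant documents first."""
    gains = [1.0 if d in relevant_ids else 0.0 for d in ranked_doc_ids[:k]]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    ideal = [1.0] * min(len(relevant_ids), k)
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Example: two of the returned documents are relevant out of three relevant overall.
print(ndcg_at_k(["d3", "d7", "d1", "d9"], {"d3", "d1", "d5"}, k=10))
```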
|
https://arxiv.org/abs/2601.09562
|
Academic Papers
|
svg
|
d494ef71f0661b575f3e5d0b1103befa2bbe7cf3ee4aa0895c3f4c30edca8640
|
2026-01-16T00:00:00-05:00
|
Collaborative Multi-Agent Test-Time Reinforcement Learning for Reasoning
|
arXiv:2601.09667v2 Announce Type: replace Abstract: Multi-agent systems have evolved into practical LLM-driven collaborators for many applications, gaining robustness from diversity and cross-checking. However, multi-agent RL (MARL) training is resource-intensive and unstable: co-adapting teammates induce non-stationarity, and rewards are often sparse and high-variance. Therefore, we introduce \textbf{Multi-Agent Test-Time Reinforcement Learning (MATTRL)}, a framework that injects structured textual experience into multi-agent deliberation at inference time. MATTRL forms a multi-expert team of specialists for multi-turn discussions, retrieves and integrates test-time experiences, and reaches consensus for final decision-making. We also study credit assignment for constructing a turn-level experience pool, then reinjecting it into the dialogue. Across challenging benchmarks in medicine, math, and education, MATTRL improves accuracy by an average of 3.67\% over a multi-agent baseline, and by 8.67\% over comparable single-agent baselines. Ablation studies examine different credit-assignment schemes and provide a detailed comparison of how they affect training outcomes. MATTRL offers a stable, effective and efficient path to distribution-shift-robust multi-agent reasoning without tuning.
|
https://arxiv.org/abs/2601.09667
|
Academic Papers
|
svg
|
60671afba18433be9377f8ec81208eacf68fdaec45980738308c274dda4e300a
|
2026-01-16T00:00:00-05:00
|
STEP3-VL-10B Technical Report
|
arXiv:2601.09668v2 Announce Type: replace Abstract: We present STEP3-VL-10B, a lightweight open-source foundation model designed to redefine the trade-off between compact efficiency and frontier-level multimodal intelligence. STEP3-VL-10B is realized through two strategic shifts: first, a unified, fully unfrozen pre-training strategy on 1.2T multimodal tokens that integrates a language-aligned Perception Encoder with a Qwen3-8B decoder to establish intrinsic vision-language synergy; and second, a scaled post-training pipeline featuring over 1k iterations of reinforcement learning. Crucially, we implement Parallel Coordinated Reasoning (PaCoRe) to scale test-time compute, allocating resources to scalable perceptual reasoning that explores and synthesizes diverse visual hypotheses. Consequently, despite its compact 10B footprint, STEP3-VL-10B rivals or surpasses models 10$\times$-20$\times$ larger (e.g., GLM-4.6V-106B, Qwen3-VL-235B) and top-tier proprietary flagships like Gemini 2.5 Pro and Seed-1.5-VL. Delivering best-in-class performance, it records 92.2% on MMBench and 80.11% on MMMU, while excelling in complex reasoning with 94.43% on AIME2025 and 75.95% on MathVision. We release the full model suite to provide the community with a powerful, efficient, and reproducible baseline.
|
https://arxiv.org/abs/2601.09668
|
Academic Papers
|
svg
|
f57d79465f2675c16b737bc813fe3bf1e207187fe6438a78eb344cb4c60f60a5
|
2026-01-16T00:00:00-05:00
|
Learning Physics-Informed Noise Models from Dark Frames for Low-Light Raw Image Denoising
|
arXiv:2310.09126v3 Announce Type: replace-cross Abstract: Recently, the mainstream practice for training low-light raw image denoising methods has shifted towards employing synthetic data. Noise modeling, which focuses on characterizing the noise distribution of real-world sensors, profoundly influences the effectiveness and practicality of synthetic data. Currently, physics-based noise modeling struggles to characterize the entire real noise distribution, while learning-based noise modeling impractically depends on paired real data. In this paper, we propose a novel strategy: learning the noise model from dark frames instead of paired real data, to break down the data dependency. Based on this strategy, we introduce an efficient physics-informed noise neural proxy (PNNP) to approximate the real-world sensor noise model. Specifically, we integrate physical priors into neural proxies and introduce three efficient techniques: physics-guided noise decoupling (PND), physics-aware proxy model (PPM), and differentiable distribution loss (DDL). PND decouples the dark frame into different components and handles different levels of noise flexibly, which reduces the complexity of noise modeling. PPM incorporates physical priors to constrain the synthetic noise, which promotes the accuracy of noise modeling. DDL provides explicit and reliable supervision for noise distribution, which promotes the precision of noise modeling. PNNP exhibits powerful potential in characterizing the real noise distribution. Extensive experiments on public datasets demonstrate superior performance in practical low-light raw image denoising. The source code will be publicly available at the project homepage.
|
https://arxiv.org/abs/2310.09126
|
Academic Papers
|
svg
|
410769c70406e7722e6bdcffcd5f26d27f48252d2fb0e21ba1636293b697df1a
|
2026-01-16T00:00:00-05:00
|
Reuniting $\chi$-boundedness with polynomial $\chi$-boundedness
|
arXiv:2310.11167v4 Announce Type: replace-cross Abstract: A class $\mathcal{F}$ of graphs is $\chi$-bounded if there is a function $f$ such that $\chi(H)\le f(\omega(H))$ for all induced subgraphs $H$ of a graph in $\mathcal{F}$. If $f$ can be chosen to be a polynomial, we say that $\mathcal{F}$ is polynomially $\chi$-bounded. Esperet proposed a conjecture that every $\chi$-bounded class of graphs is polynomially $\chi$-bounded. This conjecture has been disproved; it has been shown that there are classes of graphs that are $\chi$-bounded but not polynomially $\chi$-bounded. Nevertheless, inspired by Esperet's conjecture, we introduce Pollyanna classes of graphs. A class $\mathcal{C}$ of graphs is Pollyanna if $\mathcal{C}\cap \mathcal{F}$ is polynomially $\chi$-bounded for every $\chi$-bounded class $\mathcal{F}$ of graphs. We prove that several classes of graphs are Pollyanna and also present some proper classes of graphs that are not Pollyanna.
|
https://arxiv.org/abs/2310.11167
|
Academic Papers
|
svg
|
4b46a6949de53c2a489884edba0fa2b55de246a967e982160414014270c85651
|
2026-01-16T00:00:00-05:00
|
Arbitrary Polynomial Separations in Trainable Quantum Machine Learning
|
arXiv:2402.08606v4 Announce Type: replace-cross Abstract: Recent theoretical results in quantum machine learning have demonstrated a general trade-off between the expressive power of quantum neural networks (QNNs) and their trainability; as a corollary of these results, practical exponential separations in expressive power over classical machine learning models are believed to be infeasible as such QNNs take a time to train that is exponential in the model size. We here circumvent these negative results by constructing a hierarchy of efficiently trainable QNNs that exhibit unconditionally provable, polynomial memory separations of arbitrary constant degree over classical neural networks -- including state-of-the-art models, such as Transformers -- in performing a classical sequence modeling task. This construction is also computationally efficient, as each unit cell of the introduced class of QNNs only has constant gate complexity. We show that contextuality -- informally, a quantitative notion of semantic ambiguity -- is the source of the expressivity separation, suggesting that other learning tasks with this property may be a natural setting for the use of quantum learning algorithms.
|
https://arxiv.org/abs/2402.08606
|
Academic Papers
|
svg
|
46d183570ec6932a3c0628050de99f7260a6e49a3c2535bb4e75d33f5c5feea2
|
2026-01-16T00:00:00-05:00
|
From higher-order rewriting systems to higher-order categorial algebras and higher-order Curry-Howard isomorphisms
|
arXiv:2402.12051v2 Announce Type: replace-cross Abstract: This ongoing project aims to define and investigate, from the standpoint of category theory, order theory and universal algebra, the notions of higher-order many-sorted rewriting system and of higher-order many-sorted categorial algebra and their relationships, via the higher-order Curry-Howard isomorphisms. The ultimate goal, to be developed in future versions of this work, is to define and investigate the category of towers, whose objects will consist of families, indexed by $\mathbb{N}$, of higher-order many-sorted rewriting systems and of higher-order many-sorted categorial algebras, including higher-order Curry-Howard type results for the latter, together with an additional structure that intertwines such $\mathbb{N}$-families; and whose morphisms from one tower to another will be families, indexed by $\mathbb{N}$, of morphisms between their higher-order many-sorted rewriting systems and higher-order many-sorted categorial algebras that are compatible with their structures. All feedback is appreciated.
|
https://arxiv.org/abs/2402.12051
|
Academic Papers
|
svg
|
1cd3076b295df51bfd88267a74e408198c2ee7a34068a2a2e960cb6e3b206378
|
2026-01-16T00:00:00-05:00
|
Instance-level quantitative saliency in multiple sclerosis lesion segmentation
|
arXiv:2406.09335v3 Announce Type: replace-cross Abstract: Explainable artificial intelligence (XAI) methods have been proposed to interpret model decisions in classification and, more recently, in semantic segmentation. However, instance-level XAI for semantic segmentation, namely explanations focused on a single object among multiple instances of the same class, remains largely unexplored. Such explanations are particularly important in multi-lesional diseases to understand what drives the detection and contouring of a specific lesion. We propose instance-level explanation maps for semantic segmentation by extending SmoothGrad and Grad-CAM++ to obtain quantitative instance saliency. These methods were applied to the segmentation of white matter lesions (WMLs), a magnetic resonance imaging biomarker in multiple sclerosis. We used 4023 FLAIR and MPRAGE MRI scans from 687 patients collected at the University Hospital of Basel, Switzerland, with WML masks annotated by four expert clinicians. Three deep learning architectures, a 3D U-Net, nnU-Net, and Swin UNETR, were trained and evaluated, achieving normalized Dice scores of 0.71, 0.78, and 0.80, respectively. Instance saliency maps showed that the models relied primarily on FLAIR rather than MPRAGE for WML segmentation, with positive saliency inside lesions and negative saliency in their immediate neighborhood, consistent with clinical practice. Peak saliency values differed significantly across correct and incorrect predictions, suggesting that quantitative instance saliency may help identify segmentation errors. In conclusion, we introduce two architecture-agnostic XAI methods that provide quantitative instance-level explanations for semantic segmentation and support clinically meaningful interpretation of model decisions.
|
https://arxiv.org/abs/2406.09335
|
Academic Papers
|
svg
|
d269414d73d95531a5c475e1ce1ffd80b3e9b3b46bcfa11ac9fd237eb0c343e3
|
2026-01-16T00:00:00-05:00
|
HERMES: Holographic Equivariant neuRal network model for Mutational Effect and Stability prediction
|
arXiv:2407.06703v2 Announce Type: replace-cross Abstract: Predicting the stability and fitness effects of amino acid mutations in proteins is a cornerstone of biological discovery and engineering. Various experimental techniques have been developed to measure mutational effects, providing us with extensive datasets across a diverse range of proteins. By training on these data, traditional computational modeling and more recent machine learning approaches have advanced significantly in predicting mutational effects. Here, we introduce HERMES, a 3D rotationally equivariant structure-based neural network model for mutational effect and stability prediction. Pre-trained to predict amino acid propensity from its surrounding 3D structure, HERMES can be fine-tuned for mutational effects using our open-source code. We present a suite of HERMES models, pre-trained with different strategies, and fine-tuned to predict the stability effect of mutations. Benchmarking against other models shows that HERMES often outperforms or matches their performance in predicting mutational effect on stability, binding, and fitness. HERMES offers versatile tools for evaluating mutational effects and can be fine-tuned for specific predictive objectives.
|
https://arxiv.org/abs/2407.06703
|
Academic Papers
|
svg
|
ad70e5559e86d9cbd5c20fe2cd4e33e0a61badf0cd4362d08cbaafa21fa0c725
|
2026-01-16T00:00:00-05:00
|
Persistent Homology via Ellipsoids
|
arXiv:2408.11450v3 Announce Type: replace-cross Abstract: Persistent homology is one of the most popular methods in topological data analysis. An initial step in its use involves constructing a nested sequence of simplicial complexes. There is an abundance of different complexes to choose from, with \v{C}ech, Rips, alpha, and witness complexes being popular choices. In this manuscript, we build a novel type of geometrically informed simplicial complex, called a Rips-type ellipsoid complex. This complex is based on the idea that ellipsoids aligned with tangent directions better approximate the data compared to conventional (Euclidean) balls centered at sample points, as used in the construction of Rips and Alpha complexes. We use Principal Component Analysis to estimate tangent spaces directly from samples and present an algorithm for computing Rips-type ellipsoid barcodes, i.e., topological descriptors based on Rips-type ellipsoid complexes. Additionally, we show that the ellipsoid barcodes depend continuously on the input data so that small perturbations of a k-generic point cloud lead to proportionally small changes in the resulting ellipsoid barcodes. This provides a theoretical guarantee analogous, if somewhat weaker, to the classical stability results for Rips and \v{C}ech filtrations. We also conduct extensive experiments and compare Rips-type ellipsoid barcodes with standard Rips barcodes. Our findings indicate that Rips-type ellipsoid complexes are particularly effective for estimating the homology of manifolds and spaces with bottlenecks from samples. In particular, the persistence intervals corresponding to ground-truth topological features are longer compared to those obtained using the Rips complex of the data. Furthermore, Rips-type ellipsoid barcodes lead to better classification results in sparsely sampled point clouds. Finally, we demonstrate that Rips-type ellipsoid barcodes outperform Rips barcodes in classification tasks.
|
https://arxiv.org/abs/2408.11450
|
Academic Papers
|
svg
|
e7947d8a07a5df1431491bf4893d8fd7b6eaa11d6fc0316cb40d301ef38c33b2
|
2026-01-16T00:00:00-05:00
|
A response-adaptive multi-arm design for continuous endpoints based on a weighted information measure
|
arXiv:2409.04970v2 Announce Type: replace-cross Abstract: Multi-arm trials are gaining interest in practice given the statistical and logistical advantages they can offer. The standard approach uses a fixed allocation ratio, but there is a call for making it adaptive and skewing the allocation of patients towards better-performing arms. However, it is well-known that these approaches might suffer from lower statistical power. We present a response-adaptive design for continuous endpoints which explicitly allows one to control the trade-off between the number of patients allocated to the "optimal" arm and the statistical power. Such a balance is achieved through the calibration of a tuning parameter, and we explore robust procedures to select it. The proposed criterion is based on a context-dependent information measure which gives greater weight to treatment arms with characteristics close to a pre-specified clinical target. We establish conditions under which the procedure consistently selects the target arm and derive the corresponding limiting allocation ratios. We also introduce a simulation-based hypothesis testing procedure which focuses on selecting the target arm and discuss strategies to effectively control the type-I error rate. The practical implementation of the proposed criterion and its potential advantage over currently used alternatives are illustrated in the context of early Phase IIa proof-of-concept oncology trials.
|
https://arxiv.org/abs/2409.04970
|
Academic Papers
|
svg
|
52f544b9e5d230b441b4eb2c467a57d3e4c76005bfa35e67a44ff2838a64f56a
|
2026-01-16T00:00:00-05:00
|
Error-Minimizing Measurements in Postselected One-Shot Symmetric Quantum State Discrimination and Acceptance as a Performance Metric
|
arXiv:2409.13379v2 Announce Type: replace-cross Abstract: In hypothesis testing with quantum states, given a black box containing one of two possible states, a measurement is performed to decide in favor of one of the hypotheses. In postselected hypothesis testing, a third outcome is added, corresponding to not selecting any of the hypotheses. In the postselected scenario, minimum-error one-shot symmetric hypothesis testing has been characterized in the literature, conditioned on the event that one of the selected outcomes occurs. We proceed further in this direction and give the set of all measurements that lead to the minimum error, presenting an arbitrary error-minimizing measurement in parametric form. Note that not selecting any of the hypotheses degrades the quality of testing, and we give an example showing that these error-minimizing measurements vary in quality; this motivates a discussion of the quality of postselected hypothesis testing. We then characterize this quality by defining a new metric, acceptance, and give an expression for the acceptance of an arbitrary error-minimizing measurement in terms of parameters of the measurement. Over the set of measurements that achieve the minimum error, we maximize the acceptance and exhibit a measurement attaining this maximum, thus providing the best possible measurement in terms of acceptance.
|
https://arxiv.org/abs/2409.13379
|
Academic Papers
|
svg
|
13b49065230a5b34c5bc450efaba4b2c5fb73e615211ee2b5696a03bd5ded367
|
2026-01-16T00:00:00-05:00
|
Convex optimization with $p$-norm oracles
|
arXiv:2410.24158v2 Announce Type: replace-cross Abstract: In recent years, there have been significant advances in efficiently solving $\ell_s$-regression using linear system solvers and $\ell_2$-regression [Adil-Kyng-Peng-Sachdeva, J. ACM'24]. Would efficient smoothed $\ell_p$-norm solvers lead to even faster rates for solving $\ell_s$-regression when $2 \leq p < s$? In this paper, we give an affirmative answer to this question and show how to solve $\ell_s$-regression using $\tilde{O}(n^{\frac{\nu}{1+\nu}})$ iterations of solving smoothed $\ell_p$ regression problems, where $\nu := \frac{1}{p} - \frac{1}{s}$. To obtain this result, we provide improved accelerated rates for convex optimization problems when given access to an $\ell_p^s(\lambda)$-proximal oracle, which, for a point $c$, returns the solution of the regularized problem $\min_{x} f(x) + \lambda ||x-c||_p^s$. Additionally, we show that these rates for the $\ell_p^s(\lambda)$-proximal oracle are optimal for algorithms that query in the span of the outputs of the oracle, and we further apply our techniques to settings of high-order and quasi-self-concordant optimization.
|
https://arxiv.org/abs/2410.24158
|
Academic Papers
|
svg
|
ecb8fa4ac65b2ea20d043815746972b53194b590fac405eac95df2835c1fbf62
|
2026-01-16T00:00:00-05:00
|
Rydberg Atomic Quantum Receivers for Classical Wireless Communications and Sensing: Their Models and Performance
|
arXiv:2412.05554v3 Announce Type: replace-cross Abstract: The significant progress of quantum sensing technologies offers numerous radical solutions for measuring a multitude of physical quantities with unprecedented precision. Among them, Rydberg atomic quantum receivers (RAQRs) emerge as an eminent solution for detecting the electric field of radio frequency (RF) signals, exhibiting great potential in assisting classical wireless communications and sensing. So far, most experimental studies have aimed at proof of physical concepts to reveal their promise, while the practical signal model of RAQR-aided wireless communications and sensing has remained under-explored. Furthermore, the performance of RAQR-based wireless receivers and their advantages over classical RF receivers have not been fully characterized. To fill these gaps, we introduce the RAQR to the wireless community by presenting an end-to-end reception scheme. We then develop a corresponding equivalent baseband signal model relying on a realistic reception flow. Our scheme and model provide explicit design guidance for RAQR-aided wireless systems. We next study the performance of RAQR-aided wireless systems based on our model and compare them to classical RF receivers. The results show that Doppler broadening-free RAQRs are capable of achieving a substantial received signal-to-noise ratio (SNR) gain of over $27$ decibels (dB) and $40$ dB in the photon shot limit and standard quantum limit regimes, respectively.
|
https://arxiv.org/abs/2412.05554
|
Academic Papers
|
svg
|
089994d1a77454172d930a77203153e4f0df935f254c5230b2cbf0450d2d73cf
|
2026-01-16T00:00:00-05:00
|
Assessing fault-tolerant quantum advantage for $k$-SAT with structure
|
arXiv:2412.13274v3 Announce Type: replace-cross Abstract: For many problems, quantum algorithms promise speedups over their classical counterparts. However, these results predominantly rely on asymptotic worst-case analysis, which overlooks significant overheads due to error correction and the fact that real-world instances often contain exploitable structure. In this work, we employ the hybrid benchmarking method to evaluate the potential of quantum Backtracking and Grover's algorithm against the 2023 SAT competition main track winner in solving random $k$-SAT instances with tunable structure, designed to represent industry-like scenarios, using both $T$-depth and $T$-count as cost metrics to estimate quantum run times. Our findings reproduce the results of Campbell, Khurana, and Montanaro (Quantum '19) in the unstructured case using hybrid benchmarking. However, we offer a more sobering perspective in practically relevant regimes: almost all quantum speedups vanish, even asymptotically, when minimal structure is introduced or when $T$-count is considered instead of $T$-depth. Moreover, when the requirement is for the algorithm to find a solution within a single day, we find that only Grover's algorithm has the potential to outperform classical algorithms, but only in a very limited regime and only when using $T$-depth. We also discuss how more sophisticated heuristics could restore the asymptotic scaling advantage for quantum backtracking, but our findings suggest that the potential for practical quantum speedups in more structured $k$-SAT solving will remain limited.
|
https://arxiv.org/abs/2412.13274
|
Academic Papers
|
svg
|
680158355ab008abf655f96e1c7e7660d68351f93f456e005eca3f5b31725211
|
2026-01-16T00:00:00-05:00
|
Non-Expansive Mappings in Two-Time-Scale Stochastic Approximation: Finite-Time Analysis
|
arXiv:2501.10806v3 Announce Type: replace-cross Abstract: Two-time-scale stochastic approximation algorithms are iterative methods used in applications such as optimization, reinforcement learning, and control. Finite-time analysis of these algorithms has primarily focused on fixed point iterations where both time-scales have contractive mappings. In this work, we broaden the scope of such analyses by considering settings where the slower time-scale has a non-expansive mapping. For such algorithms, the slower time-scale can be viewed as a stochastic inexact Krasnoselskii-Mann iteration. We also study a variant where the faster time-scale has a projection step which leads to non-expansiveness in the slower time-scale. We show that the last-iterate mean square residual error for such algorithms decays at a rate $O(1/k^{1/4-\epsilon})$, where $\epsilon>0$ is arbitrarily small. We further establish almost sure convergence of iterates to the set of fixed points. We demonstrate the applicability of our framework by applying our results to minimax optimization, linear stochastic approximation, and Lagrangian optimization.
|
https://arxiv.org/abs/2501.10806
|
Academic Papers
|
svg
|
c39ba46df3705f0560df9f56f640a3bd4598a34d88739ec7e015d991c85292cf
|
2026-01-16T00:00:00-05:00
|
Topological constraints on self-organisation in locally interacting systems
|
arXiv:2501.13188v2 Announce Type: replace-cross Abstract: All intelligence is collective intelligence, in the sense that it is made of parts which must align with respect to system-level goals. Understanding the dynamics which facilitate or limit navigation of problem spaces by aligned parts thus impacts many fields ranging across life sciences and engineering. To that end, consider a system on the vertices of a planar graph, with pairwise interactions prescribed by the edges of the graph. Such systems can sometimes exhibit long-range order, distinguishing one phase of macroscopic behaviour from another. In networks of interacting systems we may view spontaneous ordering as a form of self-organisation, modelling neural and basal forms of cognition. Here, we discuss necessary conditions on the topology of the graph for an ordered phase to exist, with an eye towards finding constraints on the ability of a system with local interactions to maintain an ordered target state. By studying the scaling of free energy under the formation of domain walls in three model systems -- the Potts model, autoregressive models, and hierarchical networks -- we show how the combinatorics of interactions on a graph prevent or allow spontaneous ordering. As an application we are able to analyse why multiscale systems like those prevalent in biology are capable of organising into complex patterns, whereas rudimentary language models are challenged by long sequences of outputs.
|
https://arxiv.org/abs/2501.13188
|
Academic Papers
|
svg
|
7eb1127e7415f85f16564f3f40dece54f7d00a467800802483f2f0217017ed28
|
2026-01-16T00:00:00-05:00
|
Exploring specialization and sensitivity of convolutional neural networks in the context of simultaneous image augmentations
|
arXiv:2503.03283v2 Announce Type: replace-cross Abstract: Drawing parallels with the way biological networks are studied, we adapt the treatment--control paradigm to explainable artificial intelligence research and enrich it through multi-parametric input alterations. In this study, we propose a framework for investigating the internal inference impacted by input data augmentations. The internal changes in network operation are reflected in activation changes measured by variance, which can be decomposed into components related to each augmentation, employing Sobol indices and Shapley values. These quantities enable one to visualize sensitivity to different variables and use them for guided masking of activations. In addition, we introduce a way of single-class sensitivity analysis where the candidates are filtered according to their matching to prediction bias generated by targeted damaging of the activations. Relying on the observed parallels, we assume that the developed framework can potentially be transferred to studying biological neural networks in complex environments.
|
https://arxiv.org/abs/2503.03283
|
Academic Papers
|
svg
|
f83ff717ad9b8e1935a2ce4b2a590b690e19317863bee1d4e497e0f13cca60ac
|
2026-01-16T00:00:00-05:00
|
Probabilistic Insights for Efficient Exploration Strategies in Reinforcement Learning
|
arXiv:2503.03565v2 Announce Type: replace-cross Abstract: We investigate efficient exploration strategies for environments with unknown stochastic dynamics and sparse rewards. Specifically, we first analyze the impact of parallel simulations on the probability of reaching rare states within a finite time budget. Using simplified models based on random walks and L\'evy processes, we provide analytical results that demonstrate a phase transition in reaching probabilities as a function of the number of parallel simulations. We identify an optimal number of parallel simulations that balances exploration diversity and time allocation. Additionally, we analyze a restarting mechanism that exponentially enhances the probability of success by redirecting efforts toward more promising regions of the state space. Our findings contribute to a more qualitative and quantitative theory of some exploration schemes in reinforcement learning, offering insights into developing more efficient strategies for environments characterized by rare events.
|
https://arxiv.org/abs/2503.03565
|
Academic Papers
|
svg
|
19291cb921b1635a5003e1f0d7b36e200fe6458dd2db3954fbea9fda4eef889c
|
2026-01-16T00:00:00-05:00
|
End-to-End PET Image Reconstruction via a Posterior-Mean Diffusion Model
|
arXiv:2503.08546v2 Announce Type: replace-cross Abstract: Positron Emission Tomography (PET) is a functional imaging modality that enables the visualization of biochemical and physiological processes across various tissues. Recently, deep learning (DL)-based methods have demonstrated significant progress in directly mapping sinograms to PET images. However, regression-based DL models often yield overly smoothed reconstructions lacking detail (i.e., low distortion, low perceptual quality), whereas GAN-based and likelihood-based posterior sampling models tend to introduce undesirable artifacts in predictions (i.e., high distortion, high perceptual quality), limiting their clinical applicability. To achieve a robust perception-distortion tradeoff, we propose the Posterior-Mean Denoising Diffusion Model (PMDM-PET), a novel approach that builds upon a recently established mathematical theory to explore the closed-form expression of the perception-distortion function in diffusion model space for PET image reconstruction from sinograms. Specifically, PMDM-PET first obtains posterior-mean PET predictions under minimum mean square error (MSE) and then optimally transports their distribution to the distribution of the ground-truth PET images. Experimental results demonstrate that PMDM-PET not only generates realistic PET images with minimal distortion and optimal perceptual quality but also outperforms five recent state-of-the-art (SOTA) DL baselines in both qualitative visual inspection and the quantitative pixel-wise metrics PSNR (dB), SSIM, and NRMSE.
|
https://arxiv.org/abs/2503.08546
|
Academic Papers
|
svg
|
4a2cd789389153c332a845636257978f839a0e43d3b2e2195dcce201ee511744
|
2026-01-16T00:00:00-05:00
|
Sparse Nonparametric Contextual Bandits
|
arXiv:2503.16382v2 Announce Type: replace-cross Abstract: We study the benefits of sparsity in nonparametric contextual bandit problems, in which the set of candidate features is countably or uncountably infinite. Our contribution is two-fold. First, using a novel reduction to sequences of multi-armed bandit problems, we provide lower bounds on the minimax regret, which show that polynomial dependence on the number of actions is generally unavoidable in this setting. Second, we show that a variant of the Feel-Good Thompson Sampling algorithm enjoys regret bounds that match our lower bounds up to logarithmic factors of the horizon, and have logarithmic dependence on the effective number of candidate features. When we apply our results to kernelised and neural contextual bandits, we find that sparsity enables better regret bounds whenever the horizon is large enough relative to the sparsity and the number of actions.
|
https://arxiv.org/abs/2503.16382
|
Academic Papers
|
svg
|
28fd9ee71065988c5a070d603aba8be726a85882d5e0cb59c5a73e662f71f123
|
2026-01-16T00:00:00-05:00
|
Information-theoretic coordinate subset and partition selection of multivariate Markov chains via submodular optimization
|
arXiv:2503.23340v2 Announce Type: replace-cross Abstract: We study the problem of optimally projecting the transition matrix of a finite ergodic multivariate Markov chain onto a lower-dimensional state space, as well as the problem of finding an optimal partition of coordinates such that the factorized Markov chain gives minimal information loss compared to the original multivariate chain. Specifically, we seek to construct a Markov chain that optimizes various information-theoretic criteria under cardinality constraints. These criteria include entropy rate, information-theoretic distance to factorizability, independence, and stationarity. We formulate these tasks as best subset or partition selection problems over multivariate Markov chains and leverage the (k-)submodular (or (k-)supermodular) structures of the objective functions to develop efficient greedy-based algorithms with theoretical guarantees. Along the way, we introduce a generalized version of the distorted greedy algorithm, which may be of independent interest. Finally, we illustrate the theory and algorithms through extensive numerical experiments with publicly available code on multivariate Markov chains associated with the Bernoulli--Laplace and Curie--Weiss models.
|
https://arxiv.org/abs/2503.23340
|
Academic Papers
|
svg
|
0593f91e7757cef421837cc7c39c7f2cfe02ab21898d631e5b61ad01221a69ea
|
2026-01-16T00:00:00-05:00
|
From Ground to Sky: Architectures, Applications, and Challenges Shaping Low-Altitude Wireless Networks
|
arXiv:2506.12308v3 Announce Type: replace-cross Abstract: In this article, we introduce a novel low-altitude wireless network (LAWN), which is a reconfigurable, three-dimensional (3D) layered architecture. In particular, the LAWN integrates connectivity, sensing, control, and computing across aerial and terrestrial nodes that enable seamless operation in complex, dynamic, and mission-critical environments. Different from the conventional aerial communication systems, LAWN's distinctive feature is its tight integration of functional planes in which multiple functionalities continually reshape themselves to operate safely and efficiently in the low-altitude sky. With the LAWN, we discuss several enabling technologies, such as integrated sensing and communication (ISAC), semantic communication, and fully-actuated control systems. Finally, we identify potential applications and key cross-layer challenges. This article offers a comprehensive roadmap for future research and development in the low-altitude airspace.
|
https://arxiv.org/abs/2506.12308
|
Academic Papers
|
svg
|
d4926a00a1e5407a09ed92137af19b253937b24c39c964ff74c858a22bc3f7e1
|
2026-01-16T00:00:00-05:00
|
A large-scale heterogeneous 3D magnetic resonance brain imaging dataset for self-supervised learning
|
arXiv:2506.14432v2 Announce Type: replace-cross Abstract: We present FOMO300K, a large-scale, heterogeneous dataset of 318,877 brain Magnetic Resonance Imaging (MRI) scans from 82,678 MRI sessions and 59,969 subjects, aggregated from 920 publicly available sources. The dataset includes both clinical- and research-grade images, multiple MRI sequences, and a wide range of anatomical and pathological variability, including scans with large brain anomalies. Minimal preprocessing was applied to preserve the original image characteristics while reducing entry barriers for new users. Companion code for self-supervised pretraining and finetuning is provided, along with pretrained models. FOMO300K is intended to support the development and benchmarking of self-supervised learning methods in medical imaging at scale.
|
https://arxiv.org/abs/2506.14432
|
Academic Papers
|
svg
|
5f1affa048c92702018b213b8106dfadc491a0bfbe250e593f96dd348ab0f8dd
|
2026-01-16T00:00:00-05:00
|
Data-Driven Dynamic Factor Modeling via Manifold Learning
|
arXiv:2506.19945v2 Announce Type: replace-cross Abstract: We introduce a data-driven dynamic factor framework for modeling the joint evolution of high-dimensional covariates and responses without parametric assumptions. Standard factor models applied to covariates alone often lose explanatory power for responses. Our approach uses anisotropic diffusion maps, a manifold learning technique, to learn low-dimensional embeddings that preserve both the intrinsic geometry of the covariates and the predictive relationship with responses. For time series arising from Langevin diffusions in Euclidean space, we show that the associated graph Laplacian converges to the generator of the underlying diffusion. We further establish a bound on the approximation error between the diffusion map coordinates and linear diffusion processes, and we show that ergodic averages in the embedding space converge under standard spectral assumptions. These results justify using Kalman filtering in diffusion-map coordinates for predicting joint covariate-response evolution. We apply this methodology to equity-portfolio stress testing using macroeconomic and financial variables from Federal Reserve supervisory scenarios, achieving mean absolute error improvements of up to 55% over classical scenario analysis and 39% over principal component analysis benchmarks.
|
https://arxiv.org/abs/2506.19945
|
Academic Papers
|
svg
|
165b9bca324a167cbde0a36fbed4178398919bee5993b7526f18689d5a42487a
|
2026-01-16T00:00:00-05:00
|
Mamba Goes HoME: Hierarchical Soft Mixture-of-Experts for 3D Medical Image Segmentation
|
arXiv:2507.06363v3 Announce Type: replace-cross Abstract: In recent years, artificial intelligence has significantly advanced medical image segmentation. Nonetheless, challenges remain, including efficient 3D medical image processing across diverse modalities and handling data variability. In this work, we introduce Hierarchical Soft Mixture-of-Experts (HoME), a two-level token-routing layer for efficient long-context modeling, specifically designed for 3D medical image segmentation. Built on the Mamba Selective State Space Model (SSM) backbone, HoME enhances sequential modeling through adaptive expert routing. In the first level, a Soft Mixture-of-Experts (SMoE) layer partitions input sequences into local groups, routing tokens to specialized per-group experts for localized feature extraction. The second level aggregates these outputs through a global SMoE layer, enabling cross-group information fusion and global context refinement. This hierarchical design, combining local expert routing with global expert refinement, enhances generalizability and segmentation performance, surpassing state-of-the-art results across datasets from the three most widely used 3D medical imaging modalities and varying data qualities. The code is publicly available at https://github.com/gmum/MambaHoME.
|
https://arxiv.org/abs/2507.06363
|
Academic Papers
|
svg
|
6eca5de14172a7debd744a9f4264393b0d6892ef5078ae2cdaf6dec62a7337ec
|
2026-01-16T00:00:00-05:00
|
prNet: Data-Driven Phase Retrieval via Stochastic Refinement
|
arXiv:2507.09608v2 Announce Type: replace-cross Abstract: Phase retrieval is an ill-posed inverse problem in which classical and deep learning-based methods struggle to jointly achieve measurement fidelity and perceptual realism. We propose a novel framework for phase retrieval that leverages Langevin dynamics to enable efficient posterior sampling, yielding reconstructions that explicitly balance distortion and perceptual quality. Unlike conventional approaches that prioritize pixel-wise accuracy, our methods navigate the perception-distortion tradeoff through a principled combination of stochastic sampling, learned denoising, and model-based updates. The framework comprises three variants of increasing complexity, integrating theoretically grounded Langevin inference, adaptive noise schedule learning, parallel reconstruction sampling, and warm-start initialization from classical solvers. Extensive experiments demonstrate that our methods achieve state-of-the-art performance across multiple benchmarks, both in terms of fidelity and perceptual quality. The source code and trained models are available at https://github.com/METU-SPACE-Lab/prNet-for-Phase-Retrieval
|
https://arxiv.org/abs/2507.09608
|
Academic Papers
|
svg
|
eb1823bb5c04591b0aed78f724381f5aa5f88db5f87d7350679d6ddf54ea0ed0
|
2026-01-16T00:00:00-05:00
|
Life Finds A Way: Emergence of Cooperative Structures in Adaptive Threshold Networks
|
arXiv:2507.13253v3 Announce Type: replace-cross Abstract: There has been a long debate on how new levels of organization have evolved. It might seem unlikely, as cooperation must prevail over competition. One well-studied example is the emergence of autocatalytic sets, which seem to be a prerequisite for the evolution of life. Using a simple model, we investigate how varying bias toward cooperation versus antagonism shapes network dynamics, revealing that higher-order organization emerges even amid pervasive antagonistic interactions. In general, we observe that a quantitative increase in the number of elements in a system leads to a qualitative transition. We present a random threshold-directed network model that integrates node-specific traits with dynamic edge formation and node removal, simulating arbitrary levels of cooperation and competition. In our framework, intrinsic node values determine directed links through various threshold rules. Our model generates a multi-digraph with signed edges (reflecting support/antagonism, labeled ``help''/``harm''), which ultimately yields two parallel yet interdependent threshold graphs. Incorporating temporal growth and node turnover in our approach allows exploration of the evolution, adaptation, and potential collapse of communities and reveals phase transitions in both connectivity and resilience. Our findings extend classical random threshold and Erd\H{o}s-R\'enyi models, offering new insights into adaptive systems in biological and economic contexts, with emphasis on the application to Collective Affordance Sets. This framework should also be useful for making predictions that will be tested by ongoing experiments of microbial communities in soil.
|
https://arxiv.org/abs/2507.13253
|
Academic Papers
|
svg
|
56615377c4aa501b7b37b838d4a004d5d1e7cf4786b7ddacd9aa53f284aa6019
|
2026-01-16T00:00:00-05:00
|
Quantum circuit complexity and unsupervised machine learning of topological order
|
arXiv:2508.04486v2 Announce Type: replace-cross Abstract: Inspired by the close relationship between Kolmogorov complexity and unsupervised machine learning, we explore quantum circuit complexity, an important concept in quantum computation and quantum information science, as a pivot to understand and to build interpretable and efficient unsupervised machine learning for topological order in quantum many-body systems. We argue that Nielsen's quantum circuit complexity represents an intrinsic topological distance between topological quantum many-body phases of matter, and as such plays a central role in interpretable manifold learning of topological order. To span a bridge from conceptual power to practical applicability, we present two theorems that connect Nielsen's quantum circuit complexity for the quantum path planning between two arbitrary quantum many-body states with quantum Fisher complexity (Bures distance) and entanglement generation, respectively. Leveraging these connections, fidelity-based and entanglement-based similarity measures or kernels, which are more practical for implementation, are formulated. Using the two proposed distance measures, unsupervised manifold learning of quantum phases of the bond-alternating XXZ spin chain, the ground state of Kitaev's toric code and random product states, is conducted, demonstrating their superior performance. Moreover, we find that the entanglement-based approach, which captures the long-range structure of quantum entanglement of topological orders, is more robust to local Haar random noises. Relations with classical shadow tomography and shadow kernel learning are also discussed, where the latter can be naturally understood from our approach. Our results establish connections between key concepts and tools of quantum circuit computation, quantum complexity, quantum metrology, and machine learning of topological quantum order.
|
https://arxiv.org/abs/2508.04486
|
Academic Papers
|
svg
|
692c5f3900d3d642524d25277c2588f6157991c9025487f9b14ad67ec88d8851
|
2026-01-16T00:00:00-05:00
|
Random Walk Learning and the Pac-Man Attack
|
arXiv:2508.05663v3 Announce Type: replace-cross Abstract: Random walk (RW)-based algorithms have long been popular in distributed systems due to low overheads and scalability, with recent growing applications in decentralized learning. However, their reliance on local interactions makes them inherently vulnerable to malicious behavior. In this work, we investigate an adversarial threat that we term the ``Pac-Man'' attack, in which a malicious node probabilistically terminates any RW that visits it. This stealthy behavior gradually eliminates active RWs from the network, effectively halting the learning process without triggering failure alarms. To counter this threat, we propose the Average Crossing (AC) algorithm--a fully decentralized mechanism for duplicating RWs to prevent RW extinction in the presence of Pac-Man. Our theoretical analysis establishes that (i) the RW population remains almost surely bounded under AC and (ii) RW-based stochastic gradient descent remains convergent under AC, even in the presence of Pac-Man, with a quantifiable deviation from the true optimum. Our extensive empirical results on both synthetic and real-world datasets corroborate our theoretical findings. Furthermore, they uncover a phase transition in the extinction probability as a function of the duplication threshold. We offer theoretical insights by analyzing a simplified variant of the AC, which sheds light on the observed phase transition.
|
https://arxiv.org/abs/2508.05663
|
Academic Papers
|
svg
|
3f097f166848ce19d16c2c53759a02520d8f9dc9c6d4242456638f971a30620f
|
2026-01-16T00:00:00-05:00
|
A reduced-order derivative-informed neural operator for subsurface fluid-flow
|
arXiv:2509.13620v2 Announce Type: replace-cross Abstract: Neural operators have emerged as cost-effective surrogates for expensive fluid-flow simulators, particularly in computationally intensive tasks such as permeability inversion from time-lapse seismic data, and uncertainty quantification. In these applications, the fidelity of the surrogate's gradients with respect to system parameters is crucial, as the accuracy of downstream tasks, such as optimization and Bayesian inference, relies directly on the quality of the derivative information. Recent advances in physics-informed methods have leveraged derivative information to improve surrogate accuracy. However, incorporating explicit Jacobians can become computationally prohibitive, as the complexity typically scales quadratically with the number of input parameters. To address this limitation, we propose DeFINO (Derivative-based Fisher-score Informed Neural Operator), a reduced-order, derivative-informed training framework. DeFINO integrates Fourier neural operators (FNOs) with a novel derivative-based training strategy guided by the Fisher Information Matrix (FIM). By projecting Jacobians onto dominant eigen-directions identified by the FIM, DeFINO captures critical sensitivity information directly informed by observational data, significantly reducing computational expense. We validate DeFINO through synthetic experiments in the context of subsurface multi-phase fluid-flow, demonstrating improvements in gradient accuracy while maintaining robust forward predictions of underlying fluid dynamics. These results highlight DeFINO's potential to offer practical, scalable solutions for inversion problems in complex real-world scenarios, all at substantially reduced computational cost.
|
https://arxiv.org/abs/2509.13620
|
Academic Papers
|
svg
|
224f6e206c33b640d89b5b73a1e7143a3a51662ec846a35dc93ea17aa7524fca
|
2026-01-16T00:00:00-05:00
|
Effects of Structural Allocation of Geometric Task Diversity in Linear Meta-Learning Models
|
arXiv:2509.18349v3 Announce Type: replace-cross Abstract: Meta-learning aims to leverage information across related tasks to improve prediction on unlabeled data for new tasks when only a small number of labeled observations are available ("few-shot" learning). Increased task diversity is often believed to enhance meta-learning by providing richer information across tasks. However, recent work by Kumar et al. (2022) shows that increasing task diversity, quantified through the overall geometric spread of task representations, can in fact degrade meta-learning prediction performance across a range of models and datasets. In this work, we build on this observation by showing that meta-learning performance is affected not only by the overall geometric variability of task parameters, but also by how this variability is allocated relative to an underlying low-dimensional structure. Similar to Pimonova et al. (2025), we decompose task-specific regression effects into a structurally informative component and an orthogonal, non-informative component. We show theoretically and through simulation that meta-learning prediction degrades when a larger fraction of between-task variability lies in orthogonal, non-informative directions, even when the overall geometric variability of tasks is held fixed.
|
https://arxiv.org/abs/2509.18349
|
Academic Papers
|
svg
|
a861d9d7712098514d13f8490183a9d7f9099049a86d4c9d792e244c432dafc5
|
2026-01-16T00:00:00-05:00
|
Relative Information Gain and Gaussian Process Regression
|
arXiv:2510.04277v2 Announce Type: replace-cross Abstract: The sample complexity of estimating or maximising an unknown function in a reproducing kernel Hilbert space is known to be linked to both the effective dimension and the information gain associated with the kernel. While the information gain has an attractive information-theoretic interpretation, the effective dimension typically results in better rates. We introduce a new quantity called the relative information gain, which measures the sensitivity of the information gain with respect to the observation noise. We show that the relative information gain smoothly interpolates between the effective dimension and the information gain, and that the relative information gain has the same growth rate as the effective dimension. In the second half of the paper, we prove a new PAC-Bayesian excess risk bound for Gaussian process regression. The relative information gain arises naturally from the complexity term in this PAC-Bayesian bound. We prove bounds on the relative information gain that depend on the spectral properties of the kernel. When these upper bounds are combined with our excess risk bound, we obtain minimax-optimal rates of convergence.
|
https://arxiv.org/abs/2510.04277
|
Academic Papers
|
svg
|