| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 29dfeb6b72f4fe44293139cf91c3433de93390ab626ae0568f6eec414e8afec4 | 2026-02-02T00:00:00-05:00 | RoboStriker: Hierarchical Decision-Making for Autonomous Humanoid Boxing | arXiv:2601.22517v1 Announce Type: new Abstract: Achieving human-level competitive intelligence and physical agility in humanoid robots remains a major challenge, particularly in contact-rich and highly dynamic tasks such as boxing. While Multi-Agent Reinforcement Learning (MARL) offers a principled framework for strategic interaction, its direct application to humanoid control is hindered by high-dimensional contact dynamics and the absence of strong physical motion priors. We propose RoboStriker, a hierarchical three-stage framework that enables fully autonomous humanoid boxing by decoupling high-level strategic reasoning from low-level physical execution. The framework first learns a comprehensive repertoire of boxing skills by training a single-agent motion tracker on human motion capture data. These skills are subsequently distilled into a structured latent manifold, regularized by projecting the Gaussian-parameterized distribution onto a unit hypersphere. This topological constraint effectively confines exploration to the subspace of physically plausible motions. In the final stage, we introduce Latent-Space Neural Fictitious Self-Play (LS-NFSP), where competing agents learn competitive tactics by interacting within the latent action space rather than the raw motor space, significantly stabilizing multi-agent training. Experimental results demonstrate that RoboStriker achieves superior competitive performance in simulation and exhibits sim-to-real transfer. Our website is available at RoboStriker. | https://arxiv.org/abs/2601.22517 | Academic Papers | svg |
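The latent regularization described in the RoboStriker abstract (projecting Gaussian-parameterized latents onto a unit hypersphere so exploration stays on a plausible-motion manifold) can be illustrated with a minimal numpy sketch. The function names, latent dimension, and reparameterized-sampling scheme below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_to_sphere(mu: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Project latent mean vectors onto the unit hypersphere (L2-normalize)."""
    norm = np.linalg.norm(mu, axis=-1, keepdims=True)
    return mu / np.maximum(norm, eps)

def sample_latent(mu: np.ndarray, log_std: np.ndarray, rng) -> np.ndarray:
    """Reparameterized Gaussian sample, re-projected so the result stays on the sphere."""
    z = mu + np.exp(log_std) * rng.standard_normal(mu.shape)
    return project_to_sphere(z)

rng = np.random.default_rng(0)
mu = project_to_sphere(rng.standard_normal((4, 16)))  # batch of 4 latents, dim 16
z = sample_latent(mu, np.full((4, 16), -1.0), rng)
print(np.linalg.norm(z, axis=-1))  # every sampled latent has unit norm
```

Keeping every sampled latent at unit norm is what confines the downstream self-play policy to the learned skill subspace.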
| e9a05bae7c0fc57942c50a32c269c106110c3586efd67c901a63649b00e3c360 | 2026-02-02T00:00:00-05:00 | Design Perspective on Materials Experience: A CiteSpace-Based Bibliometric and Visual Analysis of Interdisciplinary Research | arXiv:2601.22518v1 Announce Type: new Abstract: Based on a bibliometric analysis of literature from 2005 to 2024, this study reveals that material experience is undergoing a profound transformation characterized by evolving material definitions, methodological advances, and increasing interdisciplinary integration. Material types now extend beyond traditional substances to encompass virtual and biological media, underscoring a growing emphasis on perception and interaction. Methodologically, the field has transitioned from subjective descriptions to data-driven, quantifiable models focused on objective sensory analysis and multisensory integration to enhance immersion. Key drivers, including human-machine perception convergence, material-driven interface interactions, and the embedding of intelligent interactive functions, propel the discipline toward an experience-centered paradigm reflecting a deep convergence of design, science, and technology. At the national/regional level, the United States, China, Japan, Germany, and the Netherlands lead in contributions, while France, the United Kingdom, and Romania demonstrate significant interdisciplinary progress. At the institutional level, Delft University of Technology, Justus Liebig University Giessen, and the Centre National de la Recherche Scientifique show significant advantages. In particular, the Material-Driven Design theory has established a foundational impact on the discipline, while, regarding general research trends, scholars from the United States, the Netherlands, and Germany maintain the highest academic visibility. Overall, material experience research is at a critical juncture; its future development will depend on progress in material innovation, technological integration, and perceptual quantification, as well as the establishment of socio-cultural values, all of which must be effectively unified through design to address complex evolving needs. | https://arxiv.org/abs/2601.22518 | Academic Papers | svg |
| 01a9503c7c2ce7944bfe43d815361276f87b311b44f30d9dd82a52f6a342b585 | 2026-02-02T00:00:00-05:00 | One Ring to Rule Them All: Unifying Group-Based RL via Dynamic Power-Mean Geometry | arXiv:2601.22521v1 Announce Type: new Abstract: Group-based reinforcement learning has evolved from the arithmetic mean of GRPO to the geometric mean of GMPO. While GMPO improves stability by constraining a conservative objective, it shares a fundamental limitation with GRPO: reliance on a fixed aggregation geometry that ignores the evolving and heterogeneous nature of each trajectory. In this work, we unify these approaches under Power-Mean Policy Optimization (PMPO), a generalized framework that parameterizes the aggregation geometry via the power-mean geometry exponent p. Within this framework, GRPO and GMPO are recovered as special cases. Theoretically, we demonstrate that adjusting p modulates the concentration of gradient updates, effectively reweighting tokens based on their advantage contribution. To determine p adaptively, we introduce a Clip-aware Effective Sample Size (ESS) mechanism. Specifically, we propose a deterministic rule that maps a trajectory's clipping fraction to a target ESS. Then, we solve for the specific p that aligns the trajectory-induced ESS with this target. This allows PMPO to dynamically transition between the aggressive arithmetic mean for reliable trajectories and the conservative geometric mean for unstable ones. Experiments on multiple mathematical reasoning benchmarks demonstrate that PMPO outperforms strong baselines. | https://arxiv.org/abs/2601.22521 | Academic Papers | svg |
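PMPO's central object, the power mean with exponent p, can be made concrete with a toy function: p = 1 gives the arithmetic mean (GRPO-style aggregation) and the p → 0 limit gives the geometric mean (GMPO-style). This is a sketch of the aggregation geometry only, not the paper's token-level objective; the tolerance handling for the p → 0 limit is an implementation choice of this example.

```python
import math

def power_mean(xs, p, tol=1e-9):
    """Power mean with exponent p over positive values xs.
    p = 1 is the arithmetic mean; p -> 0 recovers the geometric mean;
    p < 0 is increasingly conservative (p = -1 is the harmonic mean)."""
    if abs(p) < tol:  # limit p -> 0: geometric mean
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

print(power_mean([1.0, 4.0], 1.0))   # 2.5 (arithmetic, aggressive)
print(power_mean([1.0, 4.0], 0.0))   # 2.0 (geometric, conservative)
print(power_mean([1.0, 4.0], -1.0))  # 1.6 (harmonic, more conservative still)
```

Sliding p continuously between these endpoints is exactly the dial that the paper's clip-aware ESS rule turns per trajectory.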
| fa1718ef73e221d9f2afe4a3de5817863989fd4b8dd4ef15e96e2ce68c184545 | 2026-02-02T00:00:00-05:00 | Can 3D point cloud data improve automated body condition score prediction in dairy cattle? | arXiv:2601.22522v1 Announce Type: new Abstract: Body condition score (BCS) is a widely used indicator of body energy status and is closely associated with metabolic status, reproductive performance, and health in dairy cattle; however, conventional visual scoring is subjective and labor-intensive. Computer vision approaches have been applied to BCS prediction, with depth images widely used because they capture geometric information independent of coat color and texture. More recently, three-dimensional point cloud data have attracted increasing interest due to their ability to represent richer geometric characteristics of animal morphology, but direct head-to-head comparisons with depth image-based approaches remain limited. In this study, we compared top-view depth image and point cloud data for BCS prediction under four settings: 1) unsegmented raw data, 2) segmented full-body data, 3) segmented hindquarter data, and 4) handcrafted feature data. Prediction models were evaluated using data from 1,020 dairy cows collected on a commercial farm, with cow-level cross-validation to prevent data leakage. Depth image-based models consistently achieved higher accuracy than point cloud-based models when unsegmented raw data and segmented full-body data were used, whereas comparable performance was observed when segmented hindquarter data were used. Both depth image and point cloud approaches showed reduced accuracy when handcrafted feature data were employed compared with the other settings. Overall, point cloud-based predictions were more sensitive to noise and model architecture than depth image-based predictions. Taken together, these results indicate that three-dimensional point clouds do not provide a consistent advantage over depth images for BCS prediction in dairy cattle under the evaluated conditions. | https://arxiv.org/abs/2601.22522 | Academic Papers | svg |
| 584924c5435a2bcee3846fe92193fa3237e259e4be035f97722b3c77b8358c12 | 2026-02-02T00:00:00-05:00 | Variational Bayesian Flow Network for Graph Generation | arXiv:2601.22524v1 Announce Type: new Abstract: Graph generation aims to sample discrete node and edge attributes while satisfying coupled structural constraints. Diffusion models for graphs often adopt largely factorized forward-noising, and many flow-matching methods start from factorized reference noise and coordinate-wise interpolation, so node-edge coupling is not encoded by the generative geometry and must be recovered implicitly by the core network, which can be brittle after discrete decoding. Bayesian Flow Networks (BFNs) evolve distribution parameters and naturally support discrete generation. But classical BFNs typically rely on factorized beliefs and independent channels, which limit geometric evidence fusion. We propose Variational Bayesian Flow Network (VBFN), which performs a variational lifting to a tractable joint Gaussian variational belief family governed by structured precisions. Each Bayesian update reduces to solving a symmetric positive definite linear system, enabling coupled node and edge updates within a single fusion step. We construct sample-agnostic sparse precisions from a representation-induced dependency graph, thereby avoiding label leakage while enforcing node-edge consistency. On synthetic and molecular graph datasets, VBFN improves fidelity and diversity, and surpasses baseline methods. | https://arxiv.org/abs/2601.22524 | Academic Papers | svg |
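The VBFN abstract's key computational claim, that each Bayesian update reduces to solving a symmetric positive definite (SPD) linear system, can be sketched with the classic information-form Gaussian fusion, where the posterior precision is the sum of the input precisions and the posterior mean solves an SPD system (here via Cholesky). This is a generic Gaussian-fusion sketch, not the paper's structured-precision construction; `gaussian_fuse` and the toy dimensions are assumptions of this example.

```python
import numpy as np

def gaussian_fuse(prior_prec, prior_mean, obs_prec, obs_mean):
    """Fuse two Gaussian beliefs in information form.
    The posterior precision is the sum of precisions; the posterior mean
    solves an SPD linear system, factored here with Cholesky."""
    post_prec = prior_prec + obs_prec
    rhs = prior_prec @ prior_mean + obs_prec @ obs_mean
    L = np.linalg.cholesky(post_prec)     # post_prec is SPD by construction
    y = np.linalg.solve(L, rhs)           # forward substitution
    post_mean = np.linalg.solve(L.T, y)   # back substitution
    return post_prec, post_mean

I2 = np.eye(2)
prec, mean = gaussian_fuse(I2, np.array([0.0, 0.0]), I2, np.array([2.0, 4.0]))
print(mean)  # [1. 2.] -- equal-precision fusion averages the two means
```

In VBFN the precision matrices are sparse and couple node and edge variables, so one such solve fuses evidence across the whole graph in a single step.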
| 23e8b3992887d3e5232f8af1ac0103aa65b1398501964b44d756eea7b8ddaa14 | 2026-02-02T00:00:00-05:00 | Flexible FTN-OTFS for High-Mobility LEO Satellite-to-Ground Communication | arXiv:2601.22526v1 Announce Type: new Abstract: In this paper, a lightweight LEO satellite-assisted flexible faster-than-Nyquist (FTN)-orthogonal time frequency space (OTFS) (LEO-FFTN-OTFS) scheme is proposed to address the stringent constraints on onboard power consumption and the severe impact of fast time-varying channels in non-terrestrial networks. A rigorous system framework incorporating realistic 3GPP Tapped Delay Line (TDL) channel models is established to accurately capture high-mobility propagation characteristics. To counteract channel aging effects while maintaining low computational complexity, an SNR-aware flexible FTN strategy is introduced, wherein a low-complexity Look-Up Table (LUT) is utilized to adaptively optimize the time-domain compression factor based on instantaneous channel responses. Through this mechanism, the trade-off between rate acceleration and interference penalty is effectively resolved, ensuring that spectral efficiency is maximized while strict reliability constraints are satisfied with minimal processing overhead. Moreover, a comprehensive theoretical analysis is provided, in which analytical expressions for effective throughput, energy efficiency, and bit error rate are derived. Finally, it is demonstrated by extensive simulations that the proposed scheme significantly outperforms static FTN benchmarks, offering a superior balance of high throughput and robustness for next-generation LEO communications. | https://arxiv.org/abs/2601.22526 | Academic Papers | svg |
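The SNR-aware LUT idea above, mapping instantaneous channel quality to a time-domain compression factor, can be sketched in a few lines. The threshold/factor values below are purely illustrative placeholders (the paper does not publish its table here); the only structural point is that higher SNR tolerates more aggressive faster-than-Nyquist packing.

```python
# Hypothetical SNR (dB) -> time-compression-factor table; values are
# illustrative, not taken from the paper. Factor 1.0 means Nyquist-rate
# signaling; smaller factors pack symbols faster than Nyquist.
LUT = [(0.0, 1.00), (5.0, 0.95), (10.0, 0.90), (15.0, 0.85), (20.0, 0.80)]

def compression_factor(snr_db: float) -> float:
    """Pick the most aggressive factor whose SNR threshold is met."""
    factor = LUT[0][1]
    for threshold, tau in LUT:
        if snr_db >= threshold:
            factor = tau
    return factor

print(compression_factor(12.0))  # 0.9: moderate acceleration
print(compression_factor(-3.0))  # 1.0: poor channel, fall back to Nyquist rate
```

A table lookup like this costs essentially nothing per frame, which is the point of the scheme's "minimal processing overhead" claim.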
| 63432c576e7b99dfc159c3609df3e8448786d6ec83a71c13382b0667dd4aafc0 | 2026-02-02T00:00:00-05:00 | $\rho$-$\texttt{EOS}$: Training-free Bidirectional Variable-Length Control for Masked Diffusion LLMs | arXiv:2601.22527v1 Announce Type: new Abstract: Beyond parallel generation and global context modeling, current masked diffusion large language models (dLLMs) suffer from a fundamental limitation: they require a predefined, fixed generation length, which lacks flexibility and forces an inevitable trade-off between output quality and computational efficiency. To address this, we study the denoising dynamics and find that the implicit density ($\rho$) of end-of-sequence ($\texttt{EOS}$) tokens serves as a reliable signal of generation sufficiency. In particular, the evolving implicit $\texttt{EOS}$ density during denoising reveals whether the current masked space is excessive or insufficient, thereby guiding the adjustment direction for generation length. Building on this insight, we propose $\textbf{$\rho$-$\texttt{EOS}$}$, a training-free, single-stage strategy that enables bidirectional variable-length generation for masked dLLMs. Unlike prior two-stage approaches--which require separate length adjustment and iterative mask insertion phases while supporting only unidirectional expansion--$\textbf{$\rho$-$\texttt{EOS}$}$ achieves bidirectional length adjustment within a unified denoising process by continuously estimating the implicit $\texttt{EOS}$ density: excessively high density triggers $\texttt{MASK}$ token contraction, while insufficient density induces expansion. Extensive experiments on mathematics and code benchmarks demonstrate that $\textbf{$\rho$-$\texttt{EOS}$}$ achieves comparable performance while substantially improving inference efficiency and token utilization. | https://arxiv.org/abs/2601.22527 | Academic Papers | svg |
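The bidirectional control rule in the ρ-EOS abstract, contract the masked region when EOS density is too high and expand it when too low, reduces to a small decision function. The string-token representation, target density, and step size below are toy assumptions for illustration, not the paper's estimator.

```python
def length_adjustment(tokens, target_density=0.1, step=4):
    """Toy bidirectional length control: compare the density of EOS tokens
    in the current window against a target and return how many MASK tokens
    to add (positive) or remove (negative)."""
    density = sum(t == "EOS" for t in tokens) / len(tokens)
    if density > target_density:
        return -step   # too much EOS mass: the masked space is excessive, contract
    if density < target_density:
        return +step   # too little EOS mass: the masked space is insufficient, expand
    return 0

print(length_adjustment(["EOS"] * 3 + ["x"] * 7))  # -4: contract
print(length_adjustment(["x"] * 10))               # +4: expand
```

Because the signal is read off the model's own denoising state, no retraining or second adjustment phase is needed, which is what makes the strategy training-free and single-stage.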
| 3f5e9f3f793f959501f6d4aba71812a592c1548adbe45fb29cd2c43256a162fc | 2026-02-02T00:00:00-05:00 | Darwinian Memory: A Training-Free Self-Regulating Memory System for GUI Agent Evolution | arXiv:2601.22528v1 Announce Type: new Abstract: Multimodal Large Language Model (MLLM) agents facilitate Graphical User Interface (GUI) automation but struggle with long-horizon, cross-application tasks due to limited context windows. While memory systems provide a viable solution, existing paradigms struggle to adapt to dynamic GUI environments, suffering from a granularity mismatch between high-level intent and low-level execution, and context pollution where the static accumulation of outdated experiences drives agents into hallucination. To address these bottlenecks, we propose the Darwinian Memory System (DMS), a self-evolving architecture that constructs memory as a dynamic ecosystem governed by the law of survival of the fittest. DMS decomposes complex trajectories into independent, reusable units for compositional flexibility, and implements Utility-driven Natural Selection to track survival value, actively pruning suboptimal paths and inhibiting high-risk plans. This evolutionary pressure compels the agent to derive superior strategies. Extensive experiments on real-world multi-app benchmarks validate that DMS boosts general-purpose MLLMs without training costs or architectural overhead, achieving average gains of 18.0% in success rate and 33.9% in execution stability, while reducing task latency, establishing it as an effective self-evolving memory system for GUI tasks. | https://arxiv.org/abs/2601.22528 | Academic Papers | svg |
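The "Utility-driven Natural Selection" mechanism above can be sketched as an exponential-moving-average utility per memory unit with a survival threshold. The field names, EMA weighting, and threshold are illustrative assumptions of this sketch, not DMS's actual scoring rule.

```python
def natural_selection(memory, beta=0.9, min_utility=0.2):
    """Utility-driven pruning sketch: each memory unit carries an EMA'd
    utility updated from its latest reward; units whose utility falls
    below a survival threshold are pruned from the ecosystem."""
    survivors = []
    for unit in memory:
        u = beta * unit["utility"] + (1 - beta) * unit["last_reward"]
        unit = {**unit, "utility": u}
        if u >= min_utility:
            survivors.append(unit)
    return survivors

memory = [
    {"plan": "open app via search", "utility": 0.8, "last_reward": 1.0},
    {"plan": "blind tap sequence",  "utility": 0.1, "last_reward": 0.0},
]
memory = natural_selection(memory)
print([m["plan"] for m in memory])  # only the useful plan survives
```

Pruning by tracked utility rather than recency is what keeps outdated experiences from accumulating and polluting the agent's context.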
| 5df7dd010b0d4ff13e5782e49b87830bf7f6d72b31af33f69af7bbb2a8dd19a4 | 2026-02-02T00:00:00-05:00 | SHED Light on Segmentation for Dense Prediction | arXiv:2601.22529v1 Announce Type: new Abstract: Dense prediction infers per-pixel values from a single image and is fundamental to 3D perception and robotics. Although real-world scenes exhibit strong structure, existing methods treat the task as an independent pixel-wise prediction, often resulting in structural inconsistencies. We propose SHED, a novel encoder-decoder architecture that enforces a geometric prior explicitly by incorporating segmentation into dense prediction. Through bidirectional hierarchical reasoning, segment tokens are hierarchically pooled in the encoder and unpooled in the decoder to reverse the hierarchy. The model is supervised only at the final output, allowing the segment hierarchy to emerge without explicit segmentation supervision. SHED improves depth boundary sharpness and segment coherence, while demonstrating strong cross-domain generalization from synthetic to real-world environments. Its hierarchy-aware decoder better captures global 3D scene layouts, leading to improved semantic segmentation performance. Moreover, SHED enhances 3D reconstruction quality and reveals interpretable part-level structures that are often missed by conventional pixel-wise methods. | https://arxiv.org/abs/2601.22529 | Academic Papers | svg |
| 6aba2504029313f380cc53fcf9c4182400b539a453bc445e5f88eb29bb3a4c6f | 2026-02-02T00:00:00-05:00 | Enhancing TableQA through Verifiable Reasoning Trace Reward | arXiv:2601.22530v1 Announce Type: new Abstract: A major challenge in training TableQA agents, compared to standard text- and image-based agents, is that answers cannot be inferred from a static input but must be reasoned through stepwise transformations of the table state, introducing multi-step reasoning complexity and environmental interaction. This leads to a research question: Can explicit feedback on table transformation actions improve model reasoning capability? In this work, we introduce RE-Tab, a plug-and-play framework that architecturally enhances trajectory search via lightweight, training-free reward modeling by formulating the problem as a Partially Observable Markov Decision Process. We demonstrate that providing explicit verifiable rewards during State Transition (``What is the best action?'') and Simulative Reasoning (``Am I sure about the output?'') is crucial to steer the agent's navigation in table states. By enforcing stepwise reasoning with reward feedback in table transformations, RE-Tab achieves state-of-the-art performance in TableQA with an almost 25% drop in inference cost. Furthermore, a direct plug-and-play implementation of RE-Tab brings up to a 41.77% improvement in QA accuracy and a 33.33% drop in test-time inference samples for a consistent answer. A consistent improvement pattern across various LLMs and state-of-the-art benchmarks further confirms RE-Tab's generalisability. The repository is available at https://github.com/ThomasK1018/RE_Tab . | https://arxiv.org/abs/2601.22530 | Academic Papers | svg |
| 0ca15af99d892db10cdde24e523048d6c906c2aa63476a42b37aec14e93d21d2 | 2026-02-02T00:00:00-05:00 | Learn from A Rationalist: Distilling Intermediate Interpretable Rationales | arXiv:2601.22531v1 Announce Type: new Abstract: Because of the pervasive use of deep neural networks (DNNs), especially in high-stakes domains, the interpretability of DNNs has received increased attention. The general idea of rationale extraction (RE) is to provide an interpretable-by-design framework for DNNs via a select-predict architecture where two neural networks learn jointly to perform feature selection and prediction, respectively. Given only the remote supervision from the final task prediction, the process of learning to select subsets of features (or \emph{rationales}) requires searching in the space of all possible feature combinations, which is computationally challenging and even harder when the base neural networks are not sufficiently capable. To improve the predictive performance of RE models that are based on less capable or smaller neural networks (i.e., the students), we propose \textbf{REKD} (\textbf{R}ationale \textbf{E}xtraction with \textbf{K}nowledge \textbf{D}istillation) where a student RE model learns from the rationales and predictions of a teacher (i.e., a \emph{rationalist}) in addition to the student's own RE optimization. This structural adjustment to RE aligns well with how humans could learn effectively from interpretable and verifiable knowledge. Because of the neural-model agnostic nature of the method, any black-box neural network could be integrated as a backbone model. To demonstrate the viability of REKD, we conduct experiments with multiple variants of BERT and vision transformer (ViT) models. Our experiments across language and vision classification datasets (i.e., IMDB movie reviews, CIFAR 10 and CIFAR 100) show that REKD significantly improves the predictive performance of the student RE models. | https://arxiv.org/abs/2601.22531 | Academic Papers | svg |
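A distillation objective of the kind REKD describes, where the student learns from the teacher's predictions and rationales on top of its own task loss, can be sketched as a three-term loss. The specific terms, weights, and squared-error rationale match below are assumptions of this sketch; the paper's actual objective may differ.

```python
import numpy as np

def rekd_style_loss(student_probs, teacher_probs, label,
                    student_mask, teacher_mask, lam=0.5, mu=0.5):
    """Sketch of a rationale-distillation objective:
    task cross-entropy + KL to the teacher's prediction
    + agreement between student and teacher rationale masks.
    lam and mu are illustrative hyperparameters."""
    ce = -np.log(student_probs[label])                                   # task loss
    kl = np.sum(teacher_probs * np.log(teacher_probs / student_probs))   # prediction distillation
    rationale = np.mean((student_mask - teacher_mask) ** 2)              # rationale match
    return ce + lam * kl + mu * rationale

p = np.array([0.5, 0.5])
mask = np.array([1.0, 0.0, 1.0])
loss = rekd_style_loss(p, p, 0, mask, mask)
print(loss)  # ~0.693: teacher and student agree, only the task CE term remains
```

When the masks and predictions agree, the distillation terms vanish and the student is trained by task supervision alone, so the teacher only "pushes" where the student deviates.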
| 18d4fe7b9649506807582d1f7268f751d738cf556d680df32516e595c8206202 | 2026-02-02T00:00:00-05:00 | Demystifying Design Choices of Reinforcement Fine-tuning: A Batched Contextual Bandit Learning Perspective | arXiv:2601.22532v1 Announce Type: new Abstract: The reinforcement fine-tuning area is undergoing an explosion of papers, largely on optimizing design choices. Though performance gains are often claimed, inconsistent conclusions also arise from time to time, making the progress illusory. Reflecting on this illusion, we still lack principled answers to two fundamental questions: 1) what is the role of each design choice? 2) which ones are critical? This paper aims to shed light on them. The underlying challenge is that design choices are entangled together, making their contribution to learning and generalization difficult to attribute. To address this challenge, we first construct a minimalist baseline for disentangling factors: one rollout per query in each round, the outcome reward serving as the training signal without any advantage trick, and a batch size of thirty-two. This baseline connects to batched contextual bandit learning, which facilitates experimental analysis. Centering around this baseline, we design an experiment pipeline, examining the marginal gains of factors like advantage, number of rollouts, etc. Experiments on three base models and two datasets not only reveal new understanding of the role of various design choices on learning and generalization dynamics, but also identify critical ones that deserve more effort. | https://arxiv.org/abs/2601.22532 | Academic Papers | svg |
| 073e893c134a1c02516f8d54160cb50dc36839299440108827aca025fe48443c | 2026-02-02T00:00:00-05:00 | LEAP -- Live Experiments for Active Pedagogy | arXiv:2601.22534v1 Announce Type: new Abstract: Interactive computational environments can help students explore algorithmic concepts through collaborative hands-on experimentation. However, static and instructor-controlled demos in lectures limit engagement. Even when interactive visualizations are used, interactions are solely controlled by the instructor, leaving students as passive observers. In addition, the tools used for demonstration often vary significantly, as they are typically developed by individual instructors. Consequently, the visualizations remain confined to a single classroom, rather than being shared and adapted across courses or reused by other instructors. To address this gap and foster active engagement in live classrooms, we present a lightweight and seamless software framework named LEAP for developing interactive computational lab exercises using a simple idea: remotely callable instructor-defined functions. Using API endpoints and a provided client, students can discover and then call instructor-defined functions remotely from their coding environment using scripts or interactive notebooks. Each function call is time-stamped and persistently logged in a database, allowing real-time visualization of participation, diverse solution paths, common pitfalls, and live feedback through collaboration, gamification, and quizzes. Labs are packaged as self-contained folders, each containing their own remotely callable functions. We provide example labs to demonstrate applications relevant for numerical analysis, machine learning, and algorithms courses, and mention some in electrical engineering (EE), economics, and physics. These capabilities enhance engagement and provide instructors with actionable insights into learning processes. With a standardized lab format and an online directory for community-contributed labs, we aim to foster a global ecosystem for exchanging and expanding interactive pedagogy enabled by LEAP. | https://arxiv.org/abs/2601.22534 | Academic Papers | svg |
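LEAP's core idea, instructor-defined functions that students discover and call remotely, with every call time-stamped and logged, can be sketched as a registry plus a logging call wrapper. The decorator name, in-memory log, and example function are illustrative assumptions; LEAP itself uses API endpoints and a database rather than this in-process stand-in.

```python
import time

REGISTRY = {}   # discoverable instructor-defined functions
CALL_LOG = []   # stand-in for LEAP's persistent database log

def leap_function(fn):
    """Decorator sketch: register a function so students can discover it."""
    REGISTRY[fn.__name__] = fn
    return fn

def remote_call(name, *args, student="anon"):
    """Stand-in for a remote call: execute, then time-stamp and log it."""
    result = REGISTRY[name](*args)
    CALL_LOG.append({"t": time.time(), "student": student,
                     "fn": name, "args": args})
    return result

@leap_function
def bisection_step(lo, hi):
    """Example instructor-defined lab function."""
    return (lo + hi) / 2.0

print(sorted(REGISTRY))                          # students discover: ['bisection_step']
print(remote_call("bisection_step", 0.0, 1.0))   # 0.5
```

The log is what enables the real-time participation views and feedback described above: each entry records who called what, when, and with which arguments.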
| a445f9807f085c2ad5e5cd36f63a62d825f2db53ea1c2da5c6eb864f62113155 | 2026-02-02T00:00:00-05:00 | High Rate Efficient Local List Decoding from HDX | arXiv:2601.22535v1 Announce Type: new Abstract: We construct the first (locally computable, approximately) locally list decodable codes with rate, efficiency, and error tolerance approaching the information theoretic limit, a core regime of interest for the complexity theoretic task of hardness amplification. Our algorithms run in polylogarithmic time and sub-logarithmic depth, which together with classic constructions in the unique decoding (low-noise) regime leads to the resolution of several long-standing problems in coding and complexity theory: 1. Near-optimally input-preserving hardness amplification (and corresponding fast PRGs) 2. Constant rate codes with $\log(N)$-depth list decoding (RNC$^1$) 3. Complexity-preserving distance amplification. Our codes are built on the powerful theory of (local-spectral) high dimensional expanders (HDX). At a technical level, we make two key contributions. First, we introduce a new framework for ($\mathrm{polylog}(N)$-round) belief propagation on HDX that leverages a mix of local correction and global expansion to control error build-up while maintaining high rate. Second, we introduce the notion of strongly explicit local routing on HDX: local algorithms that, given any two target vertices, output a random path between them in only polylogarithmic time (and, preferably, sub-logarithmic depth). Constructing such schemes on certain coset HDX allows us to instantiate our otherwise combinatorial framework in polylogarithmic time and low depth, completing the result. | https://arxiv.org/abs/2601.22535 | Academic Papers | svg |
| 6cc6617e97edc4858525bfeaca226d6276b6f3dd0b7e5e633bba840d184c80f0 | 2026-02-02T00:00:00-05:00 | Decoding in Geometry: Alleviating Embedding-Space Crowding for Complex Reasoning | arXiv:2601.22536v1 Announce Type: new Abstract: Sampling-based decoding underlies complex reasoning in large language models (LLMs), where decoding strategies critically shape model behavior. Temperature- and truncation-based methods reshape the next-token distribution through global probability reweighting or thresholding to balance the quality-diversity tradeoff. However, they operate solely on token probabilities, ignoring fine-grained relationships among tokens in the embedding space. We uncover a novel phenomenon, embedding-space crowding, where the next-token distribution concentrates its probability mass on geometrically close tokens in the embedding space. We quantify crowding at multiple granularities and find a statistical association with reasoning success in mathematical problem solving. Motivated by this finding, we propose CraEG, a plug-and-play sampling method that mitigates crowding through geometry-guided reweighting. CraEG is training-free, single-pass, and compatible with standard sampling strategies. Experiments on multiple models and benchmarks demonstrate improved generation performance, with gains in robustness and diversity metrics. | https://arxiv.org/abs/2601.22536 | Academic Papers | svg |
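One way to picture geometry-guided reweighting of the kind CraEG proposes: discount a token's probability by the probability mass of its near-neighbors in embedding space, then renormalize, so crowded clusters stop dominating the distribution. The cosine-radius neighborhood, exponential discount, and all parameter values here are assumptions of this sketch, not CraEG's actual rule.

```python
import numpy as np

def crowding_reweight(probs, emb, radius=0.9, alpha=1.0):
    """Geometry-guided reweighting sketch: tokens whose embeddings are very
    close (cosine similarity > radius) to other high-probability tokens get
    discounted by the neighbor mass, then the distribution is renormalized.
    radius and alpha are illustrative knobs."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = e @ e.T
    np.fill_diagonal(sim, 0.0)                       # a token is not its own neighbor
    crowd = (sim > radius).astype(float) @ probs     # neighbor probability mass
    new = probs * np.exp(-alpha * crowd)
    return new / new.sum()

probs = np.array([0.45, 0.45, 0.10])
emb = np.array([[1.0, 0.0], [0.999, 0.04], [0.0, 1.0]])  # first two tokens crowd together
w = crowding_reweight(probs, emb)
print(w)  # mass shifts from the two crowded tokens toward the isolated one
```

The effect is local rather than global: unlike temperature, only tokens sitting inside a crowded neighborhood are discounted.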
| 4a9b6add54722857e70b684974478f70643b47630ba62eec2bd0544591acc2fb | 2026-02-02T00:00:00-05:00 | Learning to Defer in Non-Stationary Time Series via Switching State-Space Models | arXiv:2601.22538v1 Announce Type: new Abstract: We study Learning to Defer for non-stationary time series with partial feedback and time-varying expert availability. At each time step, the router selects an available expert, observes the target, and sees only the queried expert's prediction. We model signed expert residuals using L2D-SLDS, a factorized switching linear-Gaussian state-space model with context-dependent regime transitions, a shared global factor enabling cross-expert information transfer, and per-expert idiosyncratic states. The model supports expert entry and pruning via a dynamic registry. Using one-step-ahead predictive beliefs, we propose an IDS-inspired routing rule that trades off predicted cost against information gained about the latent regime and shared factor. Experiments show improvements over contextual-bandit baselines and a no-shared-factor ablation. | https://arxiv.org/abs/2601.22538 | Academic Papers | svg |
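The IDS-inspired routing rule above, trading predicted cost against information gained about the latent regime, can be sketched as a one-line argmin over available experts. The dictionary keys, trade-off weight, and toy numbers are illustrative assumptions; the paper computes both quantities from one-step-ahead predictive beliefs rather than taking them as given.

```python
def ids_route(experts, lam=0.5):
    """IDS-flavored routing sketch: among available experts, pick the one
    minimizing predicted cost minus a bonus (weighted by lam) for the
    information its outcome would reveal about the latent regime."""
    available = [e for e in experts if e["available"]]
    return min(available, key=lambda e: e["pred_cost"] - lam * e["info_gain"])

experts = [
    {"name": "a", "available": True,  "pred_cost": 1.0, "info_gain": 0.2},
    {"name": "b", "available": True,  "pred_cost": 1.2, "info_gain": 0.9},
    {"name": "c", "available": False, "pred_cost": 0.1, "info_gain": 0.0},
]
print(ids_route(experts)["name"])  # 'b': slightly costlier but far more informative
```

Note that the cheap expert "c" is never considered because availability is time-varying, exactly the constraint the routing problem is built around.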
| 270faf0399291da7cd6459f8fac2debc32b8e6c477cd8cadd9f118cdf0206602 | 2026-02-02T00:00:00-05:00 | Neural-Inspired Posterior Approximation (NIPA) | arXiv:2601.22539v1 Announce Type: new Abstract: Humans learn efficiently from their environment by engaging multiple interacting neural systems that support distinct yet complementary forms of control, including model-based (goal-directed) planning, model-free (habitual) responding, and episodic memory-based learning. Model-based mechanisms compute prospective action values using an internal model of the environment, supporting flexible but computationally costly planning; model-free mechanisms cache value estimates and build heuristics that enable fast, efficient habitual responding; and memory-based mechanisms allow rapid adaptation from individual experience. In this work, we aim to elucidate the computational principles underlying this biological efficiency and translate them into a sampling algorithm for scalable Bayesian inference through effective exploration of the posterior distribution. More specifically, our proposed algorithm comprises three components: a model-based module that uses the target distribution for guided but computationally slow sampling; a model-free module that uses previous samples to learn patterns in the parameter space, enabling fast, reflexive sampling without directly evaluating the expensive target distribution; and an episodic-control module that supports rapid sampling by recalling specific past events (i.e., samples). We show that this approach advances Bayesian methods and facilitates their application to large-scale statistical machine learning problems. In particular, we apply our proposed framework to Bayesian deep learning, with an emphasis on proper and principled uncertainty quantification. | https://arxiv.org/abs/2601.22539 | Academic Papers | svg |
| 2c70d072a6fd2f05349227a54d74179c0293d760a3b52f61e7656d12fb05 | 2026-02-02T00:00:00-05:00 | Benchmarking Long Roll-outs of Auto-regressive Neural Operators for the Compressible Navier-Stokes Equations with Conserved Quantity Correction | arXiv:2601.22541v1 Announce Type: new Abstract: Deep learning has been proposed as an efficient alternative for the numerical approximation of PDE solutions, offering fast, iterative simulation of PDEs through the approximation of solution operators. However, deep learning solutions have struggled to perform well over long prediction durations due to the accumulation of auto-regressive error, which is compounded by the inability of models to conserve physical quantities. In this work, we present conserved quantity correction, a model-agnostic technique for incorporating physical conservation criteria within deep learning models. Our results demonstrate consistent improvement in the long-term stability of auto-regressive neural operator models, regardless of the model architecture. Furthermore, we analyze the performance of neural operators from the spectral domain, highlighting significant limitations of present architectures. These results highlight the need for future work to consider architectures that place specific emphasis on high frequency components, which are integral to the understanding and modeling of turbulent flows. | https://arxiv.org/abs/2601.22541 | Academic Papers | svg |
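A conserved-quantity correction of the model-agnostic kind described above can be sketched as a post-hoc rescaling: after each autoregressive step, rescale the prediction so an integral invariant matches the previous state exactly. The multiplicative rescaling and the choice of `np.sum` as the conserved quantity are assumptions of this sketch (it also presumes the predicted total is nonzero); the paper's correction may enforce conservation differently.

```python
import numpy as np

def conserve(pred, prev, quantity=np.sum):
    """Model-agnostic conserved-quantity correction sketch: rescale the
    autoregressive prediction so an integral invariant (here, total 'mass'
    via np.sum) matches the previous state exactly. Assumes the predicted
    quantity is nonzero, so the ratio is well defined."""
    return pred * (quantity(prev) / quantity(pred))

prev = np.array([1.0, 2.0, 3.0])   # total mass 6.0
pred = np.array([1.1, 2.2, 3.3])   # one-step drift to 6.6
corrected = conserve(pred, prev)
print(corrected.sum())             # 6.0: the invariant is restored
```

Applied after every roll-out step, such a correction removes the slow drift in conserved quantities that otherwise compounds with autoregressive error.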
| 11a4f060201c4c598f953409fd87890de0708b68c8b77d962e041ce781d6c823 | 2026-02-02T00:00:00-05:00 | Detect and Act: Automated Dynamic Optimizer through Meta-Black-Box Optimization | arXiv:2601.22542v1 Announce Type: new Abstract: Dynamic Optimization Problems (DOPs) are challenging to address due to their complex nature, i.e., dynamic environment variation. Evolutionary Computation methods hold a natural advantage in solving DOPs since they resemble dynamic biological evolution. However, existing evolutionary dynamic optimization methods rely heavily on human-crafted adaptive strategies to detect environment variation in DOPs and then adapt the searching strategy accordingly. These hand-crafted strategies may perform ineffectively in out-of-the-box scenarios. In this paper, we propose a reinforcement learning-assisted approach to enable automated variation detection and self-adaption in evolutionary algorithms. This is achieved by borrowing the bi-level learning-to-optimize idea from recent Meta-Black-Box Optimization works. We use a deep Q-network as optimization dynamics detector and searching strategy adapter: it is fed as input the current-step optimization state and then dictates desired control parameters to the underlying evolutionary algorithm for next-step optimization. The learning objective is to maximize the expected performance gain across a problem distribution. Once trained, our approach can generalize to unseen DOPs with automated environment variation detection and self-adaption. To facilitate comprehensive validation, we further construct an easy-to-difficult DOPs testbed with diverse synthetic instances. Extensive benchmark results demonstrate flexible searching behavior and superior performance of our approach in solving DOPs, compared to state-of-the-art baselines. | https://arxiv.org/abs/2601.22542 | Academic Papers | svg |
778801e16ead80e527ef56f16a58b08f415656225a2b8050d3db4b83c164011f
|
2026-02-02T00:00:00-05:00
|
SCaLRec: Semantic Calibration for LLM-enabled Cloud-Device Sequential Recommendation
|
arXiv:2601.22543v1 Announce Type: new Abstract: Cloud-device collaborative recommendation partitions computation across the cloud and user devices: the cloud provides semantic user modeling, while the device leverages recent interactions and cloud semantic signals for privacy-preserving, responsive reranking. With large language models (LLMs) on the cloud, semantic user representations can improve sequential recommendation by capturing high-level intent. However, regenerating such representations via cloud LLM inference for every request is often infeasible at real-world scale. As a result, on-device reranking commonly reuses a cached cloud semantic user embedding across requests. We empirically identify a cloud semantic staleness effect: reused embeddings become less aligned with the user's latest interactions, leading to measurable ranking degradation. Most existing LLM-enabled cloud-device recommenders are designed around on-demand cloud semantics, either by assuming low-latency cloud LLM access or by regenerating semantic embeddings per request. When per-request regeneration is infeasible and cached semantics must be reused, two technical challenges arise: (1) deciding when cached cloud semantics remain useful for on-device reranking, and (2) maintaining ranking quality when the cloud LLM cannot be invoked and only cached semantics are available. To address this gap, we introduce Semantic Calibration for LLM-enabled Cloud-Device Recommendation (SCaLRec). First, it estimates the reliability of cached semantics under the user's latest interactions. Second, an on-device semantic calibration module adjusts the cached semantic embedding using up-to-date interaction evidence, without per-request cloud LLM involvement. Experiments on real-world datasets show that SCaLRec consistently improves recommendation performance over strong baselines under cloud semantic staleness.
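The two stages — reliability estimation for cached semantics, then on-device calibration — can be sketched as follows. The cosine-similarity reliability estimator and the linear blending rule are assumptions for illustration; the abstract does not specify either component's actual form:

```python
import numpy as np

def semantic_reliability(cached_emb, recent_item_embs):
    # Stage 1 (illustrative): score how well the cached cloud embedding
    # still matches the latest interactions, via mean cosine similarity
    # clipped to [0, 1].
    sims = recent_item_embs @ cached_emb / (
        np.linalg.norm(recent_item_embs, axis=1)
        * np.linalg.norm(cached_emb) + 1e-9)
    return float(np.clip(sims.mean(), 0.0, 1.0))

def calibrate(cached_emb, recent_item_embs):
    # Stage 2 (illustrative): blend the cached embedding toward the mean
    # of recent interaction embeddings; staler semantics get less weight.
    w = semantic_reliability(cached_emb, recent_item_embs)
    return w * cached_emb + (1.0 - w) * recent_item_embs.mean(axis=0)

rng = np.random.default_rng(0)
cached = rng.normal(size=16)            # stale cloud user embedding
recent = rng.normal(size=(5, 16))       # latest on-device interactions
calibrated = calibrate(cached, recent)
```

The point of the sketch is the division of labor: everything above runs on-device with no cloud LLM call, using only the cached embedding and local interaction evidence.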
|
https://arxiv.org/abs/2601.22543
|
Academic Papers
|
svg
|
9566e0788af3a126b146e52f229b04d5ee2312919f29d5fec7c71fd0f270ca2b
|
2026-02-02T00:00:00-05:00
|
Adapting Reinforcement Learning for Path Planning in Constrained Parking Scenarios
|
arXiv:2601.22545v1 Announce Type: new Abstract: Real-time path planning in constrained environments remains a fundamental challenge for autonomous systems. Classical planners, while effective under perfect perception assumptions, are often sensitive to real-world perception constraints and rely on online search procedures that incur high computational costs. In complex surroundings, this renders real-time deployment prohibitive. To overcome these limitations, we introduce a Deep Reinforcement Learning (DRL) framework for real-time path planning in parking scenarios. In particular, we focus on challenging scenes with tight spaces that require a high number of reversal maneuvers and adjustments. Unlike classical planners, our solution does not require ideal and structured perception, and in principle, could avoid the need for additional modules such as localization and tracking, resulting in a simpler and more practical implementation. Also, at test time, the policy generates actions through a single forward pass at each step, which is lightweight enough for real-time deployment. The task is formulated as a sequential decision-making problem grounded in bicycle-model dynamics, enabling the agent to directly learn navigation policies that respect vehicle kinematics and environmental constraints in the closed-loop setting. A new benchmark is developed to support both training and evaluation, capturing diverse and challenging scenarios. Our approach achieves state-of-the-art success rates and efficiency, surpassing classical planner baselines by +96% in success rate and +52% in efficiency. Furthermore, we release our benchmark as an open-source resource for the community to foster future research in autonomous systems. The benchmark and accompanying tools are available at https://github.com/dqm5rtfg9b-collab/Constrained_Parking_Scenarios.
|
https://arxiv.org/abs/2601.22545
|
Academic Papers
|
svg
|
4aed398ee270aa6acc5b873395cc158c382788c66ad5ad2d1be818977b44deb2
|
2026-02-02T00:00:00-05:00
|
Towards the Holographic Characteristic of LLMs for Efficient Short-text Generation
|
arXiv:2601.22546v1 Announce Type: new Abstract: The recent advancements in Large Language Models (LLMs) have attracted interest in exploring their in-context learning abilities and chain-of-thought capabilities. However, there are few studies investigating the specific traits related to the powerful generation capacity of LLMs. This paper aims to delve into the generation characteristics exhibited by LLMs. Through our investigation, we have discovered that language models tend to capture target-side keywords at the beginning of the generation process. We name this phenomenon the Holographic Characteristic of language models. For the purpose of exploring this characteristic and further improving the inference efficiency of language models, we propose a plugin called HOLO, which leverages the Holographic Characteristic to extract target-side keywords from language models within a limited number of generation steps and complements the sentence with a parallel lexically constrained text generation method. To verify the effectiveness of HOLO, we conduct extensive experiments on language models of varying architectures and scales in the short-text generation scenario. The results demonstrate that HOLO achieves comparable performance to the baselines in terms of both automatic and human-like evaluation metrics and highlight the potential of the Holographic Characteristic.
|
https://arxiv.org/abs/2601.22546
|
Academic Papers
|
svg
|
cf105407e39697ec888c5ef52501ae5a41b0f568100d7c6a524c1a7bc9cc49c8
|
2026-02-02T00:00:00-05:00
|
PersonaAct: Simulating Short-Video Users with Personalized Agents for Counterfactual Filter Bubble Auditing
|
arXiv:2601.22547v1 Announce Type: new Abstract: Short-video platforms rely on personalized recommendation, raising concerns about filter bubbles that narrow content exposure. Auditing such phenomena at scale is challenging because real user studies are costly and privacy-sensitive, and existing simulators fail to reproduce realistic behaviors due to their reliance on textual signals and weak personalization. We propose PersonaAct, a framework for simulating short-video users with persona-conditioned multimodal agents trained on real behavioral traces for auditing filter bubbles in breadth and depth. PersonaAct synthesizes interpretable personas through automated interviews combining behavioral analysis with structured questioning, then trains agents on multimodal observations using supervised fine-tuning and reinforcement learning. We deploy trained agents for filter bubble auditing and evaluate bubble breadth via content diversity and bubble depth via escape potential. The evaluation demonstrates substantial improvements in fidelity over generic LLM baselines, enabling realistic behavior reproduction. Results reveal significant content narrowing over interaction. However, we find that Bilibili demonstrates the strongest escape potential. We release the first open multimodal short-video dataset and code to support reproducible auditing of recommender systems.
|
https://arxiv.org/abs/2601.22547
|
Academic Papers
|
svg
|
0e3a65376aa7591afb0be5c48c8024c229ea56a84b12ce14e25c6ac2efcd5952
|
2026-02-02T00:00:00-05:00
|
Are LLM Evaluators Really Narcissists? Sanity Checking Self-Preference Evaluations
|
arXiv:2601.22548v1 Announce Type: new Abstract: Recent research has shown that large language models (LLMs) favor their own outputs when acting as judges, undermining the integrity of automated post-training and evaluation workflows. However, it is difficult to disentangle which evaluation biases are explained by narcissism versus general experimental confounds, distorting measurements of self-preference bias. We discover a core methodological confound which could reduce measurement error by 89.6%. Specifically, LLM evaluators may deliver self-preferring verdicts on queries that the judge itself answered incorrectly; this would be true regardless of whether one of the candidate responses is its own. To decouple self-preference signals from noisy outputs on hard problems, we introduce an Evaluator Quality Baseline, which compares the probability that a judge incorrectly votes for itself against the probability that it votes for an incorrect response from another model. When this simple baseline is applied to 37,448 queries, only 51% of initial findings retain statistical significance. Finally, we turn towards characterizing the entropy of "easy" versus "hard" evaluation votes from LLM judges. Our corrective baseline enables future research on self-preference by eliminating noisy data from potential solutions. More widely, this work contributes to the growing body of work on cataloging and isolating judge-bias effects.
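The Evaluator Quality Baseline amounts to comparing two conditional error rates. A minimal sketch, with hypothetical record fields (the abstract defines only the comparison, not a data schema): self-preference beyond this baseline suggests narcissism rather than ordinary judging error on hard queries.

```python
def evaluator_quality_baseline(records):
    # For each judged query, compare:
    #   p_self : P(judge votes for its own response | own response incorrect)
    #   p_other: P(judge votes for the other response | other response incorrect)
    # Field names ("chose_self", "self_correct", "other_correct") are
    # illustrative assumptions, not the paper's schema.
    self_wrong = [r for r in records if not r["self_correct"]]
    other_wrong = [r for r in records if not r["other_correct"]]
    p_self = sum(r["chose_self"] for r in self_wrong) / max(len(self_wrong), 1)
    p_other = sum(not r["chose_self"] for r in other_wrong) / max(len(other_wrong), 1)
    return p_self, p_other

records = [
    {"chose_self": True,  "self_correct": False, "other_correct": True},
    {"chose_self": False, "self_correct": False, "other_correct": True},
    {"chose_self": False, "self_correct": True,  "other_correct": False},
    {"chose_self": True,  "self_correct": True,  "other_correct": False},
]
p_self, p_other = evaluator_quality_baseline(records)
```

In this toy sample the judge mistakenly votes for its own wrong answer at the same rate it mistakenly votes for another model's wrong answer (both 0.5), so no self-preference beyond baseline evaluator error would be inferred.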
|
https://arxiv.org/abs/2601.22548
|
Academic Papers
|
svg
|
f1bc6c6c967ea8db2ecac3a2b41eb269889c0a9c9033f7ca75d4f9466c95948e
|
2026-02-02T00:00:00-05:00
|
Exo-Plore: Exploring Exoskeleton Control Space through Human-aligned Simulation
|
arXiv:2601.22550v1 Announce Type: new Abstract: Exoskeletons show great promise for enhancing mobility, but providing appropriate assistance remains challenging due to the complexity of human adaptation to external forces. Current state-of-the-art approaches for optimizing exoskeleton controllers require extensive human experiments in which participants must walk for hours, creating a paradox: those who could benefit most from exoskeleton assistance, such as individuals with mobility impairments, are rarely able to participate in such demanding procedures. We present Exo-Plore, a simulation framework that combines neuromechanical simulation with deep reinforcement learning to optimize hip exoskeleton assistance without requiring real human experiments. Exo-Plore can (1) generate realistic gait data that captures human adaptation to assistive forces, (2) produce reliable optimization results despite the stochastic nature of human gait, and (3) generalize to pathological gaits, showing strong linear relationships between pathology severity and optimal assistance.
|
https://arxiv.org/abs/2601.22550
|
Academic Papers
|
svg
|
e5c4330b0ff117404b7b09ce315a68b435e2a544c8f639ccfc9e4ec8baab3903
|
2026-02-02T00:00:00-05:00
|
Hybrid Cross-Device Localization via Neural Metric Learning and Feature Fusion
|
arXiv:2601.22551v1 Announce Type: new Abstract: We present a hybrid cross-device localization pipeline developed for the CroCoDL 2025 Challenge. Our approach integrates a shared retrieval encoder and two complementary localization branches: a classical geometric branch using feature fusion and PnP, and a neural feed-forward branch (MapAnything) for metric localization conditioned on geometric inputs. A neural-guided candidate pruning strategy further filters unreliable map frames based on translation consistency, while depth-conditioned localization refines metric scale and translation precision on Spot scenes. These components jointly lead to significant improvements in recall and accuracy across both HYDRO and SUCCU benchmarks. Our method achieved a final score of 92.62 (R@0.5m, 5{\deg}) during the challenge.
|
https://arxiv.org/abs/2601.22551
|
Academic Papers
|
svg
|
f1ce5877cc93989f274f1ade868585fa98a4ea497eccdda5289d88934e1a7dca
|
2026-02-02T00:00:00-05:00
|
LeanArchitect: Automating Blueprint Generation for Humans and AI
|
arXiv:2601.22554v1 Announce Type: new Abstract: Large-scale formalization projects in Lean rely on blueprints: structured dependency graphs linking informal mathematical exposition to formal declarations. While blueprints are central to human collaboration, existing tooling treats the informal ($\LaTeX$) and formal (Lean) components as largely decoupled artifacts, leading to maintenance overhead and limiting integration with AI automation. We present LeanArchitect, a Lean package for extracting, managing, and exporting blueprint data directly from Lean code. LeanArchitect introduces a declarative annotation mechanism that associates formal declarations with blueprint metadata, automatically infers dependency information, and generates $\LaTeX$ blueprint content synchronized with the Lean development. This design eliminates duplication between formal and informal representations and eases fine-grained progress tracking for both human contributors and AI-based theorem provers. We demonstrate the practicality of LeanArchitect through the automated conversion of several large existing blueprint-driven projects, and through a human--AI collaboration case study formalizing a multivariate Taylor theorem. Our results show that LeanArchitect improves maintainability, exposes latent inconsistencies in existing blueprints, and provides an effective interface for integrating AI tools into real-world formalization workflows.
|
https://arxiv.org/abs/2601.22554
|
Academic Papers
|
svg
|
5f7b2ce11f5c58ae2f8e866e8825fe3f3db6bb6c7d5ee1bb771350d456a7b3a4
|
2026-02-02T00:00:00-05:00
|
VocBulwark: Towards Practical Generative Speech Watermarking via Additional-Parameter Injection
|
arXiv:2601.22556v1 Announce Type: new Abstract: Generated speech achieves human-level naturalness but escalates security risks of misuse. However, existing watermarking methods fail to reconcile fidelity with robustness, as they rely either on simple superposition in the noise space or on intrusive alterations to model weights. To bridge this gap, we propose VocBulwark, an additional-parameter injection framework that freezes generative model parameters to preserve perceptual quality. Specifically, we design a Temporal Adapter to deeply entangle watermarks with acoustic attributes, synergizing with a Coarse-to-Fine Gated Extractor to resist advanced attacks. Furthermore, we develop an Accuracy-Guided Optimization Curriculum that dynamically orchestrates gradient flow to resolve the optimization conflict between fidelity and robustness. Comprehensive experiments demonstrate that VocBulwark achieves high-capacity and high-fidelity watermarking, offering robust defense against complex practical scenarios, with resilience to Codec regenerations and variable-length manipulations.
|
https://arxiv.org/abs/2601.22556
|
Academic Papers
|
svg
|
89d73e4572feed2c78b4c76bf4a828bb65f44c84e22ac7a76b44357be255891a
|
2026-02-02T00:00:00-05:00
|
Recursive Mutexes in Separation Logic
|
arXiv:2601.22557v1 Announce Type: new Abstract: Mutexes (i.e., locks) are well understood in separation logic, and can be specified in terms of either protecting an invariant or atomically changing the state of the lock. In this abstract, we develop the same styles of specifications for \emph{recursive} mutexes, a common variant of mutexes in object-oriented languages such as C++ and Java. A recursive mutex can be acquired any number of times by the same thread, and our specifications treat all acquires/releases uniformly, with clients only needing to determine whether they hold the mutex when accessing the lock invariant.
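The behavior being specified — a mutex the owning thread may re-acquire any number of times, where the protected invariant is logically exposed only on the first acquire and re-established on the last release — can be sketched on top of a plain lock. This is an illustrative implementation of the standard recursive-mutex semantics, not code from the paper:

```python
import threading

class RecursiveMutex:
    # A reentrant mutex built from a non-reentrant lock. In the
    # separation-logic view sketched in the abstract, the lock invariant
    # changes hands only on the 0 -> 1 acquire and the 1 -> 0 release;
    # intermediate nested acquires/releases are pure bookkeeping.
    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None
        self._count = 0

    def acquire(self):
        me = threading.get_ident()
        if self._owner == me:       # only the owner can observe itself here,
            self._count += 1        # so this nested path needs no blocking
            return
        self._lock.acquire()        # first acquire: take the invariant
        self._owner = me
        self._count = 1

    def release(self):
        assert self._owner == threading.get_ident(), "release by non-owner"
        self._count -= 1
        if self._count == 0:        # last release: re-establish the invariant
            self._owner = None
            self._lock.release()

m = RecursiveMutex()
m.acquire()
m.acquire()      # same thread, nested: succeeds without deadlock
m.release()
m.release()      # count back to 0; lock is free for other threads
```

Python's built-in `threading.RLock` provides the same semantics; the explicit version above just makes the owner/count state visible, matching the abstract's point that clients need only track whether they already hold the mutex.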
|
https://arxiv.org/abs/2601.22557
|
Academic Papers
|
svg
|
66fba21a6f09f01dda12911291371aa19535399a237b4bbfa1c2f979518cf483
|
2026-02-02T00:00:00-05:00
|
Inverse acoustic scattering for random obstacles with multi-frequency data
|
arXiv:2601.22560v1 Announce Type: new Abstract: We study an inverse random obstacle scattering problem in $\mathbb{R}^2$ where the scatterer is formulated by a Gaussian process defined on the angular parameter domain. Equipped with a modified covariance function which is mathematically well-defined and physically consistent, the Gaussian process admits a parameterization via Karhunen--Lo\`eve (KL) expansion. Based on observed multi-frequency data, we develop a two-stage inversion method: the first stage reconstructs the baseline shape of the random scatterer and the second stage estimates the statistical characteristics of the boundary fluctuations, including KL eigenvalues and covariance hyperparameters. We further provide theoretical justifications for the modeling and inversion pipeline, covering well-definedness of the Gaussian-process model, convergence for the two-stage procedure and a brief discussion on uniqueness. Numerical experiments demonstrate stable recovery of both geometric and statistical information for obstacles with simple and more complex shapes.
|
https://arxiv.org/abs/2601.22560
|
Academic Papers
|
svg
|
4b0a4f1326033fda7b7f36f3ad99d29778fa67134f7dbfd1e7b8cd5ed78a51ad
|
2026-02-02T00:00:00-05:00
|
Approximately Optimal Multi-Stream Quickest Change Detection for Gaussian Streams
|
arXiv:2601.22561v1 Announce Type: new Abstract: This paper considers the bandit quickest change detection problem in which one stream contains a change-point that shifts its distribution by an unknown amount in an unknown direction. We consider an agent that can observe only a single stream at each time, and the goal of the agent is to detect this change as quickly as possible while controlling for false alarms. We propose an algorithm that combines a decaying-$\epsilon$-greedy stream switching rule with an efficient change-point detection algorithm for unknown post-change means. We provide bounds on the expected detection delay and average run length to false alarm for our algorithm, and based on these results we prove our algorithm is approximately optimal with respect to a commonly used surrogate. This work is the first to provide provable guarantees in this setting without strong assumptions such as a discretized post-change parameter set or a lower bound on the magnitude of change.
|
https://arxiv.org/abs/2601.22561
|
Academic Papers
|
svg
|
ec04ffa2a2bd41c63b9e67a79d06f72e877d9a502006b7adeff41c0c35f2dbc5
|
2026-02-02T00:00:00-05:00
|
EUGens: Efficient, Unified, and General Dense Layers
|
arXiv:2601.22563v1 Announce Type: new Abstract: Efficient neural networks are essential for scaling machine learning models to real-time applications and resource-constrained environments. Fully-connected feedforward layers (FFLs) introduce computation and parameter count bottlenecks within neural network architectures. To address this challenge, in this work, we propose a new class of dense layers that generalize standard fully-connected feedforward layers, \textbf{E}fficient, \textbf{U}nified and \textbf{Gen}eral dense layers (EUGens). EUGens leverage random features to approximate standard FFLs and go beyond them by incorporating a direct dependence on the input norms in their computations. The proposed layers unify existing efficient FFL extensions and improve efficiency by reducing inference complexity from quadratic to linear time. They also lead to \textbf{the first} unbiased algorithms approximating FFLs with arbitrary polynomial activation functions. Furthermore, EUGens reduce the parameter count and computational overhead while preserving the expressive power and adaptability of FFLs. We also present a layer-wise knowledge transfer technique that bypasses backpropagation, enabling efficient adaptation of EUGens to pre-trained models. Empirically, we observe that integrating EUGens into Transformers and MLPs yields substantial improvements in inference speed (up to \textbf{27}\%) and memory efficiency (up to \textbf{30}\%) across a range of tasks, including image classification, language model pre-training, and 3D scene reconstruction. Overall, our results highlight the potential of EUGens for the scalable deployment of large-scale neural networks in real-world scenarios.
|
https://arxiv.org/abs/2601.22563
|
Academic Papers
|
svg
|
a7e67d9eeec97cdcb1967dbbab6e73a99ba5e145e9989e9e0800f3cec051782e
|
2026-02-02T00:00:00-05:00
|
Quantum $(r,\delta)$-Locally Recoverable BCH and Homothetic-BCH Codes
|
arXiv:2601.22567v1 Announce Type: new Abstract: Quantum $(r,\delta)$-locally recoverable codes ($(r,\delta)$-LRCs) are the quantum version of classical $(r,\delta)$-LRCs designed to recover multiple failures in large-scale distributed and cloud storage systems. A quantum $(r,\delta)$-LRC, $Q(C)$, can be constructed from an $(r,\delta)$-LRC, $C$, which is Euclidean or Hermitian dual-containing. This article is devoted to studying how to get quantum $(r,\delta)$-LRCs from BCH and homothetic-BCH codes. As a consequence, we give pure quantum $(r,\delta)$-LRCs which are optimal for the Singleton-like bound.
|
https://arxiv.org/abs/2601.22567
|
Academic Papers
|
svg
|
d50b597d13cd8634429a6fffc43cdbe0ab7664c7c1219058b5529a5400a10451
|
2026-02-02T00:00:00-05:00
|
Whispers of Wealth: Red-Teaming Google's Agent Payments Protocol via Prompt Injection
|
arXiv:2601.22569v1 Announce Type: new Abstract: Large language model (LLM) based agents are increasingly used to automate financial transactions, yet their reliance on contextual reasoning exposes payment systems to prompt-driven manipulation. The Agent Payments Protocol (AP2) aims to secure agent-led purchases through cryptographically verifiable mandates, but its practical robustness remains underexplored. In this work, we perform an AI red-teaming evaluation of AP2 and identify vulnerabilities arising from indirect and direct prompt injection. We introduce two attack techniques, the Branded Whisper Attack and the Vault Whisper Attack, which manipulate product ranking and extract sensitive user data, respectively. Using a functional AP2 based shopping agent built with Gemini-2.5-Flash and the Google ADK framework, we experimentally validate that simple adversarial prompts can reliably subvert agent behavior. Our findings reveal critical weaknesses in current agentic payment architectures and highlight the need for stronger isolation and defensive safeguards in LLM-mediated financial systems.
|
https://arxiv.org/abs/2601.22569
|
Academic Papers
|
svg
|
fe21cf13f8d619a3fb0b6d8c7bf2af198a9fc5616df37fdbd0fd6616d92e3182
|
2026-02-02T00:00:00-05:00
|
Leveraging Data to Say No: Memory Augmented Plug-and-Play Selective Prediction
|
arXiv:2601.22570v1 Announce Type: new Abstract: Selective prediction aims to endow predictors with a reject option, to avoid low confidence predictions. However, existing literature has primarily focused on closed-set tasks, such as visual question answering with predefined options or fixed-category classification. This paper considers selective prediction for visual language foundation models, addressing a taxonomy of tasks ranging from closed to open set and from finite to unbounded vocabularies, as in image captioning. We seek training-free approaches of low-complexity, applicable to any foundation model and consider methods based on external vision-language model embeddings, like CLIP. This is denoted as Plug-and-Play Selective Prediction (PaPSP). We identify two key challenges: (1) instability of the visual-language representations, leading to high variance in image-text embeddings, and (2) poor calibration of similarity scores. To address these issues, we propose a memory augmented PaPSP (MA-PaPSP) model, which augments PaPSP with a retrieval dataset of image-text pairs. This is leveraged to reduce embedding variance by averaging retrieved nearest-neighbor pairs and is complemented by the use of contrastive normalization to improve score calibration. Through extensive experiments on multiple datasets, we show that MA-PaPSP outperforms PaPSP and other selective prediction baselines for selective captioning, image-text matching, and fine-grained classification. Code is publicly available at https://github.com/kingston-aditya/MA-PaPSP.
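The variance-reduction half of MA-PaPSP — averaging a query embedding with its retrieved nearest neighbors from the memory of image-text pairs — can be sketched as below. The equal-weight averaging and re-normalization are illustrative assumptions; the abstract does not give the exact recipe, and the contrastive-normalization score calibration is not sketched here:

```python
import numpy as np

def stabilized_embedding(query_emb, memory_embs, k=5):
    # Retrieve the k nearest memory embeddings by cosine similarity and
    # average them with the query to damp embedding variance.
    sims = memory_embs @ query_emb / (
        np.linalg.norm(memory_embs, axis=1)
        * np.linalg.norm(query_emb) + 1e-9)
    nearest = memory_embs[np.argsort(-sims)[:k]]
    out = (query_emb + nearest.sum(axis=0)) / (k + 1)
    return out / np.linalg.norm(out)            # unit norm for cosine scoring

rng = np.random.default_rng(0)
memory = rng.normal(size=(100, 16))             # stand-in retrieval memory
query = rng.normal(size=16)                     # stand-in CLIP-style embedding
emb = stabilized_embedding(query, memory, k=5)
```

Because this is training-free and touches only embeddings, it plugs in front of any selective-prediction score without modifying the underlying foundation model, which is the "plug-and-play" property the abstract emphasizes.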
|
https://arxiv.org/abs/2601.22570
|
Academic Papers
|
svg
|
3a0d93404894da300ece621cf78a41a493b794288769ca324d60570d7571756a
|
2026-02-02T00:00:00-05:00
|
PerfGuard: A Performance-Aware Agent for Visual Content Generation
|
arXiv:2601.22571v1 Announce Type: new Abstract: The advancement of Large Language Model (LLM)-powered agents has enabled automated task processing through reasoning and tool invocation capabilities. However, existing frameworks often operate under the idealized assumption that tool executions are invariably successful, relying solely on textual descriptions that fail to distinguish precise performance boundaries and cannot adapt to iterative tool updates. This gap introduces uncertainty in planning and execution, particularly in domains like visual content generation (AIGC), where nuanced tool performance significantly impacts outcomes. To address this, we propose PerfGuard, a performance-aware agent framework for visual content generation that systematically models tool performance boundaries and integrates them into task planning and scheduling. Our framework introduces three core mechanisms: (1) Performance-Aware Selection Modeling (PASM), which replaces generic tool descriptions with a multi-dimensional scoring system based on fine-grained performance evaluations; (2) Adaptive Preference Update (APU), which dynamically optimizes tool selection by comparing theoretical rankings with actual execution rankings; and (3) Capability-Aligned Planning Optimization (CAPO), which guides the planner to generate subtasks aligned with performance-aware strategies. Experimental comparisons against state-of-the-art methods demonstrate PerfGuard's advantages in tool selection accuracy, execution reliability, and alignment with user intent, validating its robustness and practical utility for complex AIGC tasks. The project code is available at https://github.com/FelixChan9527/PerfGuard.
|
https://arxiv.org/abs/2601.22571
|
Academic Papers
|
svg
|
e2c782c4c5add64f3c82e6b08efa2c5ab6bc37249add4fc7b3b4f5f71fd7a648
|
2026-02-02T00:00:00-05:00
|
DELNet: Continuous All-in-One Weather Removal via Dynamic Expert Library
|
arXiv:2601.22573v1 Announce Type: new Abstract: All-in-one weather image restoration methods are valuable in practice but depend on pre-collected data and require retraining for unseen degradations, leading to high cost. We propose DELNet, a continual learning framework for weather image restoration. DELNet integrates a judging valve that measures task similarity to distinguish new from known tasks, and a dynamic expert library that stores experts trained on different degradations. For new tasks, the valve selects top-k experts for knowledge transfer while adding new experts to capture task-specific features; for known tasks, the corresponding experts are directly reused. This design enables continuous optimization without retraining existing models. Experiments on OTS, Rain100H, and Snow100K demonstrate that DELNet surpasses state-of-the-art continual learning methods, achieving PSNR gains of 16\%, 11\%, and 12\%, respectively. These results highlight the effectiveness, robustness, and efficiency of DELNet, which reduces retraining cost and enables practical deployment in real-world scenarios.
|
https://arxiv.org/abs/2601.22573
|
Academic Papers
|
svg
|
6406b6ca46cec78f3355f13ce8a27185938fc06a41a0de10fc2daebcc54582ff
|
2026-02-02T00:00:00-05:00
|
Mitigating Hallucinations in Video Large Language Models via Spatiotemporal-Semantic Contrastive Decoding
|
arXiv:2601.22574v1 Announce Type: new Abstract: Although Video Large Language Models perform remarkably well across tasks such as video understanding, question answering, and reasoning, they still suffer from the problem of hallucination, which refers to generating outputs that are inconsistent with explicit video content or factual evidence. However, existing decoding methods for mitigating video hallucinations, while considering the spatiotemporal characteristics of videos, mostly rely on heuristic designs. As a result, they fail to precisely capture the root causes of hallucinations and their fine-grained temporal and semantic correlations, leading to limited robustness and generalization in complex scenarios. To more effectively mitigate video hallucinations, we propose a novel decoding strategy termed Spatiotemporal-Semantic Contrastive Decoding. This strategy constructs negative features by deliberately disrupting the spatiotemporal consistency and semantic associations of video features, and suppresses video hallucinations through contrastive decoding against the original video features during inference. Extensive experiments demonstrate that our method not only effectively mitigates the occurrence of hallucinations, but also preserves the general video understanding and reasoning capabilities of the model.
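The decoding step admits a compact sketch: tokens that remain probable even under deliberately corrupted (spatiotemporally shuffled or semantically broken) video features are hallucination-prone and get penalized. The specific combination rule and the weight `alpha` below are assumptions for illustration; the abstract does not state the formula:

```python
import numpy as np

def contrastive_decode(logits_orig, logits_neg, alpha=1.0):
    # Boost evidence grounded in the intact video features and subtract
    # evidence that survives spatiotemporal/semantic corruption.
    return (1 + alpha) * logits_orig - alpha * logits_neg

logits_orig = np.array([1.0, 1.2, 0.5])   # intact video features
logits_neg  = np.array([0.2, 1.4, 0.5])   # corrupted ("negative") features
adjusted = contrastive_decode(logits_orig, logits_neg)
```

In this toy example the model's raw favorite (index 1) is favored even more strongly by the corrupted features, so contrastive decoding flips the choice to the video-grounded token at index 0.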
|
https://arxiv.org/abs/2601.22574
|
Academic Papers
|
svg
|
e0bbf5279755c8c74dd8b68475d6c1bc8157c7b529399bda1f99bfdb72a0ed03
|
2026-02-02T00:00:00-05:00
|
PhoStream: Benchmarking Real-World Streaming for Omnimodal Assistants in Mobile Scenarios
|
arXiv:2601.22575v1 Announce Type: new Abstract: Multimodal Large Language Models excel at offline audio-visual understanding, but their ability to serve as mobile assistants in continuous real-world streams remains underexplored. In daily phone use, mobile assistants must track streaming audio-visual inputs and respond at the right time, yet existing benchmarks are often restricted to multiple-choice questions or use shorter videos. In this paper, we introduce PhoStream, the first mobile-centric streaming benchmark that unifies on-screen and off-screen scenarios to evaluate video, audio, and temporal reasoning. PhoStream contains 5,572 open-ended QA pairs from 578 videos across 4 scenarios and 10 capabilities. We build it with an Automated Generative Pipeline backed by rigorous human verification, and evaluate models using a realistic Online Inference Pipeline and LLM-as-a-Judge evaluation for open-ended responses. Experiments reveal a temporal asymmetry in LLM-judged scores (0-100): models perform well on Instant and Backward tasks (Gemini 3 Pro exceeds 80), but drop sharply on Forward tasks (16.40), largely due to early responses before the required visual and audio cues appear. This highlights a fundamental limitation: current MLLMs struggle to decide when to speak, not just what to say. Code and datasets used in this work will be made publicly accessible at https://github.com/Lucky-Lance/PhoStream.
|
https://arxiv.org/abs/2601.22575
|
Academic Papers
|
svg
|
714f603026ad42726fdd4d1cad02920d8714c62aa53660235d8f77be21588f32
|
2026-02-02T00:00:00-05:00
|
FedDis: A Causal Disentanglement Framework for Federated Traffic Prediction
|
arXiv:2601.22578v1 Announce Type: new Abstract: Federated learning offers a promising paradigm for privacy-preserving traffic prediction, yet its performance is often challenged by the non-identically and independently distributed (non-IID) nature of decentralized traffic data. Existing federated methods frequently struggle with this data heterogeneity, typically entangling globally shared patterns with client-specific local dynamics within a single representation. In this work, we postulate that this heterogeneity stems from the entanglement of two distinct generative sources: client-specific localized dynamics and cross-client global spatial-temporal patterns. Motivated by this perspective, we introduce FedDis, a novel framework that, to the best of our knowledge, is the first to leverage causal disentanglement for federated spatial-temporal prediction. Architecturally, FedDis comprises a dual-branch design wherein a Personalized Bank learns to capture client-specific factors, while a Global Pattern Bank distills common knowledge. This separation enables robust cross-client knowledge transfer while preserving high adaptability to unique local environments. Crucially, a mutual information minimization objective is employed to enforce informational orthogonality between the two branches, thereby ensuring effective disentanglement. Comprehensive experiments conducted on four real-world benchmark datasets demonstrate that FedDis consistently achieves state-of-the-art performance, promising efficiency, and superior expandability.
|
https://arxiv.org/abs/2601.22578
|
Academic Papers
|
svg
|
4f92ce4314539c70a6e3a1daafa07a9f43eb698e3b0886b9a870f1784d11f8b8
|
2026-02-02T00:00:00-05:00
|
Non-Intrusive Graph-Based Bot Detection for E-Commerce Using Inductive Graph Neural Networks
|
arXiv:2601.22579v1 Announce Type: new Abstract: Malicious bots pose a growing threat to e-commerce platforms by scraping data, hoarding inventory, and perpetrating fraud. Traditional bot mitigation techniques, including IP blacklists and CAPTCHA-based challenges, are increasingly ineffective or intrusive, as modern bots leverage proxies, botnets, and AI-assisted evasion strategies. This work proposes a non-intrusive graph-based bot detection framework for e-commerce that models user session behavior through a graph representation and applies an inductive graph neural network for classification. The approach captures both relational structure and behavioral semantics, enabling accurate identification of subtle automated activity that evades feature-based methods. Experiments on real-world e-commerce traffic demonstrate that the proposed inductive graph model outperforms a strong session-level multilayer perceptron baseline in terms of AUC and F1 score. Additional adversarial perturbation and cold-start simulations show that the model remains robust under moderate graph modifications and generalizes effectively to previously unseen sessions and URLs. The proposed framework is deployment-friendly, integrates with existing systems without client-side instrumentation, and supports real-time inference and incremental updates, making it suitable for practical e-commerce security deployments.
|
https://arxiv.org/abs/2601.22579
|
Academic Papers
|
svg
|
ac5244c58e80ebcf4209745c4a535d9630f9dad3cd8883b0ea4072d18b2bcbb5
|
2026-02-02T00:00:00-05:00
|
SpanNorm: Reconciling Training Stability and Performance in Deep Transformers
|
arXiv:2601.22580v1 Announce Type: new Abstract: The success of Large Language Models (LLMs) hinges on the stable training of deep Transformer architectures. A critical design choice is the placement of normalization layers, leading to a fundamental trade-off: the ``PreNorm'' architecture ensures training stability at the cost of potential performance degradation in deep models, while the ``PostNorm'' architecture offers strong performance but suffers from severe training instability. In this work, we propose SpanNorm, a novel technique designed to resolve this dilemma by integrating the strengths of both paradigms. Structurally, SpanNorm establishes a clean residual connection that spans the entire transformer block to stabilize signal propagation, while employing a PostNorm-style computation that normalizes the aggregated output to enhance model performance. We provide a theoretical analysis demonstrating that SpanNorm, combined with a principled scaling strategy, maintains bounded signal variance throughout the network, preventing the gradient issues that plague PostNorm models, and also alleviating the representation collapse of PreNorm. Empirically, SpanNorm consistently outperforms standard normalization schemes in both dense and Mixture-of-Experts (MoE) scenarios, paving the way for more powerful and stable Transformer architectures.
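The span-wide residual plus PostNorm-style aggregate normalization described above can be sketched as follows. This is one plausible reading of the abstract, not the authors' implementation; the paper's principled scaling strategy is omitted, and `attn`/`mlp` are placeholder sublayers.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Standard LayerNorm over the last axis.
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def span_norm_block(x, attn, mlp):
    # Sketch: sublayer outputs accumulate on a single residual stream that
    # spans the whole block, and only the aggregated block output is
    # normalized, PostNorm-style. Illustrative reading of the abstract only.
    h = x + attn(x)
    h = h + mlp(h)
    return layer_norm(h)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# Toy linear sublayers stand in for attention and MLP.
out = span_norm_block(x, attn=lambda t: 0.1 * t, mlp=lambda t: 0.1 * t)
```

Because the residual path inside the block is identity, signal propagation stays clean; the single normalization at the block boundary is what gives the PostNorm-style output statistics.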
|
https://arxiv.org/abs/2601.22580
|
Academic Papers
|
svg
|
cc4c1fddab6955afccbd4bd38e2ed120a058adadf8766d45cd2810279028ed97
|
2026-02-02T00:00:00-05:00
|
Cross-Domain Few-Shot Learning for Hyperspectral Image Classification Based on Mixup Foundation Model
|
arXiv:2601.22581v1 Announce Type: new Abstract: Although cross-domain few-shot learning (CDFSL) for hyperspectral image (HSI) classification has attracted significant research interest, existing works often rely on an unrealistic data augmentation procedure in the form of external noise to enlarge the sample size, thus greatly simplifying the issue of data scarcity. They involve a large number of parameters for model updates, making them prone to the overfitting problem. To the best of our knowledge, none has explored the strength of the foundation model, which has strong generalization power and can be quickly adapted to downstream tasks. This paper proposes the MIxup FOundation MOdel (MIFOMO) for CDFSL of HSI classification. MIFOMO is built upon the concept of a remote sensing (RS) foundation model, pre-trained across a large scale of RS problems and thus featuring generalizable features. The notion of coalescent projection (CP) is introduced to quickly adapt the foundation model to downstream tasks while freezing the backbone network. The concept of mixup domain adaptation (MDM) is proposed to address the extreme domain discrepancy problem. Last but not least, the label smoothing concept is implemented to cope with noisy pseudo-label problems. Our rigorous experiments demonstrate the advantage of MIFOMO, where it beats prior arts by up to a 14% margin. The source code of MIFOMO is open-sourced at https://github.com/Naeem-Paeedeh/MIFOMO for reproducibility and convenient further study.
|
https://arxiv.org/abs/2601.22581
|
Academic Papers
|
svg
|
a6a0698432551e62d7bc75db7f310a0b01713b065066249cfbd5cacce8b747bc
|
2026-02-02T00:00:00-05:00
|
MC-GRPO: Median-Centered Group Relative Policy Optimization for Small-Rollout Reinforcement Learning
|
arXiv:2601.22582v1 Announce Type: new Abstract: Group-relative policy optimization methods train language models by generating multiple rollouts per prompt and normalizing rewards with a shared mean reward baseline. In resource-constrained settings where the rollout budget is small, accuracy often degrades. We find that noise in the shared baseline induces advantage sign flips, where some rollouts receive an incorrect advantage sign, and the update direction is reversed. To address this, we propose Median-Centered Group Relative Policy Optimization (MC-GRPO), a simple and effective solution for small-rollout training. Our main idea is to replace the mean baseline with a median baseline: the median is far less sensitive to outlier rewards than the mean, mitigating the sign flips under small rollout size (G). We generate one additional rollout for median reference (G+1), and compute advantages by using the group median. With an odd-sized group, exactly one completion is the median and receives zero advantage; we exclude this pivot rollout from backpropagation so the number of gradient-contributing samples per prompt remains G, preserving the core update cost of standard G-rollout training. Across various GRPO-family methods and a wide range of models and scales, this median-centered training consistently improves stability and final accuracy in the low-rollout regime, reducing the gap between G=2 and G=8 to within 1%. Code is available at https://github.com/lotusroot-kim/MC-GRPO
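The median-centered advantage computation described above can be sketched in a few lines. This is a hypothetical helper based only on the abstract, not the authors' implementation; with tied rewards, the first rollout attaining the median value is treated as the pivot.

```python
import statistics

def mc_grpo_advantages(rewards):
    # Sketch of the MC-GRPO idea: generate G+1 rollouts so the group is
    # odd-sized, center rewards on the group median, and drop the pivot
    # rollout (the one attaining the median) from backpropagation so that
    # G samples contribute gradients.
    assert len(rewards) % 2 == 1, "use an odd-sized group (G+1 rollouts)"
    med = statistics.median(rewards)
    pivot = rewards.index(med)  # first rollout attaining the median value
    return {i: r - med for i, r in enumerate(rewards) if i != pivot}

# G=2 plus one median-reference rollout.
adv = mc_grpo_advantages([0.0, 1.0, 0.0])
```

An outlier reward shifts a mean baseline for every rollout in the group, but leaves the median untouched, which is the robustness property the abstract relies on to avoid sign flips.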
|
https://arxiv.org/abs/2601.22582
|
Academic Papers
|
svg
|
7dec15012bcdcc86388f5cf2826de9e21b0834ff77f2119e7ee8b49b9906b927
|
2026-02-02T00:00:00-05:00
|
Scalable Fair Influence Blocking Maximization via Approximately Monotonic Submodular Optimization
|
arXiv:2601.22584v1 Announce Type: new Abstract: Influence Blocking Maximization (IBM) aims to select a positive seed set to suppress the spread of negative influence. However, existing IBM methods focus solely on maximizing blocking effectiveness, overlooking fairness across communities. To address this issue, we formalize fairness in IBM and justify Demographic Parity (DP) as a notion that is particularly well aligned with its semantics. Yet enforcing DP is computationally challenging: prior work typically formulates DP as a Linear Programming (LP) problem and relies on costly solvers, rendering them impractical for large-scale networks. In this paper, we propose a DP-aware objective while maintaining an approximately monotonic submodular structure, enabling efficient optimization with theoretical guarantees. We integrate this objective with blocking effectiveness through a tunable scalarization, yielding a principled fairness-effectiveness trade-off. Building on this structure, we develop CELF-R, an accelerated seed selection algorithm that exploits approximate submodularity to eliminate redundant evaluations and naturally supports Pareto front construction. Extensive experiments demonstrate that CELF-R consistently outperforms state-of-the-art baselines, achieving a $(1-1/e-\psi)$-approximate solution while maintaining high efficiency.
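For orientation, the lazy-greedy acceleration that CELF-R builds on can be sketched generically. The snippet below is plain CELF for a monotone submodular set function; the abstract's fairness-aware objective and its handling of *approximate* submodularity are not reproduced here, and the coverage function is a toy stand-in.

```python
import heapq

def lazy_greedy(ground_set, f, k):
    # CELF-style lazy greedy: keep stale marginal gains in a max-heap and
    # only recompute the top element's gain. By submodularity, a gain that
    # is fresh for the current round and still at the top must be optimal.
    selected, f_sel = [], f(frozenset())
    heap = [(-(f(frozenset([v])) - f_sel), v, 0) for v in ground_set]
    heapq.heapify(heap)
    for it in range(1, k + 1):
        while True:
            neg_gain, v, stamp = heapq.heappop(heap)
            if stamp == it:  # gain recomputed this round: pick it
                break
            gain = f(frozenset(selected + [v])) - f_sel
            heapq.heappush(heap, (-gain, v, it))
        selected.append(v)
        f_sel -= neg_gain
    return selected

# Toy coverage objective: f(S) = number of items covered by the chosen sets.
cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
f = lambda S: len(set().union(*(cover[v] for v in S))) if S else 0
picked = lazy_greedy(["a", "b", "c"], f, 2)
```

The elimination of redundant evaluations comes from the stale-gain check: most elements never have their marginal gain recomputed after the first round.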
|
https://arxiv.org/abs/2601.22584
|
Academic Papers
|
svg
|
e9735ec3b0e5a1699da6b5327c952f04819aa513a718a1d6553d7f7426a6fda0
|
2026-02-02T00:00:00-05:00
|
HetCCL: Accelerating LLM Training with Heterogeneous GPUs
|
arXiv:2601.22585v1 Announce Type: new Abstract: The rapid growth of large language models is driving organizations to expand their GPU clusters, often with GPUs from multiple vendors. However, current deep learning frameworks lack support for collective communication across heterogeneous GPUs, leading to inefficiency and higher costs. We present HetCCL, a collective communication library that unifies vendor-specific backends and enables RDMA-based communication across GPUs without requiring driver modifications. HetCCL introduces two novel mechanisms that enable cross-vendor communication while leveraging optimized vendor libraries, NVIDIA NCCL and AMD RCCL. Evaluations on a multi-vendor GPU cluster show that HetCCL matches NCCL and RCCL performance in homogeneous setups while uniquely scaling in heterogeneous environments, enabling practical, high-performance training with both NVIDIA and AMD GPUs without changes to existing deep learning applications.
|
https://arxiv.org/abs/2601.22585
|
Academic Papers
|
svg
|
3d4ce788667b0530063287210a5395fc8d6fc53934863901a9dc47392869cf2d
|
2026-02-02T00:00:00-05:00
|
WED-Net: A Weather-Effect Disentanglement Network with Causal Augmentation for Urban Flow Prediction
|
arXiv:2601.22586v1 Announce Type: new Abstract: Urban spatio-temporal prediction under extreme conditions (e.g., heavy rain) is challenging due to event rarity and dynamics. Existing data-driven approaches that incorporate weather as auxiliary input often rely on coarse-grained descriptors and lack dedicated mechanisms to capture fine-grained spatio-temporal effects. Although recent methods adopt causal techniques to improve out-of-distribution generalization, they typically overlook temporal dynamics or depend on fixed confounder stratification. To address these limitations, we propose WED-Net (Weather-Effect Disentanglement Network), a dual-branch Transformer architecture that separates intrinsic and weather-induced traffic patterns via self- and cross-attention, enhanced with memory banks and fused through adaptive gating. To further promote disentanglement, we introduce a discriminator that explicitly distinguishes weather conditions. Additionally, we design a causal data augmentation strategy that perturbs non-causal parts while preserving causal structures, enabling improved generalization under rare scenarios. Experiments on taxi-flow datasets from three cities demonstrate that WED-Net delivers robust performance under extreme weather conditions, highlighting its potential to support safer mobility, disaster preparedness, and urban resilience in real-world settings. The code is publicly available at https://github.com/HQ-LV/WED-Net.
|
https://arxiv.org/abs/2601.22586
|
Academic Papers
|
svg
|
203469beb213edaf582c108a877ab5e9589cef412d5ff3ed16402d4f4be3c001
|
2026-02-02T00:00:00-05:00
|
An ultra-weak three-field finite element formulation for the biharmonic and extended Fisher--Kolmogorov equations
|
arXiv:2601.22587v1 Announce Type: new Abstract: This paper discusses a so-called ultra-weak three-field formulation of the biharmonic problem where the solution, its gradient, and an additional Lagrange multiplier are the three unknowns. We establish the well-posedness of the problem using the abstract theory for saddle-point problems, and develop a conforming finite element scheme based on Raviart--Thomas discretisations of the two auxiliary variables. The well-posedness of the discrete formulation and the corresponding a priori error estimate are proved using a discrete inf-sup condition. We further extend the analysis to the time-dependent semilinear equation, namely extended Fisher--Kolmogorov equation. We present a few numerical examples to demonstrate the performance of our approach.
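For context, the abstract saddle-point framework invoked for well-posedness takes the following generic form (standard Brezzi theory; the paper's concrete spaces and bilinear forms are not reproduced here):

```latex
% Find (\sigma, u) \in \Sigma \times V such that
\begin{aligned}
a(\sigma,\tau) + b(\tau,u) &= F(\tau) && \forall\, \tau \in \Sigma,\\
b(\sigma,v) &= G(v) && \forall\, v \in V,
\end{aligned}
% which is well-posed provided a(\cdot,\cdot) is coercive on the kernel of b
% and the inf-sup condition holds:
\inf_{v \in V}\, \sup_{\tau \in \Sigma} \frac{b(\tau,v)}{\|\tau\|_{\Sigma}\,\|v\|_{V}} \;\ge\; \beta > 0.
```

The discrete a priori estimate mentioned in the abstract follows once the Raviart--Thomas spaces satisfy a discrete analogue of this inf-sup condition.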
|
https://arxiv.org/abs/2601.22587
|
Academic Papers
|
svg
|
325eec8fe0f2d1cd6afcff10a2675726b4a3ca458a65ba0eac35bb03fb5c2a57
|
2026-02-02T00:00:00-05:00
|
Rethinking LLM-as-a-Judge: Representation-as-a-Judge with Small Language Models via Semantic Capacity Asymmetry
|
arXiv:2601.22588v1 Announce Type: new Abstract: Large language models (LLMs) are widely used as reference-free evaluators via prompting, but this "LLM-as-a-Judge" paradigm is costly, opaque, and sensitive to prompt design. In this work, we investigate whether smaller models can serve as efficient evaluators by leveraging internal representations instead of surface generation. We uncover a consistent empirical pattern: small LMs, despite their weak generative ability, encode rich evaluative signals in their hidden states. This motivates us to propose the Semantic Capacity Asymmetry Hypothesis: evaluation requires significantly less semantic capacity than generation and can be grounded in intermediate representations, suggesting that evaluation does not necessarily need to rely on large-scale generative models but can instead leverage latent features from smaller ones. Our findings motivate a paradigm shift from LLM-as-a-Judge to Representation-as-a-Judge, a decoding-free evaluation strategy that probes internal model structure rather than relying on prompted output. We instantiate this paradigm through INSPECTOR, a probing-based framework that predicts aspect-level evaluation scores from small model representations. Experiments on reasoning benchmarks (GSM8K, MATH, GPQA) show that INSPECTOR substantially outperforms prompting-based small LMs and closely approximates full LLM judges, while offering a more efficient, reliable, and interpretable alternative for scalable evaluation.
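The probing idea above can be illustrated with a minimal linear probe: predict a scalar evaluation score from frozen hidden states via ridge regression. This is purely illustrative, on synthetic data; INSPECTOR's actual probe architecture is not specified in the abstract.

```python
import numpy as np

def fit_linear_probe(H, y, lam=1e-2):
    # Ridge-regression probe on hidden states H (n_samples x d): closed-form
    # solve of (H^T H + lam I) w = H^T y. A decoding-free evaluator in the
    # "Representation-as-a-Judge" spirit; a sketch, not the paper's design.
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ y)

rng = np.random.default_rng(1)
H = rng.normal(size=(64, 8))   # stand-in for small-LM hidden states
w_true = rng.normal(size=8)    # synthetic "evaluative direction"
y = H @ w_true                 # synthetic aspect-level scores
w = fit_linear_probe(H, y)
```

If evaluative signal really is linearly decodable from the hidden states, a probe this cheap recovers it; no decoding pass is ever run, which is where the efficiency claim comes from.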
|
https://arxiv.org/abs/2601.22588
|
Academic Papers
|
svg
|
6b739cb0ad2c4d8b54ebe7ac3f8b9098d16b2932e9fad51b5d9b0daff9cbbe37
|
2026-02-02T00:00:00-05:00
|
FedCARE: Federated Unlearning with Conflict-Aware Projection and Relearning-Resistant Recovery
|
arXiv:2601.22589v1 Announce Type: new Abstract: Federated learning (FL) enables collaborative model training without centralizing raw data, but privacy regulations such as the right to be forgotten require FL systems to remove the influence of previously used training data upon request. Retraining a federated model from scratch is prohibitively expensive, motivating federated unlearning (FU). However, existing FU methods suffer from high unlearning overhead, utility degradation caused by entangled knowledge, and unintended relearning during post-unlearning recovery. In this paper, we propose FedCARE, a unified and low-overhead FU framework that enables conflict-aware unlearning and relearning-resistant recovery. FedCARE leverages gradient ascent for efficient forgetting when target data are locally available and employs data-free model inversion to construct class-level proxies of shared knowledge. Based on these insights, FedCARE integrates a pseudo-sample generator, conflict-aware projected gradient ascent for utility-preserving unlearning, and a recovery strategy that suppresses rollback toward the pre-unlearning model. FedCARE supports client-, instance-, and class-level unlearning with modest overhead. Extensive experiments on multiple datasets and model architectures under both IID and non-IID settings show that FedCARE achieves effective forgetting, improved utility retention, and reduced relearning risk compared to state-of-the-art FU baselines.
|
https://arxiv.org/abs/2601.22589
|
Academic Papers
|
svg
|
ed2c64e89c51b29ea6b02cf3737e68192f28ceade430ce58c7d05ad77d3ea487
|
2026-02-02T00:00:00-05:00
|
Small is Beautiful: A Practical and Efficient Log Parsing Framework
|
arXiv:2601.22590v1 Announce Type: new Abstract: Log parsing is a fundamental step in log analysis, partitioning raw logs into constant templates and dynamic variables. While recent semantic-based parsers leveraging Large Language Models (LLMs) exhibit superior generalizability over traditional syntax-based methods, their effectiveness is heavily contingent on model scale. This dependency leads to significant performance collapse when employing smaller, more resource-efficient LLMs. Such degradation creates a major barrier to real-world adoption, where data privacy requirements and computational constraints necessitate the use of succinct models. To bridge this gap, we propose EFParser, an unsupervised LLM-based log parser designed to enhance the capabilities of smaller models through systematic architectural innovation. EFParser introduces a dual-cache system with an adaptive updating mechanism that distinguishes between novel patterns and variations of existing templates. This allows the parser to merge redundant templates and rectify prior errors, maintaining cache consistency. Furthermore, a dedicated correction module acts as a gatekeeper, validating and refining every LLM-generated template before caching to prevent error injection. Empirical evaluations on public large-scale datasets demonstrate that EFParser outperforms state-of-the-art baselines by an average of 12.5% across all metrics when running on smaller LLMs, even surpassing some baselines utilizing large-scale models. Despite its additional validation steps, EFParser maintains high computational efficiency, offering a robust and practical solution for real-world log analysis deployment.
|
https://arxiv.org/abs/2601.22590
|
Academic Papers
|
svg
|
5390d71686405556ca30b3ae818bc76abace83ece776d7c31b3209853415512c
|
2026-02-02T00:00:00-05:00
|
Heterogeneous Graph Alignment for Joint Reasoning and Interpretability
|
arXiv:2601.22593v1 Announce Type: new Abstract: Multi-graph learning is crucial for extracting meaningful signals from collections of heterogeneous graphs. However, effectively integrating information across graphs with differing topologies, scales, and semantics, often in the absence of shared node identities, remains a significant challenge. We present the Multi-Graph Meta-Transformer (MGMT), a unified, scalable, and interpretable framework for cross-graph learning. MGMT first applies Graph Transformer encoders to each graph, mapping structure and attributes into a shared latent space. It then selects task-relevant supernodes via attention and builds a meta-graph that connects functionally aligned supernodes across graphs using similarity in the latent space. Additional Graph Transformer layers on this meta-graph enable joint reasoning over intra- and inter-graph structure. The meta-graph provides built-in interpretability: supernodes and superedges highlight influential substructures and cross-graph alignments. Evaluating MGMT on both synthetic datasets and real-world neuroscience applications, we show that MGMT consistently outperforms existing state-of-the-art models in graph-level prediction tasks while offering interpretable representations that facilitate scientific discoveries. Our work establishes MGMT as a unified framework for structured multi-graph learning, advancing representation techniques in domains where graph-based data plays a central role.
|
https://arxiv.org/abs/2601.22593
|
Academic Papers
|
svg
|
e17dcb7020a91d37d03545f2369bd700deed10bde0a87a30bc7f038b544d38bc
|
2026-02-02T00:00:00-05:00
|
Language Model Circuits Are Sparse in the Neuron Basis
|
arXiv:2601.22594v1 Announce Type: new Abstract: The high-level concepts that a neural network uses to perform computation need not be aligned to individual neurons (Smolensky, 1986). Language model interpretability research has thus turned to techniques such as \textit{sparse autoencoders} (SAEs) to decompose the neuron basis into more interpretable units of model computation, for tasks such as \textit{circuit tracing}. However, not all neuron-based representations are uninterpretable. For the first time, we empirically show that \textbf{MLP neurons are as sparse a feature basis as SAEs}. We use this finding to develop an end-to-end pipeline for circuit tracing on the MLP neuron basis, which locates causal circuitry on a variety of tasks using gradient-based attribution. On a standard subject-verb agreement benchmark (Marks et al., 2025), a circuit of $\approx 10^2$ MLP neurons is enough to control model behaviour. On the multi-hop city $\to$ state $\to$ capital task from Lindsey et al., 2025, we find a circuit in which small sets of neurons encode specific latent reasoning steps (e.g.~`map city to its state'), and can be steered to change the model's output. This work thus advances automated interpretability of language models without additional training costs.
|
https://arxiv.org/abs/2601.22594
|
Academic Papers
|
svg
|
266b6e201053d3ad7d10fc7dbc61347bd13ce3185a3332a0272f8b746243fec8
|
2026-02-02T00:00:00-05:00
|
Learn More with Less: Uncertainty Consistency Guided Query Selection for RLVR
|
arXiv:2601.22595v1 Announce Type: new Abstract: Large Language Models (LLMs) have recently improved mathematical reasoning through Reinforcement Learning with Verifiable Reward (RLVR). However, existing RLVR algorithms require large query budgets, making annotation costly. We investigate whether fewer but more informative queries can yield similar or superior performance, introducing active learning (AL) into RLVR. We identify that classic AL sampling strategies fail to outperform random selection in this setting, because they select queries by subjective uncertainty alone and ignore objective uncertainty. This work proposes an uncertainty consistency metric to evaluate how well subjective uncertainty aligns with objective uncertainty. In the offline setting, this alignment is measured using the Point-Biserial Correlation Coefficient (PBC). For online training, because of limited sampling and dynamically shifting output distributions, PBC estimation is difficult. Therefore, we introduce a new online variant, computed from normalized advantage and subjective uncertainty. Theoretically, we prove that the online variant is strictly negatively correlated with offline PBC and supports better sample selection. Experiments show our method consistently outperforms random and classic AL baselines, achieving full-dataset performance while training on only 30% of the data, effectively reducing the cost of RLVR for reasoning tasks.
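The offline alignment metric named above is the standard Point-Biserial Correlation; a stdlib sketch on toy data follows. The choice of which binary signal plays the "objective uncertainty" role (e.g., per-query correctness) is our illustrative assumption, not stated in the abstract.

```python
import math

def point_biserial(binary, scores):
    # Textbook Point-Biserial Correlation between a binary outcome and a
    # continuous score (population-variance form). Here the binary variable
    # stands in for objective uncertainty (e.g., rollout correctness) and
    # the scores for the model's subjective uncertainty.
    n = len(scores)
    ones = [s for s, b in zip(scores, binary) if b == 1]
    zeros = [s for s, b in zip(scores, binary) if b == 0]
    mean = sum(scores) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in scores) / n)
    p = len(ones) / n
    m1 = sum(ones) / len(ones)
    m0 = sum(zeros) / len(zeros)
    return (m1 - m0) / std * math.sqrt(p * (1 - p))

# Well-aligned toy case: high subjective uncertainty coincides with b = 1.
r = point_biserial([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1])
```

A PBC near +/-1 means subjective uncertainty is a faithful proxy for the objective signal, which is exactly the consistency property the selection strategy depends on.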
|
https://arxiv.org/abs/2601.22595
|
Academic Papers
|
svg
|
05fd4a9ec6a3182b80776e8cfe1394bf9bd6c7d013a265cb6b25f88b7ca9213b
|
2026-02-02T00:00:00-05:00
|
FOTBCD: A Large-Scale Building Change Detection Benchmark from French Orthophotos and Topographic Data
|
arXiv:2601.22596v1 Announce Type: new Abstract: We introduce FOTBCD, a large-scale building change detection dataset derived from authoritative French orthophotos and topographic building data provided by IGN France. Unlike existing benchmarks that are geographically constrained to single cities or limited regions, FOTBCD spans 28 departments across mainland France, with 25 used for training and three geographically disjoint departments held out for evaluation. The dataset covers diverse urban, suburban, and rural environments at 0.2m/pixel resolution. We publicly release FOTBCD-Binary, a dataset comprising approximately 28,000 before/after image pairs with pixel-wise binary building change masks, each associated with patch-level spatial metadata. The dataset is designed for large-scale benchmarking and evaluation under geographic domain shift, with validation and test samples drawn from held-out departments and manually verified to ensure label quality. In addition, we publicly release FOTBCD-Instances, a publicly available instance-level annotated subset comprising several thousand image pairs, which illustrates the complete annotation schema used in the full instance-level version of FOTBCD. Using a fixed reference baseline, we benchmark FOTBCD-Binary against LEVIR-CD+ and WHU-CD, providing strong empirical evidence that geographic diversity at the dataset level is associated with improved cross-domain generalization in building change detection.
|
https://arxiv.org/abs/2601.22596
|
Academic Papers
|
svg
|
c49917c40261247beeadf1a3a3e2690ed7a18bcca34f0db6a2a3e56840d4d0ef
|
2026-02-02T00:00:00-05:00
|
TimeMachine-bench: A Benchmark for Evaluating Model Capabilities in Repository-Level Migration Tasks
|
arXiv:2601.22597v1 Announce Type: new Abstract: With the advancement of automated software engineering, research focus is increasingly shifting toward practical tasks reflecting the day-to-day work of software engineers. Among these tasks, software migration, a critical process of adapting code to evolving environments, has been largely overlooked. In this study, we introduce TimeMachine-bench, a benchmark designed to evaluate software migration in real-world Python projects. Our benchmark consists of GitHub repositories whose tests begin to fail in response to dependency updates. The construction process is fully automated, enabling live updates of the benchmark. Furthermore, we curated a human-verified subset to ensure problem solvability. We evaluated agent-based baselines built on top of 11 models, including both strong open-weight and state-of-the-art LLMs on this verified subset. Our results indicated that, while LLMs show some promise for migration tasks, they continue to face substantial reliability challenges, including spurious solutions that exploit low test coverage and unnecessary edits stemming from suboptimal tool-use strategies. Our dataset and implementation are available at https://github.com/tohoku-nlp/timemachine-bench.
|
https://arxiv.org/abs/2601.22597
|
Academic Papers
|
svg
|
88ba5bb49a90372c780c45b9f750784885bd4676b89748453c40f7464b08b6dc
|
2026-02-02T00:00:00-05:00
|
A Semantically Consistent Dataset for Data-Efficient Query-Based Universal Sound Separation
|
arXiv:2601.22599v1 Announce Type: new Abstract: Query-based universal sound separation is fundamental to intelligent auditory systems, aiming to isolate specific sources from mixtures. Despite recent advances, existing methods continue to suffer from residual interference in complex acoustic scenes. This performance limitation stems largely from a data bottleneck: in-the-wild datasets contain weak labels and severe co-occurrence of events. These flaws induce models to learn spurious correlations between background noise and target categories instead of robust acoustic features. To address this, we propose an automated pipeline that eliminates co-occurrence of events by mining high-purity single-event segments from in-the-wild datasets via a semantically consistent synthesis protocol. Utilizing this pipeline, we constructed Hive, a high-quality synthetic dataset comprising 2.4k hours of raw audio. Experimental results demonstrate that, compared with the state-of-the-art model SAM-Audio which was trained on a huge dataset $\sim$500 times larger than Hive, certain open-source models trained on Hive achieve competitive separation accuracy and perceptual quality. Moreover, these models exhibited remarkable zero-shot generalization on out-of-distribution evaluation benchmarks. These findings highlight that prioritizing purity of supervised signals enables significant data efficiency, offering a new paradigm for training robust auditory foundation models with reduced computational costs. Code and dataset are available at https://shandaai.github.io/Hive.
|
https://arxiv.org/abs/2601.22599
|
Academic Papers
|
svg
|
619978c08758650b5ee33ce4bcfc7878b712af9a56ec98c7539fbad5c3a474ef
|
2026-02-02T00:00:00-05:00
|
Lethe: Adapter-Augmented Dual-Stream Update for Persistent Knowledge Erasure in Federated Unlearning
|
arXiv:2601.22601v1 Announce Type: new Abstract: Federated unlearning (FU) aims to erase designated client-level, class-level, or sample-level knowledge from a global model. Existing studies commonly assume that the collaboration ends with the unlearning operation, overlooking the follow-up situation where the federated training continues over the remaining data. We identify a critical failure mode, termed knowledge resurfacing, by revealing that continued training can re-activate unlearned knowledge and cause the removed influence to resurface in the global model. To address this, we propose Lethe, a novel federated unlearning method that de-correlates knowledge to be unlearned from knowledge to be retained, ensuring persistent erasure during continued training. Lethe follows a Reshape--Rectify--Restore pipeline: a temporary adapter is first trained with gradient ascent on the unlearning data to obtain magnified updates, which are then used as corrective signals to drive layer-wise rectification on the remaining updates in two streams. Finally, the adapter is removed and a short recovery stage is performed on the retained data. Our experiments show that Lethe supports unlearning in the federated system at all levels in a unified manner and maintains superior persistence (Resurfacing Rate <1% in most cases) even after numerous rounds of follow-up training.
|
https://arxiv.org/abs/2601.22601
|
Academic Papers
|
svg
|
1c97effe479ac0874cc769cfda0d596fda513c1b2159a3fc1ea031e34e99077e
|
2026-02-02T00:00:00-05:00
|
An inertial minimal-deformation-rate framework for shape optimization
|
arXiv:2601.22605v1 Announce Type: new Abstract: We propose a robust numerical framework for PDE-constrained shape optimization and Willmore-driven surface hole filling. To address two central challenges -- slow progress in flat energy landscapes, which can trigger premature stagnation at suboptimal configurations, and mesh deterioration during geometric evolution -- we couple a second-order inertial flow with a minimal-deformation-rate (MDR) mesh motion strategy. This coupling accelerates convergence while preserving mesh quality and thus avoids remeshing. To further enhance robustness for non-smooth or non-convex initial geometries, we incorporate surface-diffusion regularization within the Barrett--Garcke--N\"urnberg (BGN) framework. Moreover, we extend the inertial MDR methodology to Willmore-type surface hole filling, enabling high-order smooth reconstructions even from incompatible initial data. Numerical experiments demonstrate markedly faster convergence to lower original objective values, together with consistently superior mesh preservation throughout the evolution.
|
https://arxiv.org/abs/2601.22605
|
Academic Papers
|
svg
|
3908026e7f04c23b166defb402fa73b1b89619dfb029e37fceb72e7734e92aba
|
2026-02-02T00:00:00-05:00
|
From Self-Evolving Synthetic Data to Verifiable-Reward RL: Post-Training Multi-turn Interactive Tool-Using Agents
|
arXiv:2601.22607v1 Announce Type: new Abstract: Interactive tool-using agents must solve real-world tasks via multi-turn interaction with both humans and external environments, requiring dialogue state tracking, multi-step tool execution, while following complex instructions. Post-training such agents is challenging because synthesis for high-quality multi-turn tool-use data is difficult to scale, and reinforcement learning (RL) could face noisy signals caused by user simulation, leading to degraded training efficiency. We propose a unified framework that combines a self-evolving data agent with verifier-based RL. Our system, EigenData, is a hierarchical multi-agent engine that synthesizes tool-grounded dialogues together with executable per-instance checkers, and improves generation reliability via closed-loop self-evolving process that updates prompts and workflow. Building on the synthetic data, we develop an RL recipe that first fine-tunes the user model and then applies GRPO-style training with trajectory-level group-relative advantages and dynamic filtering, yielding consistent improvements beyond SFT. Evaluated on tau^2-bench, our best model reaches 73.0% pass^1 on Airline and 98.3% pass^1 on Telecom, matching or exceeding frontier models. Overall, our results suggest a scalable pathway for bootstrapping complex tool-using behaviors without expensive human annotation.
|
https://arxiv.org/abs/2601.22607
|
Academic Papers
|
svg
|
728a32b1382216d1c12aaeafb289daaafa7917435736e6a562298998db0e4f20
|
2026-02-02T00:00:00-05:00
|
Computing Dominating Sets in Disk Graphs with Centers in Convex Position
|
arXiv:2601.22609v1 Announce Type: new Abstract: Given a set $P$ of $n$ points in the plane and a collection of disks centered at these points, the disk graph $G(P)$ has vertex set $P$, with an edge between two vertices if their corresponding disks intersect. We study the dominating set problem in $G(P)$ under the special case where the points of $P$ are in convex position. The problem is NP-hard in general disk graphs. Under the convex position assumption, however, we present the first polynomial-time algorithm for the problem. Specifically, we design an $O(k^2 n \log^2 n)$-time algorithm, where $k$ denotes the size of a minimum dominating set. For the weighted version, in which each disk has an associated weight and the goal is to compute a dominating set of minimum total weight, we obtain an $O(n^5 \log^2 n)$-time algorithm.
|
https://arxiv.org/abs/2601.22609
|
Academic Papers
|
svg
|
1b0af543ad197e3a5c0a2fd7385db6777571514e5793b80a5e92468bd529673f
|
2026-02-02T00:00:00-05:00
|
Local-Global Multimodal Contrastive Learning for Molecular Property Prediction
|
arXiv:2601.22610v1 Announce Type: new Abstract: Accurate molecular property prediction requires integrating complementary information from molecular structure and chemical semantics. In this work, we propose LGM-CL, a local-global multimodal contrastive learning framework that jointly models molecular graphs and textual representations derived from SMILES and chemistry-aware augmented texts. Local functional group information and global molecular topology are captured using AttentiveFP and Graph Transformer encoders, respectively, and aligned through self-supervised contrastive learning. In addition, chemically enriched textual descriptions are contrasted with original SMILES to incorporate physicochemical semantics in a task-agnostic manner. During fine-tuning, molecular fingerprints are further integrated via Dual Cross-attention multimodal fusion. Extensive experiments on MoleculeNet benchmarks demonstrate that LGM-CL achieves consistent and competitive performance across both classification and regression tasks, validating the effectiveness of unified local-global and multimodal representation learning.
|
https://arxiv.org/abs/2601.22610
|
Academic Papers
|
svg
|
ca9f192ce15f029edbd3f1c82a9f4ff0c66e3bc58086fb2f6afddf2c559b34ff
|
2026-02-02T00:00:00-05:00
|
Stabilizing Transformer Training Through Consensus
|
arXiv:2601.22614v1 Announce Type: new Abstract: Standard attention-based transformers are known to exhibit instability under learning rate overspecification during training, particularly at high learning rates. While various methods have been proposed to improve resilience to such overspecification by modifying the optimization procedure, fundamental architectural innovations to this end remain underexplored. In this work, we illustrate that the consensus mechanism, a drop-in replacement for attention, stabilizes transformer training across a wider effective range of learning rates. We formulate consensus as a graphical model and provide extensive empirical analysis demonstrating improved stability across learning rate sweeps on text, DNA, and protein modalities. We further propose a hybrid consensus-attention framework that preserves performance while improving stability. We provide theoretical analysis characterizing the properties of consensus.
|
https://arxiv.org/abs/2601.22614
|
Academic Papers
|
svg
|
afc166cd569b749be2a219888047806f51ec3b36fe00fdc267ebd097214275cc
|
2026-02-02T00:00:00-05:00
|
TTSA3R: Training-Free Temporal-Spatial Adaptive Persistent State for Streaming 3D Reconstruction
|
arXiv:2601.22615v1 Announce Type: new Abstract: Streaming recurrent models enable efficient 3D reconstruction by maintaining persistent state representations. However, they suffer from catastrophic memory forgetting over long sequences due to the need to balance historical information with new observations. Recent methods alleviate this by deriving adaptive signals from an attention perspective, but they operate on single dimensions without considering temporal and spatial consistency. To this end, we propose a training-free framework termed TTSA3R that leverages both temporal state evolution and spatial observation quality for adaptive state updates in 3D reconstruction. In particular, we devise a Temporal Adaptive Update Module that regulates update magnitude by analyzing temporal state evolution patterns. Then, a Spatial Contextual Update Module is introduced to localize spatial regions that require updates through observation-state alignment and scene dynamics. These complementary signals are finally fused to determine the state updating strategy. Extensive experiments demonstrate the effectiveness of TTSA3R on diverse 3D tasks. Moreover, our method exhibits only a 15% error increase on extended sequences, compared to over 200% degradation in baseline models, significantly improving long-term reconstruction stability. Our code will be available soon.
|
https://arxiv.org/abs/2601.22615
|
Academic Papers
|
svg
|
441c0be6f8b9155a58c5ce4be23907112567732b8000bd58eedd96b58c66f7bf
|
2026-02-02T00:00:00-05:00
|
UniGeo: A Unified 3D Indoor Object Detection Framework Integrating Geometry-Aware Learning and Dynamic Channel Gating
|
arXiv:2601.22616v1 Announce Type: new Abstract: The growing adoption of robotics and augmented reality in real-world applications has driven considerable research interest in 3D object detection based on point clouds. While previous methods address unified training across multiple datasets, they fail to model geometric relationships in sparse point cloud scenes and ignore the feature distribution in significant areas, which ultimately restricts their performance. To deal with this issue, a unified 3D indoor detection framework, called UniGeo, is proposed. To model geometric relations in scenes, we first propose a geometry-aware learning module that establishes a learnable mapping from spatial relationships to feature weights, which enables explicit geometric feature enhancement. Then, to further enhance point cloud feature representation, we propose a dynamic channel gating mechanism that leverages learnable channel-wise weighting. This mechanism adaptively optimizes features generated by the sparse 3D U-Net network, significantly enhancing key geometric information. Extensive experiments on six different indoor scene datasets clearly validate the superior performance of our method.
|
https://arxiv.org/abs/2601.22616
|
Academic Papers
|
svg
|
b81eb61dae4d9f406f60cc80a38e55ecb6034c2851b9cae68cb58885ab08c0c0
|
2026-02-02T00:00:00-05:00
|
EntroCut: Entropy-Guided Adaptive Truncation for Efficient Chain-of-Thought Reasoning in Small-scale Large Reasoning Models
|
arXiv:2601.22617v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) excel at complex reasoning tasks through extended chain-of-thought generation, but their reliance on lengthy intermediate steps incurs substantial computational cost. We find that the entropy of the model's output distribution in early reasoning steps reliably distinguishes correct from incorrect reasoning. Motivated by this observation, we propose EntroCut, a training-free method that dynamically truncates reasoning by identifying high-confidence states where reasoning can be safely terminated. To comprehensively evaluate the trade-off between efficiency and accuracy, we introduce the Efficiency-Performance Ratio (EPR), a unified metric that quantifies relative token savings per unit accuracy loss. Experiments on four benchmarks show that EntroCut reduces token usage by up to 40% with minimal accuracy sacrifice, achieving superior efficiency-performance trade-offs compared with existing training-free methods. These results demonstrate that entropy-guided dynamic truncation provides a practical approach to mitigate the inefficiency of LRMs.
|
https://arxiv.org/abs/2601.22617
|
Academic Papers
|
svg
|
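The EntroCut abstract above rests on two concrete quantities: the entropy of the model's early-step output distribution as a truncation signal, and an EPR metric relating token savings to accuracy loss. A minimal sketch of both follows; the threshold value, the EPR formula, and all function names are illustrative assumptions, not the paper's calibrated procedure.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of the model's next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def should_truncate(step_probs, threshold=0.5):
    """Stop the chain-of-thought once the model is confident enough:
    low entropy in an early reasoning step signals a safely terminable state."""
    return token_entropy(step_probs) < threshold

def efficiency_performance_ratio(token_savings, accuracy_loss, eps=1e-6):
    """EPR-style metric: relative token savings per unit accuracy loss
    (the paper's exact definition may differ)."""
    return token_savings / (accuracy_loss + eps)

confident = [0.97, 0.01, 0.01, 0.01]   # peaked -> low entropy -> truncate
uncertain = [0.25, 0.25, 0.25, 0.25]   # uniform -> entropy = ln(4) ~ 1.386
print(should_truncate(confident), should_truncate(uncertain))  # -> True False
```

In practice the probabilities would come from the softmax over the LRM's logits at each reasoning step, with the threshold tuned on a validation split.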
420991a900dde42c44360e1aa4442779d995f3af6d3ac99ef8d3d28a3b81568a
|
2026-02-02T00:00:00-05:00
|
Layer-wise Swapping for Generalizable Multilingual Safety
|
arXiv:2601.22620v1 Announce Type: new Abstract: Despite the rapid advancements of Large Language Models (LLMs), safety risks remain a critical challenge for low-resource languages. Existing safety datasets are predominantly English-centric, limiting progress in multilingual safety alignment. As a result, low-resource expert models, finetuned on their respective instruction datasets, tend to exhibit higher unsafety rates compared to their high-resource counterparts. In this work, we propose a safety-aware layer-swapping method that transfers safety alignment from an English safety expert to low-resource language experts without additional training. To further enhance transferability, our method adaptively selects or blends modules based on their degree of specialization. Our approach preserves performance on general language understanding tasks while enhancing safety in the target languages. Experimental results show that the proposed method achieves performance comparable to the language expert on general benchmarks such as MMMLU, BELEBELE, and MGSM, while producing more aligned and less harmful responses on the MultiJail safety benchmark.
|
https://arxiv.org/abs/2601.22620
|
Academic Papers
|
svg
|
4ca5ba8233ba792a2214947e04edd4b837322116cc3b7d667f6613103a4eceb4
|
2026-02-02T00:00:00-05:00
|
Ethical Risks of Large Language Models in Medical Consultation: An Assessment Based on Reproductive Ethics
|
arXiv:2601.22621v1 Announce Type: new Abstract: Background: As large language models (LLMs) are increasingly used in healthcare and medical consultation settings, a growing concern is whether these models can respond to medical inquiries in a manner that is ethically compliant--particularly in accordance with local ethical standards. To address the pressing need for comprehensive research on reliability and safety, this study systematically evaluates LLM performance in answering questions related to reproductive ethics, specifically assessing their alignment with Chinese ethical regulations. Methods: We evaluated eight prominent LLMs (e.g., GPT-4, Claude-3.7) on a custom test set of 986 questions (906 subjective, 80 objective) derived from 168 articles within Chinese reproductive ethics regulations. Subjective responses were evaluated using a novel six-dimensional scoring rubric assessing Safety (Normative Compliance, Guidance Safety) and Quality of the Answer (Problem Identification, Citation, Suggestion, Empathy). Results: Significant safety issues were prevalent, with risk rates for unsafe or misleading advice reaching 29.91%. A systemic weakness was observed across all models: universally poor performance in citing normative sources and expressing empathy. We also identified instances of anomalous moral reasoning, including logical self-contradictions and responses violating fundamental moral intuitions. Conclusions: Current LLMs are unreliable and unsafe for autonomous reproductive ethics counseling. Despite knowledge recall, they exhibit critical deficiencies in safety, logical consistency, and essential humanistic skills. These findings serve as a critical cautionary note against premature deployment, urging future development to prioritize robust reasoning, regulatory justification, and empathy.
|
https://arxiv.org/abs/2601.22621
|
Academic Papers
|
svg
|
438a67a290c38cb32413dd577798d0010efe847cbc84382ef93f51511a5f0514
|
2026-02-02T00:00:00-05:00
|
SYMPHONY: Synergistic Multi-agent Planning with Heterogeneous Language Model Assembly
|
arXiv:2601.22623v1 Announce Type: new Abstract: Recent advancements have increasingly focused on leveraging large language models (LLMs) to construct autonomous agents for complex problem-solving tasks. However, existing approaches predominantly employ a single-agent framework to generate search branches and estimate rewards during Monte Carlo Tree Search (MCTS) planning. This single-agent paradigm inherently limits exploration capabilities, often resulting in insufficient diversity among generated branches and suboptimal planning performance. To overcome these limitations, we propose Synergistic Multi-agent Planning with Heterogeneous Language Model Assembly (SYMPHONY), a novel multi-agent planning framework that integrates a pool of heterogeneous language model-based agents. By leveraging diverse reasoning patterns across agents, SYMPHONY enhances rollout diversity and facilitates more effective exploration. Empirical results across multiple benchmark tasks show that SYMPHONY achieves strong performance even when instantiated with open-source LLMs deployable on consumer-grade hardware. When enhanced with cloud-based LLMs accessible via API, SYMPHONY demonstrates further improvements, outperforming existing state-of-the-art baselines and underscoring the effectiveness of heterogeneous multi-agent coordination in planning tasks.
|
https://arxiv.org/abs/2601.22623
|
Academic Papers
|
svg
|
c78a5b12f11a93a375b35bf77708a849db6094f4718cb06d4b32e7d3131ffb2c
|
2026-02-02T00:00:00-05:00
|
COBRA++: Enhanced COBRA Optimizer with Augmented Surrogate Pool and Reinforced Surrogate Selection
|
arXiv:2601.22624v1 Announce Type: new Abstract: Real-world optimization problems pose significant challenges to optimization algorithms, such as expensive evaluations and complex constraint conditions. The COBRA optimizer (including its up-to-date variants) is a representative and effective tool for addressing such problems: it introduces 1) an RBF surrogate to reduce online evaluations and 2) a bi-stage optimization process that alternates between searching for feasible solutions and searching for optimal solutions. Though promising, its design space, i.e., the surrogate model pool and the selection standard, is still manually decided by human experts, resulting in labor-intensive fine-tuning for novel tasks. In this paper, we propose a learning-based adaptive strategy (COBRA++) that enhances COBRA in two aspects: 1) an augmented surrogate pool that breaks the tie to RBF-like surrogates and hence enhances model diversity and approximation capability; 2) a reinforcement learning-based online model selection policy that enables an efficient and accurate optimization process. The model selection policy is trained to maximize the overall performance of COBRA++ across a distribution of constrained optimization problems with diverse properties. We have conducted multi-dimensional validation experiments and demonstrate that COBRA++ achieves substantial performance improvements over vanilla COBRA and its adaptive variant. Ablation studies are provided to support the correctness of each design component in COBRA++.
|
https://arxiv.org/abs/2601.22624
|
Academic Papers
|
svg
|
8efef3f5fca78824258db0fd565252c33f6ebd28404e6b77ca0c1d5a4efb3439
|
2026-02-02T00:00:00-05:00
|
Elderly HealthMag: Systematic Building and Calibrating a Tool for Identifying and Evaluating Senior User Digital Health Software
|
arXiv:2601.22627v1 Announce Type: new Abstract: Digital health (DH) software is increasingly deployed to populations where many end users live with one or more health conditions. Yet, DH software development teams frequently operate using implicit, incorrect assumptions about these users, resulting in products that under-serve the specific requirements imposed by their age and health conditions. Consequently, while software may meet clinical objectives on paper, it often fails to be inclusive during actual user interaction. To address this, we propose HealthMag, a tool inspired by GenderMag and designed to help better elicit, model, and evaluate requirements for digital health software. We developed HealthMag through systematic mapping and calibration following the InclusiveMag framework. Furthermore, we integrated this with a calibrated version of an existing AgeMag method to create a dual-lens approach, Elderly HealthMag, designed to aid the requirements, design, and evaluation of mHealth software for senior end users. We demonstrate the application and utility of Elderly HealthMag via cognitive walkthroughs that identify inclusivity biases in current senior user-oriented digital health applications.
|
https://arxiv.org/abs/2601.22627
|
Academic Papers
|
svg
|
2ee68a54750e761798db056853e11819d24b0edbc1c964cc3786d0ba8cfa186e
|
2026-02-02T00:00:00-05:00
|
TTCS: Test-Time Curriculum Synthesis for Self-Evolving
|
arXiv:2601.22628v1 Announce Type: new Abstract: Test-Time Training offers a promising way to improve the reasoning ability of large language models (LLMs) by adapting the model using only the test questions. However, existing methods struggle with difficult reasoning problems for two reasons: raw test questions are often too difficult to yield high-quality pseudo-labels, and the limited size of test sets makes continuous online updates prone to instability. To address these limitations, we propose TTCS, a co-evolving test-time training framework. Specifically, TTCS initializes two policies from the same pretrained model: a question synthesizer and a reasoning solver. These policies evolve through iterative optimization: the synthesizer generates progressively challenging question variants conditioned on the test questions, creating a structured curriculum tailored to the solver's current capability, while the solver updates itself using self-consistency rewards computed from multiple sampled responses on both original test and synthetic questions. Crucially, the solver's feedback guides the synthesizer to generate questions aligned with the model's current capability, and the generated question variants in turn stabilize the solver's test-time training. Experiments show that TTCS consistently strengthens the reasoning ability on challenging mathematical benchmarks and transfers to general-domain tasks across different LLM backbones, highlighting a scalable path towards dynamically constructing test-time curricula for self-evolving. Our code and implementation details are available at https://github.com/XMUDeepLIT/TTCS.
|
https://arxiv.org/abs/2601.22628
|
Academic Papers
|
svg
|
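The TTCS abstract above updates the solver with "self-consistency rewards computed from multiple sampled responses". A minimal sketch of such a reward, majority-vote agreement over sampled answers, is shown below; the function name and the specific reward shape are illustrative assumptions, since the abstract does not spell out the exact formula.

```python
from collections import Counter

def self_consistency_reward(sampled_answers):
    """Return the majority answer and the fraction of samples agreeing with it.

    With no gold label available at test time, agreement with the majority
    answer serves as a pseudo-reward for updating the solver.
    """
    counts = Counter(sampled_answers)
    majority_answer, majority_count = counts.most_common(1)[0]
    return majority_answer, majority_count / len(sampled_answers)

answer, reward = self_consistency_reward(["42", "42", "41", "42"])
print(answer, reward)  # -> 42 0.75
```

A high reward indicates the solver converges on one answer across samples; questions with low agreement are exactly where a synthesized, easier curriculum variant can help.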
98b6add2f69a50ec7c54487d80b25b65dbbc597f04cb61f82d16e3aedfd949e4
|
2026-02-02T00:00:00-05:00
|
Time-Annealed Perturbation Sampling: Diverse Generation for Diffusion Language Models
|
arXiv:2601.22629v1 Announce Type: new Abstract: Diffusion language models (Diffusion-LMs) introduce an explicit temporal dimension into text generation, yet how this structure can be leveraged to control generation diversity for exploring multiple valid semantic or reasoning paths remains underexplored. In this paper, we show that Diffusion-LMs, like diffusion models in image generation, exhibit a temporal division of labor: early denoising steps largely determine the global semantic structure, while later steps focus on local lexical refinement. Building on this insight, we propose Time-Annealed Perturbation Sampling (TAPS), a training-free inference strategy that encourages semantic branching early in the diffusion process while progressively reducing perturbations to preserve fluency and instruction adherence. TAPS is compatible with both non-autoregressive and semi-autoregressive Diffusion backbones, demonstrated on LLaDA and TraDo in our paper, and consistently improves output diversity across creative writing and reasoning benchmarks without compromising generation quality.
|
https://arxiv.org/abs/2601.22629
|
Academic Papers
|
svg
|
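The TAPS abstract above injects large perturbations early in denoising to encourage semantic branching, then anneals them away so late steps preserve fluency. A minimal sketch of that schedule follows; the linear annealing law, Gaussian noise model, and names are illustrative assumptions, not the paper's exact formulation.

```python
import random

def perturbation_scale(step, total_steps, sigma0=1.0):
    """Linearly anneal the perturbation magnitude over the diffusion steps:
    full strength at step 0, zero at the final step."""
    return sigma0 * (1.0 - step / total_steps)

def perturb_logits(logits, step, total_steps, rng=random):
    """Add time-annealed Gaussian noise to the denoiser's token logits."""
    sigma = perturbation_scale(step, total_steps)
    return [x + rng.gauss(0.0, sigma) for x in logits]

# Early steps get full-strength noise; the final step is left untouched.
print(perturbation_scale(0, 10), perturbation_scale(10, 10))  # -> 1.0 0.0
```

Any monotonically decreasing schedule (e.g. cosine or exponential decay) fits the same pattern; what matters is that perturbations vanish before the lexical-refinement phase.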
59aa034762ef660ccc81526e58aeb92c1e4c4d841afceb4730f3a42193975470
|
2026-02-02T00:00:00-05:00
|
LINA: Linear Autoregressive Image Generative Models with Continuous Tokens
|
arXiv:2601.22630v1 Announce Type: new Abstract: Autoregressive models with continuous tokens form a promising paradigm for visual generation, especially for text-to-image (T2I) synthesis, but they suffer from high computational cost. We study how to design compute-efficient linear attention within this framework. Specifically, we conduct a systematic empirical analysis of scaling behavior with respect to parameter counts under different design choices, focusing on (1) normalization paradigms in linear attention (division-based vs. subtraction-based) and (2) depthwise convolution for locality augmentation. Our results show that although subtraction-based normalization is effective for image classification, division-based normalization scales better for linear generative transformers. In addition, incorporating convolution for locality modeling plays a crucial role in autoregressive generation, consistent with findings in diffusion models. We further extend gating mechanisms, commonly used in causal linear attention, to the bidirectional setting and propose a KV gate. By introducing data-independent learnable parameters to the key and value states, the KV gate assigns token-wise memory weights, enabling flexible memory management similar to forget gates in language models. Based on these findings, we present LINA, a simple and compute-efficient T2I model built entirely on linear attention, capable of generating high-fidelity 1024x1024 images from user instructions. LINA achieves competitive performance on both class-conditional and T2I benchmarks, obtaining 2.18 FID on ImageNet (about 1.4B parameters) and 0.74 on GenEval (about 1.5B parameters). A single linear attention module reduces FLOPs by about 61 percent compared to softmax attention. Code and models are available at: https://github.com/techmonsterwang/LINA.
|
https://arxiv.org/abs/2601.22630
|
Academic Papers
|
svg
|
9250228494596f65205801608afd79ac1cb09776b219e1f51731a2f5e6df67b9
|
2026-02-02T00:00:00-05:00
|
PEFT-MuTS: A Multivariate Parameter-Efficient Fine-Tuning Framework for Remaining Useful Life Prediction based on Cross-domain Time Series Representation Model
|
arXiv:2601.22631v1 Announce Type: new Abstract: The application of data-driven remaining useful life (RUL) prediction has long been constrained by the limited availability of degradation data. Mainstream solutions such as domain adaptation and meta-learning still rely on large amounts of historical degradation data from equipment that is identical or similar to the target, which imposes significant limitations in practical applications. This study presents PEFT-MuTS, a Parameter-Efficient Fine-Tuning framework for few-shot RUL prediction, built on cross-domain pre-trained time-series representation models. Contrary to the widely held view that knowledge transfer in RUL prediction can only occur between similar devices, we demonstrate that substantial benefits can be achieved through a pre-training process on large-scale cross-domain time series datasets. An independent feature tuning network and a meta-variable-based low-rank multivariate fusion mechanism are developed to enable the pre-trained univariate time-series representation backbone to fully exploit the multivariate relationships in degradation data for the downstream RUL prediction task. Additionally, we introduce a zero-initialized regressor that stabilizes the fine-tuning process under few-shot conditions. Experiments on aero-engine and industrial bearing datasets demonstrate that our method achieves effective RUL prediction even when less than 1% of the target equipment's samples are used. Meanwhile, it substantially outperforms conventional supervised and few-shot approaches while markedly reducing the data required to achieve high predictive accuracy. Our code is available at https://github.com/fuen1590/PEFT-MuTS.
|
https://arxiv.org/abs/2601.22631
|
Academic Papers
|
svg
|
29c2269e7efc97fdc3248fb2b62361ad393e2557bac027035a9c0948784f79ed
|
2026-02-02T00:00:00-05:00
|
DART-ing Through the Drift: Dynamic Tracing of Knowledge Neurons for Adaptive Inference-Time Pruning
|
arXiv:2601.22632v1 Announce Type: new Abstract: Large Language Models (LLMs) exhibit substantial parameter redundancy, particularly in Feed-Forward Networks (FFNs). Existing pruning methods suffer from two primary limitations. First, reliance on dataset-specific calibration introduces significant data dependency and computational overhead. Second, being predominantly static, they fail to account for the evolving subset of knowledge neurons in LLMs during autoregressive generation as the context evolves. To address this, we introduce DART (Dynamic Attention-Guided Runtime Tracing), a lightweight, training-free method that performs on-the-fly context-based pruning. DART monitors shifts in attention score distributions to infer context changes, dynamically updating neuron-level masks to retain salient parameters. Across ten benchmarks, DART outperforms the prior dynamic baseline, achieving accuracy gains of up to 14.5% on LLAMA-3.1-8B at 70% FFN sparsity. Furthermore, DART achieves up to 3x better ROUGE-L scores than static-mask pruning on summarization tasks, with performance comparable to the original dense models. We conclusively demonstrate that the proposed framework effectively adapts to diverse semantic contexts and preserves model capabilities across both general and domain-specific tasks, while using less than 10 MB of additional memory for LLAMA-3.1-8B (16 GB) with 0.1% FLOPs overhead. The code is available at https://github.com/seeder-research/DART.
|
https://arxiv.org/abs/2601.22632
|
Academic Papers
|
svg
|
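The DART abstract above hinges on detecting context changes from shifts in attention score distributions. One way to operationalize that is a divergence test between consecutive attention distributions, refreshing the neuron-level mask only when the drift is large. The Jensen-Shannon divergence, the threshold, and the function names below are illustrative assumptions, not the paper's actual tracing mechanism.

```python
import math

def _kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Symmetric, bounded divergence between attention distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

def context_shifted(prev_attn, curr_attn, threshold=0.1):
    """Refresh the dynamic pruning mask only when attention drifts enough."""
    return js_divergence(prev_attn, curr_attn) > threshold

stable = context_shifted([0.5, 0.3, 0.2], [0.48, 0.32, 0.2])
drift = context_shifted([0.5, 0.3, 0.2], [0.05, 0.15, 0.8])
print(stable, drift)  # -> False True
```

Gating mask updates on detected drift, rather than recomputing every token, is what keeps the runtime memory and FLOPs overhead small.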
fe21255fd20e2c8e4f55e527a410b56bf95a4252f86a4aa6429b34db194734f4
|
2026-02-02T00:00:00-05:00
|
MCP-Diag: A Deterministic, Protocol-Driven Architecture for AI-Native Network Diagnostics
|
arXiv:2601.22633v1 Announce Type: new Abstract: The integration of Large Language Models (LLMs) into network operations (AIOps) is hindered by two fundamental challenges: the stochastic grounding problem, where LLMs struggle to reliably parse unstructured, vendor-specific CLI output, and the security gap of granting autonomous agents shell access. This paper introduces MCP-Diag, a hybrid neuro-symbolic architecture built upon the Model Context Protocol (MCP). We propose a deterministic translation layer that converts raw stdout from canonical utilities (dig, ping, traceroute) into rigorous JSON schemas before AI ingestion. We further introduce a mandatory "Elicitation Loop" that enforces Human-in-the-Loop (HITL) authorization at the protocol level. Our preliminary evaluation demonstrates that MCP-Diag achieves 100% entity extraction accuracy with less than 0.9% execution latency overhead and a 3.7x increase in context token usage.
|
https://arxiv.org/abs/2601.22633
|
Academic Papers
|
svg
|
2025538d79beb826f239eae4dfa77e17a584f7c77fc0b16bc433b29f98b38c3f
|
2026-02-02T00:00:00-05:00
|
What can Computer Vision learn from Ranganathan?
|
arXiv:2601.22634v1 Announce Type: new Abstract: The Semantic Gap Problem (SGP) in Computer Vision (CV) arises from the misalignment between visual and lexical semantics leading to flawed CV dataset design and CV benchmarks. This paper proposes that classification principles of S.R. Ranganathan can offer a principled starting point to address SGP and design high-quality CV datasets. We elucidate how these principles, suitably adapted, underpin the vTelos CV annotation methodology. The paper also briefly presents experimental evidence showing improvements in CV annotation and accuracy, thereby, validating vTelos.
|
https://arxiv.org/abs/2601.22634
|
Academic Papers
|
svg
|
840875c75703d150406f3382ad2ea7ab4a2cbbe5a5b8d456936590feb11d32e2
|
2026-02-02T00:00:00-05:00
|
Statistical Estimation of Adversarial Risk in Large Language Models under Best-of-N Sampling
|
arXiv:2601.22636v1 Announce Type: new Abstract: Large Language Models (LLMs) are typically evaluated for safety under single-shot or low-budget adversarial prompting, which underestimates real-world risk. In practice, attackers can exploit large-scale parallel sampling to repeatedly probe a model until a harmful response is produced. While recent work shows that attack success increases with repeated sampling, principled methods for predicting large-scale adversarial risk remain limited. We propose a scaling-aware Best-of-N estimation of risk, SABER, for modeling jailbreak vulnerability under Best-of-N sampling. We model sample-level success probabilities using a Beta distribution, the conjugate prior of the Bernoulli distribution, and derive an analytic scaling law that enables reliable extrapolation of large-N attack success rates from small-budget measurements. Using only n=100 samples, our anchored estimator predicts ASR@1000 with a mean absolute error of 1.66, compared to 12.04 for the baseline, which is an 86.2% reduction in estimation error. Our results reveal heterogeneous risk scaling profiles and show that models appearing robust under standard evaluation can experience rapid nonlinear risk amplification under parallel adversarial pressure. This work provides a low-cost, scalable methodology for realistic LLM safety assessment. We will release our code and evaluation scripts upon publication to support future research.
|
https://arxiv.org/abs/2601.22636
|
Academic Papers
|
svg
|
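The SABER abstract above places a Beta prior on the per-sample success probability p, which gives the Best-of-N attack success rate a closed form: ASR@N = 1 - E[(1-p)^N] = 1 - B(alpha, beta+N)/B(alpha, beta), by Beta-Bernoulli conjugacy. A minimal sketch follows; the method-of-moments fit and all names here are illustrative assumptions, not the paper's anchored estimator.

```python
from math import lgamma, exp

def log_beta(a, b):
    """Log of the Beta function B(a, b), via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def asr_at_n(alpha, beta, n):
    """ASR@N = 1 - E[(1-p)^N] with p ~ Beta(alpha, beta).

    By conjugacy, E[(1-p)^N] = B(alpha, beta + N) / B(alpha, beta).
    """
    return 1.0 - exp(log_beta(alpha, beta + n) - log_beta(alpha, beta))

def fit_beta_moments(success_rates):
    """Method-of-moments Beta fit to per-prompt small-budget success rates
    (assumes the sample variance is positive and below m*(1-m))."""
    m = sum(success_rates) / len(success_rates)
    v = sum((r - m) ** 2 for r in success_rates) / len(success_rates)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# Extrapolate large-N risk from small-budget measurements (e.g. n=100 -> ASR@1000).
alpha, beta = fit_beta_moments([0.02, 0.05, 0.01, 0.10, 0.03])
print(round(asr_at_n(alpha, beta, 1000), 3))
```

The key property the abstract exploits is visible here: ASR@N grows nonlinearly in N, so a model with a tiny small-budget success rate can still approach certain compromise under massive parallel sampling.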
5e1a4f43d2c278db38c3dbdb1a7af29267dbea44bf13e1247bf2a53b387bfd0a
|
2026-02-02T00:00:00-05:00
|
ScholarPeer: A Context-Aware Multi-Agent Framework for Automated Peer Review
|
arXiv:2601.22638v1 Announce Type: new Abstract: Automated peer review has evolved from simple text classification to structured feedback generation. However, current state-of-the-art systems still struggle with "surface-level" critiques: they excel at summarizing content but often fail to accurately assess novelty and significance or identify deep methodological flaws because they evaluate papers in a vacuum, lacking the external context a human expert possesses. In this paper, we introduce ScholarPeer, a search-enabled multi-agent framework designed to emulate the cognitive processes of a senior researcher. ScholarPeer employs a dual-stream process of context acquisition and active verification. It dynamically constructs a domain narrative using a historian agent, identifies missing comparisons via a baseline scout, and verifies claims through a multi-aspect Q&A engine, grounding the critique in live web-scale literature. We evaluate ScholarPeer on DeepReview-13K and the results demonstrate that ScholarPeer achieves significant win-rates against state-of-the-art approaches in side-by-side evaluations and reduces the gap to human-level diversity.
|
https://arxiv.org/abs/2601.22638
|
Academic Papers
|
svg
|
5ec80844e797b68c4faa7d9b66f199871e9e336ec63ccf56029a7b4eaf665ec1
|
2026-02-02T00:00:00-05:00
|
Pushing the Boundaries of Natural Reasoning: Interleaved Bonus from Formal-Logic Verification
|
arXiv:2601.22642v1 Announce Type: new Abstract: Large Language Models (LLMs) show remarkable capabilities, yet their stochastic next-token prediction creates logical inconsistencies and reward hacking that formal symbolic systems avoid. To bridge this gap, we introduce a formal logic verification-guided framework that dynamically interleaves formal symbolic verification with the natural language generation process, providing real-time feedback to detect and rectify errors as they occur. Distinguished from previous neuro-symbolic methods limited by passive post-hoc validation, our approach actively penalizes intermediate fallacies during the reasoning chain. We operationalize this framework via a novel two-stage training pipeline that synergizes formal logic verification-guided supervised fine-tuning and policy optimization. Extensive evaluation on six benchmarks spanning mathematical, logical, and general reasoning demonstrates that our 7B and 14B models outperform state-of-the-art baselines by average margins of 10.4% and 14.2%, respectively. These results validate that formal verification can serve as a scalable mechanism to significantly push the performance boundaries of advanced LLM reasoning.
|
https://arxiv.org/abs/2601.22642
|
Academic Papers
|
svg
|
3efb7aac7b0a048b77f19b96151e01959aac839ef3fbbdf73cc87a0189c581af
|
2026-02-02T00:00:00-05:00
|
Beyond Medical Chatbots: Meddollina and the Rise of Continuous Clinical Intelligence
|
arXiv:2601.22645v1 Announce Type: new Abstract: Generative medical AI now appears fluent and knowledgeable enough to resemble clinical intelligence, encouraging the belief that scaling will make it safe. But clinical reasoning is not text generation. It is a responsibility-bound process under ambiguity, incomplete evidence, and longitudinal context. Even as benchmark scores rise, generation-centric systems still show behaviours incompatible with clinical deployment: premature closure, unjustified certainty, intent drift, and instability across multi-step decisions. We argue these are structural consequences of treating medicine as next-token prediction. We formalise Clinical Contextual Intelligence (CCI) as a distinct capability class required for real-world clinical use, defined by persistent context awareness, intent preservation, bounded inference, and principled deferral when evidence is insufficient. We introduce Meddollina, a governance-first clinical intelligence system designed to constrain inference before language realisation, prioritising clinical appropriateness over generative completeness. Meddollina acts as a continuous intelligence layer supporting clinical workflows while preserving clinician authority. We evaluate Meddollina using a behaviour-first regime across 16,412+ heterogeneous medical queries, benchmarking against general-purpose models, medical-tuned models, and retrieval-augmented systems. Meddollina exhibits a distinct behavioural profile: calibrated uncertainty, conservative reasoning under underspecification, stable longitudinal constraint adherence, and reduced speculative completion relative to generation-centric baselines. These results suggest deployable medical AI will not emerge from scaling alone, motivating a shift toward Continuous Clinical Intelligence, where progress is measured by clinician-aligned behaviour under uncertainty rather than fluency-driven completion.
|
https://arxiv.org/abs/2601.22645
|
Academic Papers
|
svg
|
4abdb58ad2172b1c040e25b37a2134453286ccf1b16a4a659a0923f99ff5e069
|
2026-02-02T00:00:00-05:00
|
Test-Time Mixture of World Models for Embodied Agents in Dynamic Environments
|
arXiv:2601.22647v1 Announce Type: new Abstract: Language model (LM)-based embodied agents are increasingly deployed in real-world settings. Yet, their adaptability remains limited in dynamic environments, where constructing accurate and flexible world models is crucial for effective reasoning and decision-making. To address this challenge, we extend the Mixture-of-Experts (MoE) paradigm to embodied agents. While conventional MoE architectures modularize knowledge into expert components with pre-trained routing, they remain rigid once deployed, making them less effective for adapting to unseen domains in dynamic environments. We therefore propose Test-time Mixture of World Models (TMoW), a framework that enhances adaptability to unseen and evolving domains. TMoW updates its routing function over world models at test time, unlike conventional MoE where the function remains fixed, enabling agents to recombine existing models and integrate new ones for continual adaptation. It achieves this through (i) multi-granular prototype-based routing, which adapts mixtures across object- to scene-level similarities, (ii) test-time refinement that aligns unseen domain features with prototypes during inference, and (iii) distilled mixture-based augmentation, which efficiently constructs new models from few-shot data and existing prototypes. We evaluate TMoW on VirtualHome, ALFWorld, and RLBench benchmarks, demonstrating strong performance in both zero-shot adaptation and few-shot expansion scenarios, and showing that it enables embodied agents to operate effectively in dynamic environments.
|
https://arxiv.org/abs/2601.22647
|
Academic Papers
|
svg
|
44ae5abc109a715d7658847b77acc51d1bd47c5ea5a2d2f3dc86b397b11ff102
|
2026-02-02T00:00:00-05:00
|
UCPO: Uncertainty-Aware Policy Optimization
|
arXiv:2601.22648v1 Announce Type: new Abstract: The key to building trustworthy Large Language Models (LLMs) lies in endowing them with inherent uncertainty expression capabilities to mitigate the hallucinations that restrict their high-stakes applications. However, existing RL paradigms such as GRPO often suffer from Advantage Bias due to binary decision spaces and static uncertainty rewards, inducing either excessive conservatism or overconfidence. To tackle this challenge, this paper unveils the root causes of reward hacking and overconfidence in current RL paradigms incorporating uncertainty-based rewards, based on which we propose the UnCertainty-Aware Policy Optimization (UCPO) framework. UCPO employs Ternary Advantage Decoupling to separate and independently normalize deterministic and uncertain rollouts, thereby eliminating advantage bias. Furthermore, a Dynamic Uncertainty Reward Adjustment mechanism is introduced to calibrate uncertainty weights in real time according to model evolution and instance difficulty. Experimental results in mathematical reasoning and general tasks demonstrate that UCPO effectively resolves the reward imbalance, significantly improving the reliability and calibration of the model beyond its knowledge boundaries.
|
https://arxiv.org/abs/2601.22648
|
Academic Papers
|
svg
|
00d6793931f19a07c97dccade2c094767328a9669e811fccb5ad0d6eb6a9e4fe
|
2026-02-02T00:00:00-05:00
|
GUDA: Counterfactual Group-wise Training Data Attribution for Diffusion Models via Unlearning
|
arXiv:2601.22651v1 Announce Type: new Abstract: Training-data attribution for vision generative models aims to identify which training data influenced a given output. While most methods score individual examples, practitioners often need group-level answers (e.g., artistic styles or object classes). Group-wise attribution is counterfactual: how would a model's behavior on a generated sample change if a group were absent from training? A natural realization of this counterfactual is Leave-One-Group-Out (LOGO) retraining, which retrains the model with each group removed; however, it becomes computationally prohibitive as the number of groups grows. We propose GUDA (Group Unlearning-based Data Attribution) for diffusion models, which approximates each counterfactual model by applying machine unlearning to a shared full-data model instead of training from scratch. GUDA quantifies group influence using differences in a likelihood-based scoring rule (ELBO) between the full model and each unlearned counterfactual. Experiments on CIFAR-10 and artistic style attribution with Stable Diffusion show that GUDA identifies primary contributing groups more reliably than semantic similarity, gradient-based attribution, and instance-level unlearning approaches, while achieving a 100x speedup on CIFAR-10 over LOGO retraining.
|
https://arxiv.org/abs/2601.22651
|
Academic Papers
|
svg
|
54e2d8f4ad5e23882303ca6797f70b5e3cf3a94d8bfb6b25d5f52cdd83812f6d
|
2026-02-02T00:00:00-05:00
|
Human-Centered Explainability in AI-Enhanced UI Security Interfaces: Designing Trustworthy Copilots for Cybersecurity Analysts
|
arXiv:2601.22653v1 Announce Type: new Abstract: Artificial intelligence (AI) copilots are increasingly integrated into enterprise cybersecurity platforms to assist analysts in threat detection, triage, and remediation. However, the effectiveness of these systems depends not only on the accuracy of underlying models but also on the degree to which users can understand and trust their outputs. Existing research on algorithmic explainability has largely focused on model internals, while little attention has been given to how explanations should be surfaced in user interfaces for high-stakes decision-making contexts [8], [5], [6]. We present a mixed-methods study of explanation design strategies in AI-driven security dashboards. Through a taxonomy of explanation styles and a controlled user study with security practitioners, we compare natural language rationales, confidence visualizations, counterfactual explanations, and hybrid approaches. Our findings show that explanation style significantly affects user trust calibration, decision accuracy, and cognitive load. We contribute (1) empirical evidence on the usability of explanation interfaces for security copilots, (2) design guidelines for integrating explainability into enterprise UIs, and (3) a framework for aligning explanation strategies with analyst needs in security operations centers (SOCs). This work advances the design of human-centered AI tools in cybersecurity and provides broader implications for explainability in other high-stakes domains.
|
https://arxiv.org/abs/2601.22653
|
Academic Papers
|
svg
|
cf0dbe7183a02b767d81461a1aa6e0476aa650191f7cc41666c6245e25434fd5
|
2026-02-02T00:00:00-05:00
|
Parameter conditioned interpretable U-Net surrogate model for data-driven predictions of convection-diffusion-reaction processes
|
arXiv:2601.22654v1 Announce Type: new Abstract: We present a combined numerical and data-driven workflow for efficient prediction of nonlinear, instationary convection-diffusion-reaction dynamics on a two-dimensional phenotypic domain, motivated by macroscopic modeling of cancer cell plasticity. A finite-difference solver, implemented in C++, is developed using second-order spatial discretizations and a step-size controlled Runge-Kutta time integrator. A mesh refinement study confirms second-order convergence of the spatial discretization error. Based on simulated input-output pairs and corresponding parameterizations for the diffusion, advection, and reaction mechanisms, we train a parameter-conditioned U-Net surrogate to approximate the fixed-horizon solution map. The surrogate incorporates Feature-wise Linear Modulation (FiLM) for parameter conditioning, coordinate encoding to incorporate spatial location information, and residual blocks to enable multiscale representation learning in combination with the U-Net's skip connections. The trained model achieves low prediction error on held-out test data and provides favorable prediction times due to GPU-based parallelization. Generalization is analyzed using a factorial test dataset, separating initial conditions from parameter conditioning. The results reveal that approximation difficulty varies primarily with the conditioning vector (i.e., the induced PDE regime), rather than with the initial conditions.
|
https://arxiv.org/abs/2601.22654
|
Academic Papers
|
svg
|
57fd5593bd6bd6abe2ce9e4daef8d0b7f298c179dabf56051db19dafd5319f0b
|
2026-02-02T00:00:00-05:00
|
The Semantic Trap: Do Fine-tuned LLMs Learn Vulnerability Root Cause or Just Functional Pattern?
|
arXiv:2601.22655v1 Announce Type: new Abstract: LLMs demonstrate promising performance in software vulnerability detection after fine-tuning. However, it remains unclear whether these gains reflect a genuine understanding of vulnerability root causes or merely an exploitation of functional patterns. In this paper, we identify a critical failure mode termed the "semantic trap," where fine-tuned LLMs achieve high detection scores by associating certain functional domains with vulnerability likelihood rather than reasoning about the underlying security semantics. To systematically evaluate this phenomenon, we propose TrapEval, a comprehensive evaluation framework designed to disentangle vulnerability root cause from functional pattern. TrapEval introduces two complementary datasets derived from real-world open-source projects: V2N, which pairs vulnerable code with unrelated benign code, and V2P, which pairs vulnerable code with its corresponding patched version, forcing models to distinguish near-identical code that differs only in subtle security-critical logic. Using TrapEval, we fine-tune five representative state-of-the-art LLMs across three model families and evaluate them under cross-dataset testing, semantic-preserving perturbations, and varying degrees of semantic gap measured by CodeBLEU. Our empirical results reveal that, despite improvements in metrics, fine-tuned LLMs consistently struggle to distinguish vulnerable code from its patched counterpart, exhibit severe robustness degradation under minor semantic-preserving transformations, and rely heavily on functional-context shortcuts when the semantic gap is small. These findings provide strong evidence that current fine-tuning practices often fail to impart true vulnerability reasoning. Our findings serve as a wake-up call: high benchmark scores on traditional datasets may be illusory, masking the model's inability to understand the true causal logic of vulnerabilities.
|
https://arxiv.org/abs/2601.22655
|
Academic Papers
|
svg
|
987b2333d2f3b632e303e16a441bfab3ca932e3c90bb7e393b6fa0cc57870182
|
2026-02-02T00:00:00-05:00
|
NAG: A Unified Native Architecture for Encoder-free Text-Graph Modeling in Language Models
|
arXiv:2601.22657v1 Announce Type: new Abstract: Prevailing methods for integrating graphs into Language Models (LMs) typically rely on a segregated architecture: external Graph Neural Networks (GNNs) encode structural topology, while LMs process textual semantics. We argue this approach is suboptimal for text-graphs: it creates a conceptually disjointed interaction paradigm. By segregating structural encoding from semantic processing, these systems must perform a complex implicit alignment between abstract graph tokens and concrete textual elements. Challenging the necessity of external encoders, we propose NAG (Native Architecture for Graphs), a unified framework that internalizes graph processing within the LM's native manifold. Instead of bridging disparate embedding spaces, NAG repurposes the self-attention mechanism to enforce topological dependencies and recalibrates positional IDs to ensure structural equivalence. This allows the model to harness its intrinsic linguistic capability to simultaneously comprehend node and edge content alongside structural topology. We introduce two efficient implementations: NAG-Zero for absolute preservation of the base model's linguistic capabilities, and NAG-LoRA for enhanced structural adaptation. Experiments across diverse graph tasks validate that NAG achieves robust graph comprehension without the overhead of external encoders, offering a simpler, more coherent paradigm for text-graph modeling.
|
https://arxiv.org/abs/2601.22657
|
Academic Papers
|
svg
|
1d4b0829b8f0fac623188838fde1ee95a11723de4854857bca28fb0dacb254bb
|
2026-02-02T00:00:00-05:00
|
Layerwise Progressive Freezing Enables STE-Free Training of Deep Binary Neural Networks
|
arXiv:2601.22660v1 Announce Type: new Abstract: We investigate progressive freezing as an alternative to straight-through estimators (STE) for training binary networks from scratch. Under controlled training conditions, we find that while global progressive freezing works for binary-weight networks, it fails for full binary neural networks due to activation-induced gradient blockades. We introduce StoMPP (Stochastic Masked Partial Progressive Binarization), which uses layerwise stochastic masking to progressively replace differentiable clipped weights/activations with hard binary step functions, while only backpropagating through the unfrozen (clipped) subset (i.e., no straight-through estimator). Under a matched minimal training recipe, StoMPP improves accuracy over a BinaryConnect-style STE baseline, with gains that increase with depth (e.g., for ResNet-50 BNN: +18.0 on CIFAR-10, +13.5 on CIFAR-100, and +3.8 on ImageNet; for ResNet-18: +3.1, +4.7, and +1.3). For binary-weight networks, StoMPP achieves 91.2% accuracy on CIFAR-10 and 69.5% on CIFAR-100 with ResNet-50. We analyze training dynamics under progressive freezing, revealing non-monotonic convergence and improved depth scaling under binarization constraints.
|
https://arxiv.org/abs/2601.22660
|
Academic Papers
|
svg
|
2ca2a9c0d8d8d39998abe1d5c6737cef6979825ef5987358a5bebe5217eb5334
|
2026-02-02T00:00:00-05:00
|
Evaluating and Rewarding LALMs for Expressive Role-Play TTS via Mean Continuation Log-Probability
|
arXiv:2601.22661v1 Announce Type: new Abstract: Recent advances in Large Audio Language Models (LALMs) have extended Text-to-Speech (TTS) to interactive role-play scenarios, which demand high expressiveness and strict adherence to role-play instructions. However, existing models struggle to maintain stylistic consistency with character profiles and scene descriptions across multi-turn dialogues. A critical bottleneck is the lack of objective metrics for quantifying speaking style. To bridge this gap, we propose Mean Continuation Log-Probability (MCLP) as both an evaluation metric and a reward signal, validated on LALM-based Role-Play TTS (RP-TTS) tasks. Critically, we leverage the In-Context Learning capability of pre-trained LALMs to formulate MCLP via a continuation log-probability prediction. This metric quantifies stylistic consistency by measuring the likelihood of the ground-truth speech conditioned on the generated speech. Furthermore, we employ MCLP as a reinforcement learning reward to enhance the style alignment between generated speech and Role-Play instructions. To facilitate evaluation, we construct an RP-TTS dataset with rich scene and character annotations. Experimental results demonstrate that our method significantly outperforms strong LALM baselines on both objective and subjective metrics.
|
https://arxiv.org/abs/2601.22661
|
Academic Papers
|
svg
|
b70b632c7e7c09b3a6d584a0e1f48a3ed6a7574cc9303b175d3c81a196eea0bd
|
2026-02-02T00:00:00-05:00
|
Task-Aware LLM Council with Adaptive Decision Pathways for Decision Support
|
arXiv:2601.22662v1 Announce Type: new Abstract: Large language models (LLMs) have shown strong capabilities across diverse decision-making tasks. However, existing approaches often overlook the specialization differences among available models, treating all LLMs as uniformly applicable regardless of task characteristics. This limits their ability to adapt to varying reasoning demands and task complexities. In this work, we propose Task-Aware LLM Council (TALC), a task-adaptive decision framework that integrates a council of LLMs with Monte Carlo Tree Search (MCTS) to enable dynamic expert selection and efficient multi-step planning. Each LLM is equipped with a structured success memory profile derived from prior task trajectories, enabling semantic matching between current reasoning context and past successes. At each decision point, TALC routes control to the most contextually appropriate model and estimates node value using a dual-signal mechanism that fuses model-based evaluations with historical utility scores. These signals are adaptively weighted based on intra-node variance and used to guide MCTS selection, allowing the system to balance exploration depth with planning confidence. Experiments on WebShop, HumanEval, and the Game of 24 demonstrate that TALC achieves superior task success rates and improved search efficiency compared to strong baselines, validating the benefits of specialization-aware routing and adaptive planning.
|
https://arxiv.org/abs/2601.22662
|
Academic Papers
|
svg
|
087f38177a639b5032926bc9e33c4b4dbc6af1d74a4da07419eff95662316e64
|
2026-02-02T00:00:00-05:00
|
Unsupervised Synthetic Image Attribution: Alignment and Disentanglement
|
arXiv:2601.22663v1 Announce Type: new Abstract: As the quality of synthetic images improves, identifying the underlying concepts of model-generated images is becoming increasingly crucial for copyright protection and ensuring model transparency. Existing methods achieve this attribution goal by training models using annotated pairs of synthetic images and their original training sources. However, obtaining such paired supervision is challenging, as it requires either well-designed synthetic concepts or precise annotations from millions of training sources. To eliminate the need for costly paired annotations, in this paper, we explore the possibility of unsupervised synthetic image attribution. We propose a simple yet effective unsupervised method called Alignment and Disentanglement. Specifically, we begin by performing basic concept alignment using contrastive self-supervised learning. Next, we enhance the model's attribution ability by promoting representation disentanglement with the Infomax loss. This approach is motivated by an interesting observation: contrastive self-supervised models, such as MoCo and DINO, inherently exhibit the ability to perform simple cross-domain alignment. By formulating this observation as a theoretical assumption on cross-covariance, we provide a theoretical explanation of how alignment and disentanglement can approximate the concept-matching process through a decomposition of the canonical correlation analysis objective. On the real-world benchmark AbC, we show that our unsupervised method surprisingly outperforms the supervised methods. As a starting point, we expect our intuitive insights and experimental findings to provide a fresh perspective on this challenging task.
|
https://arxiv.org/abs/2601.22663
|
Academic Papers
|
svg
|
4913eb11134edd21010b8f68df0b6410aaac86d99853aee90795051448db8a90
|
2026-02-02T00:00:00-05:00
|
Real-Time Aligned Reward Model beyond Semantics
|
arXiv:2601.22664v1 Announce Type: new Abstract: Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique for aligning large language models (LLMs) with human preferences, yet it is susceptible to reward overoptimization, in which policy models overfit to the reward model and exploit spurious reward patterns instead of faithfully capturing human intent. Prior mitigations rely primarily on surface semantic information and fail to efficiently address the misalignment between the reward model (RM) and the policy model caused by continuous policy distribution shifts. This inevitably leads to an increasing reward discrepancy, exacerbating reward overoptimization. To address these limitations, we introduce R2M (Real-Time Aligned Reward Model), a novel lightweight RLHF framework. R2M goes beyond vanilla reward models that solely depend on the semantic representations of a pretrained LLM. Instead, it leverages the evolving hidden states of the policy (namely, policy feedback) to align with the real-time distribution shift of the policy during the RL process. This work points to a promising new direction for improving the performance of reward models through real-time utilization of feedback from policy models.
|
https://arxiv.org/abs/2601.22664
|
Academic Papers
|
svg
|
d691568f84ef1f43ff9dfb88b5d1d23405f21e03e95e21279b532404069f1ef9
|
2026-02-02T00:00:00-05:00
|
ExpAlign: Expectation-Guided Vision-Language Alignment for Open-Vocabulary Grounding
|
arXiv:2601.22666v1 Announce Type: new Abstract: Open-vocabulary grounding requires accurate vision-language alignment under weak supervision, yet existing methods either rely on global sentence embeddings that lack fine-grained expressiveness or introduce token-level alignment with explicit supervision or heavy cross-attention designs. We propose ExpAlign, a theoretically grounded vision-language alignment framework built on a principled multiple instance learning formulation. ExpAlign introduces an Expectation Alignment Head that performs attention-based soft MIL pooling over token-region similarities, enabling implicit token and instance selection without additional annotations. To further stabilize alignment learning, we develop an energy-based multi-scale consistency regularization scheme, including a Top-K multi-positive contrastive objective and a Geometry-Aware Consistency Objective derived from a Lagrangian-constrained free-energy minimization. Extensive experiments show that ExpAlign consistently improves open-vocabulary detection and zero-shot instance segmentation, particularly on long-tail categories. Most notably, it achieves 36.2 AP$_r$ on the LVIS minival split, outperforming other state-of-the-art methods at comparable model scale, while remaining lightweight and inference-efficient.
|
https://arxiv.org/abs/2601.22666
|
Academic Papers
|
svg
|
f70a226e75a9ec7edfc6969e430bcfaa86f6ec1332a9661e2f7630d7a8aa2a9d
|
2026-02-02T00:00:00-05:00
|
From Horizontal Layering to Vertical Integration: A Comparative Study of the AI-Driven Software Development Paradigm
|
arXiv:2601.22667v1 Announce Type: new Abstract: This paper examines the organizational implications of Generative AI adoption in software engineering through a multiple-case comparative study. We contrast two development environments: a traditional enterprise (brownfield) and an AI-native startup (greenfield). Our analysis reveals that transitioning from Horizontal Layering (functional specialization) to Vertical Integration (end-to-end ownership) yields 8-fold to 33-fold reductions in resource consumption. We attribute these gains to the emergence of Super Employees, AI-augmented engineers who span traditional role boundaries, and the elimination of inter-functional coordination overhead. Theoretically, we propose Human-AI Collaboration Efficacy as the primary optimization target for engineering organizations, supplanting individual productivity metrics. Our Total Factor Productivity analysis identifies an AI Distortion Effect that diminishes returns to labor scale while amplifying technological leverage. We conclude with managerial strategies for organizational redesign, including the reactivation of idle cognitive bandwidth in senior engineers and the suppression of blind scale expansion.
|
https://arxiv.org/abs/2601.22667
|
Academic Papers
|
svg
|
b4887fcce7a3e13cb78eafd4c7313f2698bfcb5d09995cda6b85e74543640f65
|
2026-02-02T00:00:00-05:00
|
Beyond Fixed Rounds: Data-Free Early Stopping for Practical Federated Learning
|
arXiv:2601.22669v1 Announce Type: new Abstract: Federated Learning (FL) facilitates decentralized collaborative learning without transmitting raw data. However, reliance on fixed global rounds or validation data for hyperparameter tuning hinders practical deployment by incurring high computational costs and privacy risks. To address this, we propose a data-free early stopping framework that determines the optimal stopping point by monitoring the task vector's growth rate using solely server-side parameters. The numerical results on skin lesion/blood cell classification demonstrate that our approach is comparable to validation-based early stopping across various state-of-the-art FL methods. In particular, the proposed framework spends an average of 47/20 rounds (skin lesion/blood cell) to achieve over 12.5%/10.3% higher performance than early stopping based on validation data. To the best of our knowledge, this is the first work to propose an early stopping framework for FL methods without using any validation data.
|
https://arxiv.org/abs/2601.22669
|
Academic Papers
|
svg
|
a3e0b7abd49a9a2ca6fcde8c732c43ea7b27ac317f1fcec26b05744d03574ba3
|
2026-02-02T00:00:00-05:00
|
Postural Virtual Fixtures for Ergonomic Physical Interactions with Supernumerary Robotic Bodies
|
arXiv:2601.22672v1 Announce Type: new Abstract: Conjoined collaborative robots, functioning as supernumerary robotic bodies (SRBs), can enhance human load tolerance abilities. However, in tasks involving physical interaction with humans, users may still adopt awkward, non-ergonomic postures, which can lead to discomfort or injury over time. In this paper, we propose a novel control framework that provides kinesthetic feedback to SRB users when a non-ergonomic posture is detected, offering resistance to discourage such behaviors. This approach aims to foster long-term learning of ergonomic habits and promote proper posture during physical interactions. To achieve this, a virtual fixture method is developed, integrated with a continuous, online ergonomic posture assessment framework. Additionally, to improve coordination between the operator and the SRB, which consists of a robotic arm mounted on a floating base, the position of the floating base is adjusted as needed. Experimental results demonstrate the functionality and efficacy of the ergonomics-driven control framework, including two user studies involving practical loco-manipulation tasks with 14 subjects, comparing the proposed framework with a baseline control framework that does not account for human ergonomics.
|
https://arxiv.org/abs/2601.22672
|
Academic Papers
|
svg
|
fef2c932791f7b26a23c129bbddc5bd85100ed3ea3215c38e0645f5d44c1f919
|
2026-02-02T00:00:00-05:00
|
VisionTrim: Unified Vision Token Compression for Training-Free MLLM Acceleration
|
arXiv:2601.22674v1 Announce Type: new Abstract: Multimodal large language models (MLLMs) suffer from high computational costs due to excessive visual tokens, particularly in high-resolution and video-based scenarios. Existing token reduction methods typically focus on isolated pipeline components and often neglect textual alignment, leading to performance degradation. In this paper, we propose VisionTrim, a unified framework for training-free MLLM acceleration, integrating two effective plug-and-play modules: 1) the Dominant Vision Token Selection (DVTS) module, which preserves essential visual tokens via a global-local view, and 2) the Text-Guided Vision Complement (TGVC) module, which facilitates context-aware token merging guided by textual cues. Extensive experiments across diverse image and video multimodal benchmarks demonstrate the performance superiority of our VisionTrim, advancing practical MLLM deployment in real-world applications. The code is available at: https://github.com/hanxunyu/VisionTrim.
|
https://arxiv.org/abs/2601.22674
|
Academic Papers
|
svg
|
a4e97820052104034ef9b1deaccf16aa5b9c9dc7c71d02d511ed17783cb6793c
|
2026-02-02T00:00:00-05:00
|
Fire on Motion: Optimizing Video Pass-bands for Efficient Spiking Action Recognition
|
arXiv:2601.22675v1 Announce Type: new Abstract: Spiking neural networks (SNNs) have gained traction in vision due to their energy efficiency, bio-plausibility, and inherent temporal processing. Yet, despite this temporal capacity, most progress concentrates on static image benchmarks, and SNNs still underperform on dynamic video tasks compared to artificial neural networks (ANNs). In this work, we diagnose a fundamental pass-band mismatch: standard spiking dynamics behave as a temporal low-pass filter that emphasizes static content while attenuating the motion-bearing bands where task-relevant information concentrates in dynamic tasks. This phenomenon explains why SNNs can approach ANNs on static tasks yet fall behind on tasks that demand richer temporal understanding. To remedy this, we propose the Pass-Bands Optimizer (PBO), a plug-and-play module that optimizes the temporal pass-band toward task-relevant motion bands. PBO introduces only two learnable parameters and a lightweight consistency constraint that preserves semantics and boundaries, incurring negligible computational overhead and requiring no architectural changes. PBO deliberately suppresses static components that contribute little to discrimination, effectively high-passing the stream so that spiking activity concentrates on motion-bearing content. On UCF101, PBO yields an improvement of over ten percentage points. On more complex multi-modal action recognition and weakly supervised video anomaly detection, PBO delivers consistent and significant gains, offering a new perspective on SNN-based video processing and understanding.
|
https://arxiv.org/abs/2601.22675
|
Academic Papers
|
svg
|