| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 55c3d75cc81d106d42a0e7c69565eb50c525760ac8883f958d708aab66d3c183 | 2026-01-13T00:00:00-05:00 | Mobility Inequity and Risk Response After Hurricane Helene: Evidence from Real-Time Travel and Social Sentiment Data | arXiv:2601.06722v1 Announce Type: new Abstract: Hurricanes severely disrupt infrastructure and restrict access to essential services. While the physical impacts on post-disaster mobility are well studied, less is known about how individual travel behaviors change during and after disasters, and how these responses are shaped by social and geographic disparities. This study examines mobility patterns following Hurricane Helene, a Category 4 storm that struck six southeastern U.S. states on September 26, 2024, causing over 230 fatalities. Using anonymized GPS mobility data, hurricane severity metrics, and county-level social media sentiment, we examine shifts in travel behavior and their implications for equity. We ask two questions: How do post-hurricane mobility patterns reflect community vulnerability and adaptive capacity? and How do sociodemographic conditions and public sentiment factors shape the direction and extent of mobility change? Results from robust linear and ordered logistic regressions indicate that evacuation orders increase mobility; however, severe storm conditions, particularly high wind speeds, can limit travel. Communities with lower incomes, located in rural areas, and with higher percentages of Black populations exhibit the steepest declines in mobility, suggesting resource constraints and infrastructural barriers, while wealthier, urban, and higher-education areas maintain greater flexibility. Results also show that positive social sentiment is associated with higher mobility and a greater likelihood of increased travel during the hurricane. Our findings highlight the need to address structural barriers and social conditions in post-disaster mobility and disaster response. | https://arxiv.org/abs/2601.06722 | Academic Papers | svg |
| 2e65e7a4e0e0ef0f2d1857fcb01fc03d98bef9b216d178aa7fbd7f9563435cc4 | 2026-01-13T00:00:00-05:00 | Approximating Matroid Basis Testing for Partition Matroids using Budget-In-Expectation | arXiv:2601.06723v1 Announce Type: new Abstract: We consider the following Stochastic Boolean Function Evaluation problem, which is closely related to several problems from the literature. A matroid $\mathcal{M}$ (in compact representation) on ground set $E$ is given, and each element $i\in E$ is active independently with known probability $p_i\in(0,1)$. The elements can be queried, upon which it is revealed whether the respective element is active or not. The goal is to find an adaptive querying strategy for determining whether there is a basis of $\mathcal{M}$ in which all elements are active, with the objective of minimizing the expected number of queries. When $\mathcal{M}$ is a uniform matroid, this is the problem of evaluating a $k$-of-$n$ function, first studied in the 1970s. This problem is well-understood, and has an optimal adaptive strategy that can be computed in polynomial time. Taking $\mathcal{M}$ to instead be a partition matroid, we show that previous approaches fail to give a constant-factor approximation. Our main result is a polynomial-time constant-factor approximation algorithm producing a randomized strategy for this partition matroid problem. We obtain this result by combining a new technique with several well-established techniques. Our algorithm adaptively interleaves solutions to several instances of a novel type of stochastic querying problem, with a constraint on the $\textit{expected}$ cost. We believe that this type of problem is of independent interest, will spark follow-up work, and has the potential for additional applications. | https://arxiv.org/abs/2601.06723 | Academic Papers | svg |
| 6bd348b5671f6659029c143caf60da0f493eb0769cb8ce2314e6ced15aeb94e4 | 2026-01-13T00:00:00-05:00 | DS-CIM: Digital Stochastic Computing-In-Memory Featuring Accurate OR-Accumulation via Sample Region Remapping for Edge AI Models | arXiv:2601.06724v1 Announce Type: new Abstract: Stochastic computing (SC) offers hardware simplicity but suffers from low throughput, while high-throughput Digital Computing-in-Memory (DCIM) is bottlenecked by costly adder logic for matrix-vector multiplication (MVM). To address this trade-off, this paper introduces a digital stochastic CIM (DS-CIM) architecture that achieves both high accuracy and efficiency. We implement signed multiply-accumulation (MAC) in a compact, unsigned OR-based circuit by modifying the data representation. Throughput is enhanced by replicating this low-cost circuit 64 times with only a 1x area increase. Our core strategy, a shared Pseudo Random Number Generator (PRNG) with 2D partitioning, enables single-cycle mutually exclusive activation to eliminate OR-gate collisions. We also resolve the 1s saturation issue via stochastic process analysis and data remapping, significantly improving accuracy and resilience to input sparsity. Our high-accuracy DS-CIM1 variant achieves 94.45% accuracy for INT8 ResNet18 on CIFAR-10 with a root-mean-squared error (RMSE) of just 0.74%. Meanwhile, our high-efficiency DS-CIM2 variant attains an energy efficiency of 3566.1 TOPS/W and an area efficiency of 363.7 TOPS/mm^2, while maintaining a low RMSE of 3.81%. The DS-CIM capability with larger models is further demonstrated through experiments with INT8 ResNet50 on ImageNet and the FP8 LLaMA-7B model. | https://arxiv.org/abs/2601.06724 | Academic Papers | svg |
| 257fb9c8f780bf3b44e5176ce637025e8c4ef84cc77773e0bd3f6ad23ee70376 | 2026-01-13T00:00:00-05:00 | When Humans Judge Irises: Pupil Size Normalization as an Aid and Synthetic Irises as a Challenge | arXiv:2601.06725v1 Announce Type: new Abstract: Iris recognition is a mature biometric technology offering remarkable precision and speed, and allowing for large-scale deployments to populations exceeding a billion enrolled users (e.g., AADHAAR in India). However, in forensic applications, a human expert may be needed to review and confirm a positive identification before an iris matching result can be presented as evidence in court, especially in cases where processed samples are degraded (e.g., in post-mortem cases) or where there is a need to judge whether the sample is authentic, rather than a result of a presentation attack. This paper presents a study that examines human performance in iris verification in two controlled scenarios: (a) under varying pupil sizes, with and without a linear/nonlinear alignment of the pupil size between compared images, and (b) when both genuine and impostor iris image pairs are synthetically generated. The results demonstrate that pupil size normalization carried out by a modern autoencoder-based identity-preserving image-to-image translation model significantly improves verification accuracy. Participants were also able to determine whether iris pairs corresponded to the same or different eyes when both images were either authentic or synthetic. However, accuracy declined when subjects were comparing authentic irises against high-quality, same-eye synthetic counterparts. These findings (a) demonstrate the importance of pupil-size alignment for iris matching tasks in which humans are involved, and (b) indicate that despite the high fidelity of modern generative models, same-eye synthetic iris images are more often judged by humans as different-eye images, compared to same-eye authentic image pairs. We offer data and human judgments along with this paper to allow full replicability of this study and future works. | https://arxiv.org/abs/2601.06725 | Academic Papers | svg |
| 580b9c199083cee236c118668f1c150622e69bbded0957f113ff9885287374f3 | 2026-01-13T00:00:00-05:00 | Vextra: A Unified Middleware Abstraction for Heterogeneous Vector Database Systems | arXiv:2601.06727v1 Announce Type: new Abstract: The rapid integration of vector search into AI applications, particularly for Retrieval Augmented Generation (RAG), has catalyzed the emergence of a diverse ecosystem of specialized vector databases. While this innovation offers a rich choice of features and performance characteristics, it has simultaneously introduced a significant challenge: severe API fragmentation. Developers face a landscape of disparate, proprietary, and often volatile API contracts, which hinders application portability, increases maintenance overhead, and leads to vendor lock-in. This paper introduces Vextra, a novel middleware abstraction layer designed to address this fragmentation. Vextra presents a unified, high-level API for core database operations, including data upsertion, similarity search, and metadata filtering. It employs a pluggable adapter architecture to translate these unified API calls into the native protocols of various backend databases. We argue that such an abstraction layer is a critical step towards maturing the vector database ecosystem, fostering interoperability, and enabling higher-level query optimization, while imposing minimal performance overhead. | https://arxiv.org/abs/2601.06727 | Academic Papers | svg |
| f1e4d9bdd0e65cf160efde585b7ece5e079be7ab8a32ead9704c75e6c745cb56 | 2026-01-13T00:00:00-05:00 | Robust Evacuation for Multi-Drone Failure in Drone Light Shows | arXiv:2601.06728v1 Announce Type: new Abstract: Drone light shows have emerged as a popular form of entertainment in recent years. However, several high-profile incidents involving large-scale drone failures -- where multiple drones simultaneously fall from the sky -- have raised safety and reliability concerns. To ensure robustness, we propose a drone parking algorithm designed specifically for multiple drone failures in drone light shows, aimed at mitigating the risk of cascading collisions by drone evacuation and enabling rapid recovery from failures by leveraging strategically placed hidden drones. Our algorithm integrates a Social LSTM model with attention mechanisms to predict the trajectories of failing drones and compute near-optimal evacuation paths that minimize the likelihood of surviving drones being hit by fallen drones. In the recovery mode, our system deploys hidden drones (operating with their LED lights turned off) to replace failed drones so that the drone light show can continue. Our experiments showed that our approach can greatly increase the robustness of a multi-drone system by leveraging deep learning to predict the trajectories of fallen drones. | https://arxiv.org/abs/2601.06728 | Academic Papers | svg |
| 3583de2a1ae1b9e613f3ef31dcb11cdb5b30013103393ce0bf7b13777d363056 | 2026-01-13T00:00:00-05:00 | Predicting Student Success with Heterogeneous Graph Deep Learning and Machine Learning Models | arXiv:2601.06729v1 Announce Type: new Abstract: Early identification of student success is crucial for enabling timely interventions, reducing dropout rates, and promoting on-time graduation. In educational settings, AI-powered systems have become essential for predicting student performance due to their advanced analytical capabilities. However, effectively leveraging diverse student data to uncover latent and complex patterns remains a key challenge. While prior studies have explored this area, the potential of dynamic data features and multi-category entities has been largely overlooked. To address this gap, we propose a framework that integrates heterogeneous graph deep learning models to enhance early and continuous student performance prediction, using traditional machine learning algorithms for comparison. Our approach employs a graph metapath structure and incorporates dynamic assessment features, which progressively influence the student success prediction task. Experiments on the Open University Learning Analytics (OULA) dataset demonstrate promising results, achieving a 68.6% validation F1 score with only 7% of the semester completed, and reaching up to 89.5% near the semester's end. Our approach outperforms top machine learning models by 4.7% in validation F1 score during the critical early 7% of the semester, underscoring the value of dynamic features and heterogeneous graph representations in student success prediction. | https://arxiv.org/abs/2601.06729 | Academic Papers | svg |
| 4213721a50430755c0bbda46d2d1e754acde6d27d9303d5a4e44557586c3bbc5 | 2026-01-13T00:00:00-05:00 | Why are there many equally good models? An Anatomy of the Rashomon Effect | arXiv:2601.06730v1 Announce Type: new Abstract: The Rashomon effect -- the existence of multiple, distinct models that achieve nearly equivalent predictive performance -- has emerged as a fundamental phenomenon in modern machine learning and statistics. In this paper, we explore the causes underlying the Rashomon effect, organizing them into three categories: statistical sources arising from finite samples and noise in the data-generating process; structural sources arising from non-convexity of optimization objectives and unobserved variables that create fundamental non-identifiability; and procedural sources arising from limitations of optimization algorithms and deliberate restrictions to suboptimal model classes. We synthesize insights from machine learning, statistics, and optimization literature to provide a unified framework for understanding why the multiplicity of good models arises. A key distinction emerges: statistical multiplicity diminishes with more data, structural multiplicity persists asymptotically and cannot be resolved without different data or additional assumptions, and procedural multiplicity reflects choices made by practitioners. Beyond characterizing causes, we discuss both the challenges and opportunities presented by the Rashomon effect, including implications for inference, interpretability, fairness, and decision-making under uncertainty. | https://arxiv.org/abs/2601.06730 | Academic Papers | svg |
| 09a184e57f07b5e551e299e87de68db9a5288687c2866d72afeb820f8b7ecbd8 | 2026-01-13T00:00:00-05:00 | Study of Adaptive Reliability-Driven Conditional Innovation Decoding for LDPC Codes | arXiv:2601.06732v1 Announce Type: new Abstract: In this work, we present an adaptive reliability-driven conditional innovation (AR-CID) decoding algorithm for low-density parity-check (LDPC) codes. The proposed AR-CID decoding algorithm consists of one stage of message quality checking and another stage of message passing refinement, which are incorporated into a residual belief propagation decoding strategy. An analysis of the AR-CID decoding algorithm is carried out along with a study of its computational complexity and latency characteristics. Simulation results for several examples of LDPC codes, including short and medium-length codes over an extended range of channel conditions, indicate that the proposed AR-CID decoding algorithm outperforms competing decoding techniques and has an extremely fast convergence, making it particularly suitable for low-delay applications. | https://arxiv.org/abs/2601.06732 | Academic Papers | svg |
| b4367fd604f68e85e2371f26daafe4f17e7fde76f38c204d9ba547efd7467bad | 2026-01-13T00:00:00-05:00 | Logic-Driven Semantic Communication for Resilient Multi-Agent Systems | arXiv:2601.06733v1 Announce Type: new Abstract: The advent of 6G networks is accelerating autonomy and intelligence in large-scale, decentralized multi-agent systems (MAS). While this evolution enables adaptive behavior, it also heightens vulnerability to stressors such as environmental changes and adversarial behavior. Existing literature on resilience in decentralized MAS largely focuses on isolated aspects, such as fault tolerance, without offering a principled unified definition of multi-agent resilience. This gap limits the ability to design systems that can continuously sense, adapt, and recover under dynamic conditions. This article proposes a formal definition of MAS resilience grounded in two complementary dimensions: epistemic resilience, wherein agents recover and sustain accurate knowledge of the environment, and action resilience, wherein agents leverage that knowledge to coordinate and sustain goals under disruptions. We formalize resilience via temporal epistemic logic and quantify it using recoverability time (how quickly desired properties are re-established after a disturbance) and durability time (how long accurate beliefs and goal-directed behavior are sustained after recovery). We design an agent architecture and develop decentralized algorithms to achieve both epistemic and action resilience. We provide formal verification guarantees, showing that our specifications are sound with respect to the metric bounds and admit finite-horizon verification, enabling design-time certification and lightweight runtime monitoring. Through a case study on distributed multi-agent decision-making under stressors, we show that our approach outperforms baseline methods. Our formal verification analysis and simulation results highlight that the proposed framework enables resilient, knowledge-driven decision-making and sustained operation, laying the groundwork for resilient decentralized MAS in next-generation communication systems. | https://arxiv.org/abs/2601.06733 | Academic Papers | svg |
| 11e6dbd186300b61b1ff2dc1fded2bd6e26b123bdc92d3d396eec60131d6f294 | 2026-01-13T00:00:00-05:00 | Deep Recurrent Hidden Markov Learning Framework for Multi-Stage Advanced Persistent Threat Prediction | arXiv:2601.06734v1 Announce Type: new Abstract: Advanced Persistent Threats (APTs) represent hidden, multi-stage cyberattacks whose long-term persistence and adaptive behavior challenge conventional intrusion detection systems (IDS). Although recent advances in machine learning and probabilistic modeling have improved APT detection performance, most existing approaches remain reactive and alert-centric, providing limited capability for stage-aware prediction and principled inference under uncertainty, particularly when observations are sparse or incomplete. This paper proposes E-HiDNet, a unified hybrid deep probabilistic learning framework that integrates convolutional and recurrent neural networks with a Hidden Markov Model (HMM) to allow accurate prediction of the progression of the APT campaign. The deep learning component extracts hierarchical spatio-temporal representations from correlated alert sequences, while the HMM models latent attack stages and their stochastic transitions, allowing principled inference under uncertainty and partial observability. A modified Viterbi algorithm is introduced to handle incomplete observations, ensuring robust decoding under uncertainty. The framework is evaluated using a synthetically generated yet structurally realistic APT dataset (S-DAPT-2026). Simulation results show that E-HiDNet achieves up to 98.8-100% accuracy in stage prediction and significantly outperforms standalone HMMs when four or more observations are available, even under reduced training data scenarios. These findings highlight that combining deep semantic feature learning with probabilistic state-space modeling enhances APT stage prediction performance and situational awareness for proactive APT defense. | https://arxiv.org/abs/2601.06734 | Academic Papers | svg |
| 9074b6c10b162a243cde76480c94f4260623a72d9390dfc7d028ba9090aa2013 | 2026-01-13T00:00:00-05:00 | Algorithmic Reductions: Network Flow and NP-Completeness in Real-World Scheduling Problems | arXiv:2601.06737v1 Announce Type: new Abstract: This paper presents two real-world scheduling problems and their algorithmic solutions through polynomial-time reductions. First, we address the Hospital Patient-to-Bed Assignment problem, demonstrating its reduction to Maximum Bipartite Matching and solution via Network Flow algorithms. Second, we tackle the University Course Scheduling problem, proving its NP-Completeness through reduction from Graph Coloring and providing greedy approximation algorithms. Both problems are implemented in Python, with experimental results validating theoretical complexity analyses. Our Network Flow solution achieves O(n^2.51) empirical complexity, while the greedy coloring algorithms demonstrate O(n^2) behavior with approximation ratios consistently below the theoretical Δ + 1 bound. | https://arxiv.org/abs/2601.06737 | Academic Papers | svg |
| 5dfbe352c3147fca4c6de013b03476d947e92e22b94b39c4ba2cf77d2f2fd035 | 2026-01-13T00:00:00-05:00 | Entropy-based Thermal Sensor Placement and Temperature Reconstruction based on Adaptive Compressive Sensing Theory | arXiv:2601.06740v1 Announce Type: new Abstract: This paper addresses the challenges of thermal sensor allocation and full-chip temperature reconstruction in multi-core systems by leveraging an entropy-based sensor placement strategy and an adaptive compressive sensing approach. By selecting sensor locations that capture diverse thermal behaviors and dynamically adjusting the measurement matrix, our method significantly enhances the accuracy of the full-chip temperature reconstruction. Experimental results demonstrate that our approach reduces full-chip temperature reconstruction error by 18% to 95%. In addition to the full-chip temperature reconstruction efficiency enhancement, our proposed method improves hardware efficiency by 5% to 514% over the related works. These findings highlight the potential of our method for more effective dynamic temperature management in future high-performance multi-core systems. | https://arxiv.org/abs/2601.06740 | Academic Papers | svg |
| 04961543e589cad64d85de53e84d37e8ec16cf27c109215628cf3c5a0bf5fa2c | 2026-01-13T00:00:00-05:00 | Federated Continual Learning for Privacy-Preserving Hospital Imaging Classification | arXiv:2601.06742v1 Announce Type: new Abstract: Deep learning models for radiology interpretation increasingly rely on multi-institutional data, yet privacy regulations and distribution shift across hospitals limit central data pooling. Federated learning (FL) allows hospitals to collaboratively train models without sharing raw images, but current FL algorithms typically assume a static data distribution. In practice, hospitals experience continual evolution in case mix, annotation protocols, and imaging devices, which leads to catastrophic forgetting when models are updated sequentially. Federated continual learning (FCL) aims to reconcile these challenges but existing methods either ignore the stringent privacy constraints of healthcare or rely on replay buffers and public surrogate datasets that are difficult to justify in clinical settings. We study FCL for chest radiography classification in a setting where hospitals are clients that receive temporally evolving streams of cases and labels. We introduce DP-FedEPC (Differentially Private Federated Elastic Prototype Consolidation), a method that combines elastic weight consolidation (EWC), prototype-based rehearsal, and client-side differential privacy within a standard FedAvg framework. EWC constrains updates along parameters deemed important for previous tasks, while a memory of latent prototypes preserves class structure without storing raw images. Differentially private stochastic gradient descent (DP-SGD) at each client adds calibrated Gaussian noise to clipped gradients, providing formal privacy guarantees for individual radiographs. | https://arxiv.org/abs/2601.06742 | Academic Papers | svg |
| 13db4b354d9c99fd816221952d760efd956cd0966d9347c6d0ad415323455f1f | 2026-01-13T00:00:00-05:00 | FinForge: Semi-Synthetic Financial Benchmark Generation | arXiv:2601.06747v1 Announce Type: new Abstract: Evaluating Language Models (LMs) in specialized, high-stakes domains such as finance remains a significant challenge due to the scarcity of open, high-quality, and domain-specific datasets. Existing general-purpose benchmarks provide broad coverage but lack the depth and domain fidelity needed to assess LMs' capabilities for real-world financial reasoning, which requires both conceptual understanding and quantitative rigor. To address this gap, we introduce FinForge, a scalable, semi-synthetic pipeline for constructing finance-specific evaluation benchmarks through a hybrid of expert-guided data curation and controlled LM-based synthesis. FinForge combines manual and programmatic corpus construction from authoritative financial sources with structured question generation and validation using Gemini 2.5 Flash. To demonstrate the pipeline's efficacy, we produce FinForge-5k, a snapshot benchmark comprising over 5,000 human-validated question-answer pairs across 11 finance subdomains, derived from a curated corpus of 100,000 verified documents totaling 143M tokens. Evaluation of state-of-the-art open-source and closed-source models on FinForge-5k reveals significant differences in financial reasoning, with leading models achieving accuracy levels near 80%. These findings underscore the framework's utility for diagnosing current model limitations and guiding future improvements in financial domain competence. All code and data are available at https://github.com/gtfintechlab/FinForge. | https://arxiv.org/abs/2601.06747 | Academic Papers | svg |
| 7322d0857955f44ae9450009b74a637bb3ce6b47d5ab6e6275875b85e170c82d | 2026-01-13T00:00:00-05:00 | On-the-Fly VLA Adaptation via Test-Time Reinforcement Learning | arXiv:2601.06748v1 Announce Type: new Abstract: Vision-Language-Action models have recently emerged as a powerful paradigm for general-purpose robot learning, enabling agents to map visual observations and natural-language instructions into executable robotic actions. Though popular, they are primarily trained via supervised fine-tuning or training-time reinforcement learning, requiring explicit fine-tuning phases, human interventions, or controlled data collection. Consequently, existing methods remain unsuitable for challenging simulated- or physical-world deployments, where robots must respond autonomously and flexibly to evolving environments. To address this limitation, we introduce Test-Time Reinforcement Learning for VLAs (TT-VLA), a framework that enables on-the-fly policy adaptation during inference. TT-VLA formulates a dense reward mechanism that leverages step-by-step task-progress signals to refine action policies during test time while preserving the SFT/RL-trained priors, making it an effective supplement to current VLA models. Empirical results show that our approach enhances overall adaptability, stability, and task success in dynamic, previously unseen scenarios under simulated and real-world settings. We believe TT-VLA offers a principled step toward self-improving, deployment-ready VLAs. | https://arxiv.org/abs/2601.06748 | Academic Papers | svg |
| f6aae1d44dd25cdf9df2536b99b48f4984fdc2454b93c6f2548e2e6d19a30cf8 | 2026-01-13T00:00:00-05:00 | Benchmarking Egocentric Clinical Intent Understanding Capability for Medical Multimodal Large Language Models | arXiv:2601.06750v1 Announce Type: new Abstract: Medical Multimodal Large Language Models (Med-MLLMs) require egocentric clinical intent understanding for real-world deployment, yet existing benchmarks fail to evaluate this critical capability. To address these challenges, we introduce MedGaze-Bench, the first benchmark leveraging clinician gaze as a Cognitive Cursor to assess intent understanding across surgery, emergency simulation, and diagnostic interpretation. Our benchmark addresses three fundamental challenges: visual homogeneity of anatomical structures, strict temporal-causal dependencies in clinical workflows, and implicit adherence to safety protocols. We propose a Three-Dimensional Clinical Intent Framework evaluating: (1) Spatial Intent: discriminating precise targets amid visual noise, (2) Temporal Intent: inferring causal rationale through retrospective and prospective reasoning, and (3) Standard Intent: verifying protocol compliance through safety checks. Beyond accuracy metrics, we introduce Trap QA mechanisms to stress-test clinical reliability by penalizing hallucinations and cognitive sycophancy. Experiments reveal current MLLMs struggle with egocentric intent due to over-reliance on global features, leading to fabricated observations and uncritical acceptance of invalid instructions. | https://arxiv.org/abs/2601.06750 | Academic Papers | svg |
| 97d875da838d2eed20b78cb310e9db6e93627ffb76dfc6b8afc3ee58ba08bc61 | 2026-01-13T00:00:00-05:00 | Towards Computational Chinese Paleography | arXiv:2601.06753v1 Announce Type: new Abstract: Chinese paleography, the study of ancient Chinese writing, is undergoing a computational turn powered by artificial intelligence. This position paper charts the trajectory of this emerging field, arguing that it is evolving from automating isolated visual tasks to creating integrated digital ecosystems for scholarly research. We first map the landscape of digital resources, analyzing critical datasets for oracle bone, bronze, and bamboo slip scripts. The core of our analysis follows the field's methodological pipeline: from foundational visual processing (image restoration, character recognition), through contextual analysis (artifact rejoining, dating), to the advanced reasoning required for automated decipherment and human-AI collaboration. We examine the technological shift from classical computer vision to modern deep learning paradigms, including transformers and large multimodal models. Finally, we synthesize the field's core challenges -- notably data scarcity and a disconnect between current AI capabilities and the holistic nature of humanistic inquiry -- and advocate for a future research agenda focused on creating multimodal, few-shot, and human-centric systems to augment scholarly expertise. | https://arxiv.org/abs/2601.06753 | Academic Papers | svg |
| 462539c7105e598d44d7ee7601f3ae6785850586c0fa335edddda59f6efef29e | 2026-01-13T00:00:00-05:00 | MTMCS-Bench: Evaluating Contextual Safety of Multimodal Large Language Models in Multi-Turn Dialogues | arXiv:2601.06757v1 Announce Type: new Abstract: Multimodal large language models (MLLMs) are increasingly deployed as assistants that interact through text and images, making it crucial to evaluate contextual safety when risk depends on both the visual scene and the evolving dialogue. Existing contextual safety benchmarks are mostly single-turn and often miss how malicious intent can emerge gradually or how the same scene can support both benign and exploitative goals. We introduce the Multi-Turn Multimodal Contextual Safety Benchmark (MTMCS-Bench), a benchmark of realistic images and multi-turn conversations that evaluates contextual safety in MLLMs under two complementary settings, escalation-based risk and context-switch risk. MTMCS-Bench offers paired safe and unsafe dialogues with structured evaluation. It contains over 30 thousand multimodal (image+text) and unimodal (text-only) samples, with metrics that separately measure contextual intent recognition, safety-awareness on unsafe cases, and helpfulness on benign ones. Across eight open-source and seven proprietary MLLMs, we observe persistent trade-offs between contextual safety and utility, with models tending to either miss gradual risks or over-refuse benign dialogues. Finally, we evaluate five current guardrails and find that they mitigate some failures but do not fully resolve multi-turn contextual risks. | https://arxiv.org/abs/2601.06757 | Academic Papers | svg |
c9152c0846a701d840eacd135c86c9f7c812dbec85093c324d1bb55b24ef0bb9
|
2026-01-13T00:00:00-05:00
|
A Backpropagation-Free Feedback-Hebbian Network for Continual Learning Dynamics
|
arXiv:2601.06758v1 Announce Type: new Abstract: Feedback-rich neural architectures can regenerate earlier representations and inject temporal context, making them a natural setting for strictly local synaptic plasticity. We ask whether a minimal, backpropagation-free feedback--Hebbian system can already express interpretable continual-learning--relevant behaviors under controlled training schedules. We introduce a compact prediction--reconstruction architecture with two feedforward layers for supervised association learning and two dedicated feedback layers trained to reconstruct earlier activity and re-inject it as additive temporal context. All synapses are updated by a unified local rule combining centered Hebbian covariance, Oja-style stabilization, and a local supervised drive where targets are available, requiring no weight transport or global error backpropagation. On a small two-pair association task, we characterize learning through layer-wise activity snapshots, connectivity trajectories (row/column means of learned weights), and a normalized retention index across phases. Under sequential A->B training, forward output connectivity exhibits a long-term depression (LTD)-like suppression of the earlier association while feedback connectivity preserves an A-related trace during acquisition of B. Under deterministic interleaving A,B,A,B,..., both associations are concurrently maintained rather than sequentially suppressed. Architectural controls and rule-term ablations isolate the role of dedicated feedback in regeneration and co-maintenance, and the role of the local supervised term in output selectivity and unlearning. Together, the results show that a compact feedback pathway trained with local plasticity can support regeneration and continual-learning--relevant dynamics in a minimal, mechanistically transparent setting.
|
https://arxiv.org/abs/2601.06758
|
Academic Papers
|
svg
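The unified local rule the abstract describes (centered Hebbian covariance plus Oja-style stabilization, with a local supervised drive where targets exist) can be sketched in a few lines. The `local_update` helper, its coefficients, and the shapes below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def local_update(W, x, y, lr=0.01, target=None, beta=0.5):
    """One local plasticity step: centered Hebbian covariance,
    Oja-style stabilization, and an optional local supervised drive.
    The form and coefficients are illustrative, not the paper's exact rule."""
    xc = x - x.mean()                  # centered pre-synaptic activity
    yc = y - y.mean()                  # centered post-synaptic activity
    hebb = np.outer(yc, xc)            # centered Hebbian covariance term
    oja = (y ** 2)[:, None] * W        # Oja-style decay keeps weights bounded
    dW = hebb - oja
    if target is not None:             # local supervised drive where targets exist
        dW += beta * np.outer(target - y, x)
    return W + lr * dW

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))
x = rng.normal(size=8)
for _ in range(200):                   # repeated updates stay bounded via the Oja term
    W = local_update(W, x, W @ x)
```

No weight transport or global error signal appears: each update uses only the pre- and post-synaptic activity local to the synapse.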
|
23ae68bcbea57fb43cd1f664088d05c8af9e8d922ad39de2abb6f9c271c15c22
|
2026-01-13T00:00:00-05:00
|
Comparative Separation: Evaluating Separation on Comparative Judgment Test Data
|
arXiv:2601.06761v1 Announce Type: new Abstract: This research seeks to benefit the software engineering community by proposing comparative separation, a novel group fairness notion for evaluating the fairness of machine learning software on comparative judgment test data. Fairness issues have attracted increasing attention since machine learning software is increasingly used for high-stakes and high-risk decisions. It is the responsibility of all software developers to make their software accountable by ensuring that it does not perform differently on different sensitive groups, i.e., by satisfying the separation criterion. However, evaluation of separation requires ground truth labels for each test data point. This motivates our work on analyzing whether separation can be evaluated on comparative judgment test data. Instead of asking humans to provide ratings or categorical labels on each test data point, comparative judgments are made between pairs of data points, such as "A is better than B". According to the law of comparative judgment, providing such comparative judgments imposes a lower cognitive burden on humans than providing ratings or categorical labels. This work first defines the novel fairness notion of comparative separation on comparative judgment test data, and the metrics to evaluate it. Then, both theoretically and empirically, we show that in binary classification problems, comparative separation is equivalent to separation. Lastly, we analyze the number of test data points and test data pairs required to achieve the same level of statistical power in the evaluation of separation and comparative separation, respectively. This work is the first to explore fairness evaluation on comparative judgment test data. It shows the feasibility and the practical benefits of using comparative judgment test data for model evaluations.
|
https://arxiv.org/abs/2601.06761
|
Academic Papers
|
svg
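For reference, the classical separation criterion that comparative separation is shown to match (in binary classification) requires equal true- and false-positive rates across sensitive groups. This sketch evaluates standard separation on labeled test data, which is exactly the ground-truth requirement the paper's comparative variant relaxes:

```python
from collections import defaultdict

def separation_gaps(y_true, y_pred, group):
    """Separation: predictions independent of the sensitive group given the
    true label, i.e., equal TPR and FPR across groups (binary labels).
    Returns the max TPR gap and max FPR gap across groups."""
    rates = defaultdict(lambda: [0, 0, 0, 0])   # per group: TP, P, FP, N
    for t, p, g in zip(y_true, y_pred, group):
        if t == 1:
            rates[g][1] += 1
            rates[g][0] += p
        else:
            rates[g][3] += 1
            rates[g][2] += p
    tprs = [tp / pos for tp, pos, _, _ in rates.values() if pos]
    fprs = [fp / neg for _, _, fp, neg in rates.values() if neg]
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# A predictor with identical error rates in both groups has zero gaps.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 0, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
tpr_gap, fpr_gap = separation_gaps(y_true, y_pred, group)
```

Note that every call needs `y_true`; the paper's contribution is precisely to replace these per-point labels with pairwise judgments while preserving the criterion.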
|
cf314a49387e28a0506c2f97a40c4bb434d3298c219eb6d7a63f7fd9add86e07
|
2026-01-13T00:00:00-05:00
|
The Complexity of Finding Missing Answer Repairs
|
arXiv:2601.06764v1 Announce Type: new Abstract: We investigate the problem of identifying database repairs for missing tuples in query answers. We show that when the query is part of the input - the combined complexity setting - determining whether or not a repair exists is polynomial-time equivalent to the satisfiability problem for classes of queries admitting a weak form of projection and selection. We then identify the sub-classes of unions of conjunctive queries with negated atoms, defined by the relational algebra operations permitted to appear in the query, for which the minimal repair problem can be solved in polynomial time. In contrast, we show that the problem is NP-hard, as well as set cover-hard to approximate via strict reductions, whenever both projection and join are permitted in the input query. Additionally, we show that finding the size of a minimal repair for unions of conjunctive queries (with negated atoms permitted) is OptP[log(n)]-complete, while computing a minimal repair is possible with O($n^2$) queries to an NP oracle. With recursion permitted, the combined complexity of all of these variants increases significantly, with an EXP lower bound. However, from the data complexity perspective, we show that minimal repairs can be identified in polynomial time for all queries expressible as semi-positive datalog programs.
|
https://arxiv.org/abs/2601.06764
|
Academic Papers
|
svg
|
2c08f73bcdfea1c0527c23b6c301dbecfed29ae8a2840e6dd5774863411aacf0
|
2026-01-13T00:00:00-05:00
|
Control and Stability of a Multilevel Power System for a Future Distribution Network
|
arXiv:2601.06766v1 Announce Type: new Abstract: The growing integration of renewable energy sources into distribution networks poses significant challenges to frequency and voltage stability due to their intermittent nature and low-inertia dynamics. This paper proposes a multilevel control framework for a future decarbonized power system, using energy storage systems as power buffers to mitigate frequency and voltage fluctuations. A nonlinear interconnected model is formulated to characterize the complex dynamics across multiple levels of the distribution network. To reduce operational complexity and communication overhead of these dynamics, a distributed linear quadratic regulator control strategy is developed for information exchange in a bottom-up approach, where each level implements local feedback control within a short time horizon. Stability conditions for both open-loop and closed-loop systems are established using Lyapunov-based analysis. In addition, explicit performance bounds are derived to quantify the optimality gap between the proposed distributed strategy and the centralized control method, demonstrating the effectiveness of the proposed framework.
|
https://arxiv.org/abs/2601.06766
|
Academic Papers
|
svg
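The abstract does not give the distributed controller's equations, but the centralized LQR baseline it is compared against can be sketched with a standard discrete-time Riccati fixed-point iteration. The system matrices below are an illustrative toy (a discretized double integrator), not the paper's network model:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati equation to a fixed point and
    return the LQR feedback gain K for the control law u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy single level of the network: a double integrator with step dt.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))
# Closed-loop spectral radius below 1 confirms local stabilization.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

A distributed variant would solve such a problem per level with only neighbor information exchanged, which is where the paper's optimality-gap bounds apply.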
|
5d37a587925b0d479a186a9c3dacae57a9ac0086c13c556420b3988b3718bc44
|
2026-01-13T00:00:00-05:00
|
GanitLLM: Difficulty-Aware Bengali Mathematical Reasoning through Curriculum-GRPO
|
arXiv:2601.06767v1 Announce Type: new Abstract: We present a Bengali mathematical reasoning model called GanitLLM (named after the Bangla word for mathematics, "Ganit"), together with a new difficulty-aware Bengali math corpus and a curriculum-based GRPO pipeline. Bengali is one of the world's most widely spoken languages, yet existing LLMs either reason in English and then translate, or simply fail on multi-step Bengali math, in part because reinforcement learning recipes are tuned for high-resource languages and collapse under reward sparsity in low-resource settings. To address this, we construct Ganit, a rigorously filtered and decontaminated Bengali math dataset with automatic difficulty tags derived from the pass@k of a strong evaluator model. Building on this dataset, we propose Curriculum-GRPO, which combines multi-stage training (SFT + GRPO) with difficulty-aware sampling and verifiable rewards for format, numerical correctness, and Bengali reasoning. On Bn-MGSM and Bn-MSVAMP, GanitLLM-4B improves over its Qwen3-4B base by +8 and +7 accuracy points, respectively, while increasing the percentage of Bengali reasoning tokens from 14% to over 88% and reducing average solution length from 943 to 193 words.
|
https://arxiv.org/abs/2601.06767
|
Academic Papers
|
svg
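Difficulty-aware curriculum sampling of the kind the abstract describes (difficulty tags derived from an evaluator's pass@k, with stage-dependent sampling) might look as follows; the thresholds and stage weights here are invented for illustration:

```python
import random

def difficulty_tag(pass_at_k):
    """Bucket a problem by an evaluator's pass@k, as the abstract describes:
    high pass@k means easy, low means hard (thresholds are illustrative)."""
    if pass_at_k >= 0.7:
        return "easy"
    if pass_at_k >= 0.3:
        return "medium"
    return "hard"

def curriculum_sample(dataset, stage, n, seed=0):
    """Stage-dependent sampling: early stages favor easy items, later stages
    shift probability mass toward hard ones (a generic curriculum sketch)."""
    weights = {
        0: {"easy": 0.7, "medium": 0.2, "hard": 0.1},
        1: {"easy": 0.2, "medium": 0.5, "hard": 0.3},
        2: {"easy": 0.1, "medium": 0.3, "hard": 0.6},
    }[stage]
    rng = random.Random(seed)
    w = [weights[difficulty_tag(p)] for _, p in dataset]
    return rng.choices(dataset, weights=w, k=n)

data = [(f"q{i}", i / 10) for i in range(10)]   # (problem, pass@k) pairs
batch = curriculum_sample(data, stage=0, n=4)
```

The point of difficulty-aware sampling in GRPO-style training is to keep rewards from being uniformly sparse: batches always contain problems the current model can sometimes solve.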
|
4199abb8d39845962521bd25cdccdea459aabdca0981698a3f543883a64bb23b
|
2026-01-13T00:00:00-05:00
|
ALFA: A Safe-by-Design Approach to Mitigate Quishing Attacks Launched via Fancy QR Codes
|
arXiv:2601.06768v1 Announce Type: new Abstract: Phishing with Quick Response (QR) codes is termed Quishing. Attackers exploit this method to manipulate individuals into revealing their confidential data. Recently, colorful and fancy representations of QR codes have appeared whose 2D matrices no longer reflect the typical mixture of black-and-white modules. These codes become a more tempting attack vector for adversaries, since they can evade state-of-the-art deep learning visual-based and other prevailing countermeasures. We introduce "ALFA", a safe-by-design approach to mitigate Quishing and prevent users from accessing the post-scan harmful payload of fancy QR codes. Our method first converts a fancy QR code into a binary-grid replica and then identifies the erroneously represented modules in that grid. Following that, we present the "FAST" method, which can conveniently recover erroneous modules in the binary grid. Afterwards, using this binary grid, our solution extracts the structural features of the fancy QR code and predicts its legitimacy using a pre-trained model. The effectiveness of our proposal is demonstrated by experimental evaluation on a synthetic dataset (containing diverse variations of fancy QR codes), achieving an FNR of only 0.06%. We also develop a mobile app to test the practical feasibility of our solution and provide a performance comparison of the app with real-world QR readers. This comparison further highlights the classification reliability and detection accuracy of our solution in real-world environments.
|
https://arxiv.org/abs/2601.06768
|
Academic Papers
|
svg
|
d8f00ba482b6445d986e2ac9154207e32bf1d6257fb8d98fb2f57477e08849ef
|
2026-01-13T00:00:00-05:00
|
Structure-preserving learning and prediction in optimal control of collective motion
|
arXiv:2601.06770v1 Announce Type: new Abstract: Widespread adoption of unmanned vehicle technologies requires the ability to predict the motion of the combined vehicle operation from observations. While the general prediction of such motion for an arbitrary control mechanism is difficult, for a particular choice of control, the dynamics reduces to the Lie-Poisson equations [33,34]. Our goal is to learn the phase-space dynamics and predict the motion solely from observations, without any knowledge of the control Hamiltonian or the nature of interaction between vehicles. To achieve that goal, we propose the Control Optimal Lie-Poisson Neural Networks (CO-LPNets) for learning and predicting the dynamics of the system from data. Our methods learn the mapping of the phase space through the composition of Poisson maps, which are obtained as flows from Hamiltonians that can be integrated explicitly. CO-LPNets preserve the Poisson bracket and thus preserve Casimirs to machine precision. We discuss the completeness of the derived neural networks and their efficiency in approximating the dynamics. To illustrate the power of the method, we apply these techniques to systems of $N=3$ particles evolving on the ${\rm SO}(3)$ group, which describe coupled rigid bodies rotating about their center of mass, and on the ${\rm SE}(3)$ group, applicable to the movement of unmanned air and water vehicles. Numerical results demonstrate that CO-LPNets learn the dynamics in phase space from data points and reproduce trajectories with good accuracy over hundreds of time steps. The method uses a limited number of points ($\sim200$/dimension) and parameters ($\sim 1000$ in our case), demonstrating potential for practical applications and edge deployment.
|
https://arxiv.org/abs/2601.06770
|
Academic Papers
|
svg
|
dca89e68fcf5c2ab7bb2d801e808d4a6210f13773eb148e755681055e156f79b
|
2026-01-13T00:00:00-05:00
|
Heterogeneous Interaction Network Analysis (HINA): A New Learning Analytics Approach for Modelling, Analyzing, and Visualizing Complex Interactions in Learning Processes
|
arXiv:2601.06771v1 Announce Type: new Abstract: Existing learning analytics approaches, which often model learning processes as sequences of learner actions or homogeneous relationships, are limited in capturing the distributed, multi-faceted nature of interactions in contemporary learning environments. To address this, we propose Heterogeneous Interaction Network Analysis (HINA), a novel multi-level learning analytics framework for modeling complex learning processes across diverse entities (e.g., learners, behaviours, AI agents, and task designs). HINA integrates a set of original methods, including summative measures and a new non-parametric clustering technique, with established practices for statistical testing and interactive visualization to provide a flexible and powerful analytical toolkit. In this paper, we first detail the theoretical and mathematical foundations of HINA for individual, dyadic, and meso-level analysis. We then demonstrate HINA's utility through a case study on AI-mediated small-group collaborative learning, revealing students' interaction profiles with peers versus AI; distinct engagement patterns that emerge from these interactions; and specific types of learning behaviors (e.g., asking questions, planning) directed to AI versus peers. By transforming process data into Heterogeneous Interaction Networks (HINs), HINA introduces a new paradigm for modeling learning processes and provides the dedicated, multi-level analytical methods required to extract meaning from them. It thereby moves beyond a single process data type to quantify and visualize how different elements in a learning environment interact and co-influence each other, opening new avenues for understanding complex educational dynamics.
|
https://arxiv.org/abs/2601.06771
|
Academic Papers
|
svg
|
9cbe90a1bf09e59383a17833b6c888fa38001b51194bbe8782a28a073c48fffe
|
2026-01-13T00:00:00-05:00
|
The optimal error analysis of nonuniform L1 method for the variable-exponent subdiffusion model
|
arXiv:2601.06773v1 Announce Type: new Abstract: This work investigates the optimal error estimate of the fully discrete scheme for the variable-exponent subdiffusion model under a nonuniform temporal mesh. We apply the perturbation method to reformulate the original model into its equivalent form, and apply the L1 scheme as well as the interpolation quadrature rule to discretize the Caputo derivative term and the convolution term in the reformulated model, respectively. We then prove the temporal convergence rates $O(N^{-\min\{2-\alpha(0), r\alpha(0)\}})$ under the nonuniform mesh, which improves the existing convergence results in [Zheng, CSIAM T. Appl. Math. 2025] for $r\geq \frac{2-\alpha(0)}{\alpha(0)}$. Numerical results are presented to substantiate the theoretical findings.
|
https://arxiv.org/abs/2601.06773
|
Academic Papers
|
svg
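The rate statement can be made concrete with a standard graded mesh. This sketch computes the mesh and the predicted exponent, matching the abstract's claim that the optimal rate $2-\alpha(0)$ is reached once $r \geq (2-\alpha(0))/\alpha(0)$ (the mesh form $t_k = T(k/N)^r$ is the usual graded choice, assumed here for illustration):

```python
def graded_mesh(T, N, r):
    """Nonuniform graded mesh t_k = T*(k/N)**r; larger grading exponent r
    clusters points near t=0 to resolve the weak initial singularity."""
    return [T * (k / N) ** r for k in range(N + 1)]

def predicted_rate(alpha0, r):
    """Temporal rate exponent min(2 - alpha(0), r*alpha(0)) from the
    paper's estimate O(N^{-min{2-alpha(0), r*alpha(0)}})."""
    return min(2 - alpha0, r * alpha0)

alpha0 = 0.5
# The optimal exponent 2 - alpha(0) is attained for r >= (2-alpha(0))/alpha(0).
r_opt = (2 - alpha0) / alpha0            # = 3.0 for alpha(0) = 0.5
rate = predicted_rate(alpha0, r_opt)     # = 1.5 = 2 - alpha(0)
mesh = graded_mesh(1.0, 4, 2)
```

For $r$ below the threshold the rate degrades to $r\alpha(0)$, which is why the grading exponent matters in practice.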
|
42852e2f128640abebecfbffdcdd2f3a92d085de92aec747d0e5e77efbc63bd5
|
2026-01-13T00:00:00-05:00
|
ImmuniFraug: A Metacognitive Intervention Anti-Fraud Approach to Enhance Undergraduate Students' Cyber Fraud Awareness
|
arXiv:2601.06774v1 Announce Type: new Abstract: Cyber fraud now constitutes over half of criminal cases in China, with undergraduate students experiencing a disproportionate rise in victimization. Traditional anti-fraud training remains predominantly passive, yielding limited engagement and retention. This paper introduces ImmuniFraug, a Large Language Model (LLM)-based metacognitive intervention that delivers immersive, multimodal fraud simulations integrating text, voice, and visual avatars across ten prevalent fraud types. Each scenario is designed to replicate real-world persuasion tactics and psychological pressure, while post-interaction debriefs provide feedback grounded in protection motivation theory and reflective prompts to reinforce learning. In a controlled study with 846 Chinese undergraduates, ImmuniFraug was compared to official text-based materials. Linear Mixed-Effects Modeling (LMEM) reveals that the interactive intervention significantly improved fraud awareness (p = 0.026), successfully providing incremental learning value even when controlling for participants' extensive prior exposure to anti-fraud education, alongside high narrative immersion (M = 56.95/77). Thematic analysis of interviews revealed key effectiveness factors: perceived realism, adaptive deception, enforced time pressure, emotional manipulation awareness, and enhanced self-efficacy. Findings demonstrate that by shifting the focus from passive knowledge acquisition to active metacognitive engagement, LLM-based simulations offer a scalable and ecologically valid new paradigm for anti-fraud training and fostering fraud resilience.
|
https://arxiv.org/abs/2601.06774
|
Academic Papers
|
svg
|
ddd296778f96a93809632c99cfc71a4813282cb510540c8802523cec91a57096
|
2026-01-13T00:00:00-05:00
|
From Text to Simulation: A Multi-Agent LLM Workflow for Automated Chemical Process Design
|
arXiv:2601.06776v1 Announce Type: new Abstract: Process simulation is a critical cornerstone of chemical engineering design. Current automated chemical design methodologies focus mainly on various representations of process flow diagrams. However, transforming these diagrams into executable simulation flowsheets remains a time-consuming and labor-intensive endeavor, requiring extensive manual parameter configuration within simulation software. In this work, we propose a novel multi-agent workflow that leverages the semantic understanding capabilities of large language models (LLMs) and enables iterative interactions with chemical process simulation software, achieving end-to-end automated simulation from textual process specifications to computationally validated software configurations for design enhancement. Our approach integrates four specialized agents responsible for task understanding, topology generation, parameter configuration, and evaluation analysis, respectively, coupled with Enhanced Monte Carlo Tree Search to accurately interpret semantics and robustly generate configurations. Evaluated on Simona, a large-scale process description dataset, our method achieves a 31.1% improvement in the simulation convergence rate compared to state-of-the-art baselines and reduces the design time by 89.0% compared to expert manual design. This work demonstrates the potential of AI-assisted chemical process design, which bridges the gap between conceptual design and practical implementation. Our workflow is applicable to diverse process-oriented industries, including pharmaceuticals, petrochemicals, food processing, and manufacturing, offering a generalizable solution for automated process design.
|
https://arxiv.org/abs/2601.06776
|
Academic Papers
|
svg
|
59108c5c6dbea2171e52c38c00553c09b9dc8a146f2e5115219cf5101c51e0e3
|
2026-01-13T00:00:00-05:00
|
The Normalized Difference Layer: A Differentiable Spectral Index Formulation for Deep Learning
|
arXiv:2601.06777v1 Announce Type: new Abstract: Normalized difference indices have been a staple in remote sensing for decades. They stay reliable under lighting changes, produce bounded values, and connect well to biophysical signals. Even so, they are usually treated as a fixed preprocessing step with coefficients set to one, which limits how well they can adapt to a specific learning task. In this study, we introduce the Normalized Difference Layer, a differentiable neural network module. The proposed method keeps the classical idea but learns the band coefficients from data. We present a complete mathematical framework for integrating this layer into deep learning architectures, using softplus reparameterization to ensure positive coefficients and bounded denominators. We describe forward and backward pass algorithms enabling end-to-end training through backpropagation. This approach preserves the key benefits of normalized differences, namely illumination invariance and outputs bounded to $[-1,1]$, while allowing gradient descent to discover task-specific band weightings. We extend the method to work with signed inputs, so the layer can be stacked inside larger architectures. Experiments show that models using this layer reach similar classification accuracy to standard multilayer perceptrons while using about 75% fewer parameters. They also handle multiplicative noise well: at 10% noise, accuracy drops only 0.17% versus 3.03% for baseline MLPs. The learned coefficient patterns stay consistent across different depths.
|
https://arxiv.org/abs/2601.06777
|
Academic Papers
|
svg
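The core construction (softplus-reparameterized band coefficients keeping the output in $[-1,1]$) can be sketched as a forward pass. This illustrates the idea in the abstract, not the authors' layer; the function names and shapes are assumptions:

```python
import numpy as np

def softplus(z):
    """Numerically stable softplus, guaranteeing strictly positive outputs."""
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0)

def nd_layer(x, theta_a, theta_b, eps=1e-6):
    """Normalized difference with learnable positive coefficients:
    out = (a.x - b.x) / (a.x + b.x + eps), with a = softplus(theta_a),
    b = softplus(theta_b). For nonnegative inputs and positive a, b the
    denominator stays bounded away from zero and the output lies in [-1, 1].
    (A sketch of the abstract's idea, not the authors' exact code.)"""
    a, b = softplus(theta_a), softplus(theta_b)
    num = x @ a - x @ b
    den = x @ a + x @ b + eps
    return num / den

rng = np.random.default_rng(0)
x = rng.random((5, 4))                 # nonnegative "bands", e.g. reflectances
out = nd_layer(x, rng.normal(size=4), rng.normal(size=4))
```

Setting both coefficient vectors to one-hot unit vectors recovers a classical index such as NDVI, which is why the layer is a strict generalization of the fixed preprocessing step.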
|
82ae3c8c19d98a4e185c2b0b7bb6adf83265a2863b90de604725abe0d8c7adc8
|
2026-01-13T00:00:00-05:00
|
CyberLLM-FINDS 2025: Instruction-Tuned Fine-tuning of Domain-Specific LLMs with Retrieval-Augmented Generation and Graph Integration for MITRE Evaluation
|
arXiv:2601.06779v1 Announce Type: new Abstract: Large Language Models (LLMs) such as Gemma-2B have shown strong performance in various natural language processing tasks. However, general-purpose models often lack the domain expertise required for cybersecurity applications. This work presents a methodology to fine-tune the Gemma-2B model into a domain-specific cybersecurity LLM. We detail the processes of dataset preparation, fine-tuning, and synthetic data generation, along with implications for real-world applications in threat detection, forensic investigation, and attack analysis. Experiments highlight challenges in prompt length distribution during domain-specific fine-tuning. Uneven prompt lengths limit the model's effective use of the context window, constraining local inference to 200-400 tokens despite hardware support for longer sequences. Chain-of-thought styled prompts, paired with quantized weights, yielded the best performance under these constraints. To address context limitations, we employed a hybrid strategy using cloud LLMs for synthetic data generation and local fine-tuning for deployment efficiency. To extend the evaluation, we introduce a Retrieval-Augmented Generation (RAG) pipeline and graph-based reasoning framework. This approach enables structured alignment with MITRE ATT&CK techniques through STIX-based threat intelligence, enhancing recall in multi-hop and long-context scenarios. Graph modules encode entity-neighborhood context and tactic chains, helping mitigate the constraints of short prompt windows. Results demonstrate improved model alignment with tactic, technique, and procedure (TTP) coverage, validating the utility of graph-augmented LLMs in cybersecurity threat intelligence applications.
|
https://arxiv.org/abs/2601.06779
|
Academic Papers
|
svg
|
157dda4b5b633dd4f5d6e6be26b2aa00e5db38ec4e18f2375ae6aaa6d1d350bb
|
2026-01-13T00:00:00-05:00
|
Multi-Stage Evolutionary Model Merging with Meta Data Driven Curriculum Learning for Sentiment-Specialized Large Language Modeling
|
arXiv:2601.06780v1 Announce Type: new Abstract: The emergence of large language models (LLMs) has significantly transformed natural language processing (NLP), enabling more generalized models to perform various tasks with minimal training. However, traditional sentiment analysis methods, which focus on individual tasks such as sentiment classification or aspect-based analysis, are not practical for real-world applications that usually require handling multiple tasks. While offering flexibility, LLMs in sentiment-specific tasks often fall short of the required accuracy. Techniques like fine-tuning and evolutionary model merging help integrate models into a unified framework, which can improve the learning performance while reducing computational costs. The use of task meta-data and curriculum learning to optimize learning processes remains underexplored, while sentiment analysis is a critical task in NLP that requires high accuracy and scalability across multiple subtasks. In this study, we propose a hybrid learning model called Multi-stage Evolutionary Model Merging with Meta data driven Curriculum Learning (MEM-MCL), to enhance the sentiment analysis in large language modeling. In particular, expert models are created through instruction tuning for specific sentiment tasks and then merged using evolutionary algorithms to form a unified model. The merging process is optimized with weak data to enhance performance across tasks. The curriculum learning is incorporated to provide a learning sequence based on task difficulty, improving knowledge extraction from LLMs. Experiment results demonstrate that the proposed MEM-MCL model outperforms conventional LLMs in a majority of sentiment analysis tasks, achieving superior results across various subtasks.
|
https://arxiv.org/abs/2601.06780
|
Academic Papers
|
svg
|
391206ed06f2d2e4f351549047415994277085cd41b464df64e803f18bae96f6
|
2026-01-13T00:00:00-05:00
|
AutoTour: Automatic Photo Tour Guide with Smartphones and LLMs
|
arXiv:2601.06781v1 Announce Type: new Abstract: We present AutoTour, a system that enhances user exploration by automatically generating fine-grained landmark annotations and descriptive narratives for photos captured by users. The key idea of AutoTour is to fuse visual features extracted from photos with nearby geospatial features queried from open matching databases. Unlike existing tour applications that rely on pre-defined content or proprietary datasets, AutoTour leverages open and extensible data sources to provide scalable and context-aware photo-based guidance. To achieve this, we design a training-free pipeline that first extracts and filters relevant geospatial features around the user's GPS location. It then detects major landmarks in user photos through VLM-based feature detection and projects them into the horizontal spatial plane. A geometric matching algorithm aligns photo features with corresponding geospatial entities based on their estimated distance and direction. The matched features are subsequently grounded and annotated directly on the original photo, accompanied by large language model-generated textual and audio descriptions to provide an informative, tour-like experience. We demonstrate that AutoTour can deliver rich, interpretable annotations for both iconic and lesser-known landmarks, enabling a new form of interactive, context-aware exploration that bridges visual perception and geospatial understanding.
|
https://arxiv.org/abs/2601.06781
|
Academic Papers
|
svg
|
12665a04eb29916831f1b39ef1ce0b002330caa6936180f14b4aa7610ed4e15e
|
2026-01-13T00:00:00-05:00
|
EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs
|
arXiv:2601.06786v1 Announce Type: new Abstract: Improving the reasoning abilities of large language models (LLMs) has largely relied on iterative self-training with model-generated data. While effective at boosting accuracy, existing approaches primarily reinforce successful reasoning paths, incurring a substantial calibration cost: models become overconfident and lose the ability to represent uncertainty. This failure has been characterized as a form of model collapse in alignment, where predictive distributions degenerate toward low-variance point estimates. We address this issue by reframing reasoning training as an epistemic learning problem, in which models must learn not only how to reason, but also when their reasoning should be trusted. We propose epistemically-calibrated reasoning (EpiCaR) as a training objective that jointly optimizes reasoning performance and calibration, and instantiate it within an iterative supervised fine-tuning framework using explicit self-evaluation signals. Experiments on Llama-3 and Qwen-3 families demonstrate that our approach achieves Pareto-superiority over standard baselines in both accuracy and calibration, particularly in models with sufficient reasoning capacity (e.g., 3B+). This framework generalizes effectively to OOD mathematical reasoning (GSM8K) and code generation (MBPP). Ultimately, our approach enables a 3X reduction in inference compute, matching the K=30 performance of STaR with only K=10 samples in capable models.
|
https://arxiv.org/abs/2601.06786
|
Academic Papers
|
svg
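Calibration in this setting is commonly measured with expected calibration error (ECE). The abstract does not specify the paper's metric, so this sketch shows the standard binned ECE, which makes "overconfident" precise:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by stated confidence and average the
    |confidence - accuracy| gap, weighted by bin size. An overconfident
    model (high confidence, low accuracy) scores a large ECE."""
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)
        bins[idx].append((c, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if b:
            conf = sum(c for c, _ in b) / len(b)
            acc = sum(ok for _, ok in b) / len(b)
            ece += len(b) / n * abs(conf - acc)
    return ece

# A model that is right half the time but always 90% confident:
ece = expected_calibration_error([0.9] * 10, [1, 0] * 5)
```

The collapse the abstract describes corresponds to confidence piling up near 1.0 while accuracy lags, driving ECE up even as raw accuracy improves.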
|
d399da91534a5ea6f38254e73b6afa4864e8a4a3b9f6f8de796555087bd4e765
|
2026-01-13T00:00:00-05:00
|
Garbage Attention in Large Language Models: BOS Sink Heads and Sink-aware Pruning
|
arXiv:2601.06787v1 Announce Type: new Abstract: Large Language Models (LLMs) are known to contain significant redundancy, yet a systematic explanation for why certain components, particularly in higher layers, are more redundant has remained elusive. In this work, we identify the BOS sink phenomenon as a key mechanism driving this layer-wise sensitivity. We show that attention heads with high BOS sink scores are strongly associated with functional redundancy: such heads, especially in deeper layers, contribute little to predictive performance and effectively serve as "dumping grounds" for superfluous attention weights. This provides a concrete functional explanation for the structural redundancy reported in prior studies. Leveraging this insight, we introduce a simple pruning strategy that removes high-BOS sink heads. Experiments on Gemma-3, Llama-3.1, and Qwen3 demonstrate that this approach identifies redundant transformer components more reliably than weight- or activation-based criteria, while preserving performance close to dense baselines even under aggressive pruning. Moreover, we find that the behavior of sink heads remains stable across different sequence lengths. Overall, our results suggest that structural properties of attention offer a more intuitive and robust basis for model compression than magnitude-based methods.
|
https://arxiv.org/abs/2601.06787
|
Academic Papers
|
svg
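A BOS sink score of the kind described (a head's average attention mass on the first token) and a threshold-based keep mask can be sketched as follows; the threshold value and tensor layout are illustrative assumptions, not the paper's recipe:

```python
import numpy as np

def bos_sink_scores(attn):
    """attn: [heads, query_len, key_len] attention probabilities.
    Score each head by its average attention mass on the first (BOS)
    token; heads that dump most mass there are candidate sinks."""
    return attn[:, :, 0].mean(axis=1)

def sink_prune_mask(attn, threshold=0.5):
    """Keep heads whose BOS sink score is below the threshold
    (a sketch of the selection criterion, not the full pruning pipeline)."""
    return bos_sink_scores(attn) < threshold

rng = np.random.default_rng(0)
attn = rng.dirichlet(np.ones(6), size=(4, 5))   # 4 heads, 5 queries, 6 keys
attn[0, :, :] = 0.0
attn[0, :, 0] = 1.0                             # head 0: pure BOS sink
mask = sink_prune_mask(attn)                    # head 0 is flagged for removal
```

Because each attention row sums to one, a head whose rows concentrate on the BOS position necessarily attends to little else, which is the intuition behind treating such heads as prunable.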
|
46dd3cabb9d6aa4ef89aadd10927405a8433b23200c5a87c78fa288d235c8a4f
|
2026-01-13T00:00:00-05:00
|
Artificial Entanglement in the Fine-Tuning of Large Language Models
|
arXiv:2601.06788v1 Announce Type: new Abstract: Large language models (LLMs) can be adapted to new tasks using parameter-efficient fine-tuning (PEFT) methods that modify only a small number of trainable parameters, often through low-rank updates. In this work, we adopt a quantum-information-inspired perspective to understand their effectiveness. From this perspective, low-rank parameterizations naturally correspond to low-dimensional Matrix Product States (MPS) representations, which enable entanglement-based characterizations of parameter structure. Thereby, we term and measure "Artificial Entanglement", defined as the entanglement entropy of the parameters in artificial neural networks (in particular the LLMs). We first study the representative low-rank adaptation (LoRA) PEFT method, alongside full fine-tuning (FFT), using LLaMA models at the 1B and 8B scales trained on the Tulu3 and OpenThoughts3 datasets, and uncover: (i) Internal artificial entanglement in the updates of query and value projection matrices in LoRA follows a volume law with a central suppression (termed as the "Entanglement Valley"), which is sensitive to hyper-parameters and is distinct from that in FFT; (ii) External artificial entanglement in attention matrices, corresponding to token-token correlations in representation space, follows an area law with logarithmic corrections and remains robust to LoRA hyper-parameters and training steps. Drawing a parallel to the No-Hair Theorem in black hole physics, we propose that although LoRA and FFT induce distinct internal entanglement signatures, such differences do not manifest in the attention outputs, suggesting a "no-hair" property that results in the effectiveness of low rank updates. We further provide theoretical support based on random matrix theory, and extend our analysis to an MPS Adaptation PEFT method, which exhibits qualitatively similar behaviors.
|
https://arxiv.org/abs/2601.06788
|
Academic Papers
|
svg
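One standard way to compute an entanglement entropy for a weight matrix, reading its normalized singular-value spectrum as a probability distribution over a bipartite split, is sketched below. The paper's MPS-based definition may differ in detail; this shows only the generic quantity:

```python
import numpy as np

def entanglement_entropy(W):
    """Von Neumann entropy of the normalized singular-value spectrum:
    p_i = s_i**2 / sum(s**2), S = -sum(p_i * log(p_i)). A rank-1 matrix
    (a 'product state') has zero entropy; a flat spectrum is maximal."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rank1 = np.outer(np.arange(1, 4), np.arange(1, 5)).astype(float)
ent_low = entanglement_entropy(rank1)        # rank-1: (near) zero entropy
ent_high = entanglement_entropy(np.eye(4))   # flat spectrum: log(4)
```

Under this reading, a LoRA update of rank r has entropy at most log(r), so low-rank adaptation directly caps the internal entanglement the abstract studies.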
|
2290bd7bf7a17e355227a717ecf061c19ce802101907e532f3d7989258a3a290
|
2026-01-13T00:00:00-05:00
|
MemGovern: Enhancing Code Agents through Learning from Governed Human Experiences
|
arXiv:2601.06789v1 Announce Type: new Abstract: While autonomous software engineering (SWE) agents are reshaping programming paradigms, they currently suffer from a "closed-world" limitation: they attempt to fix bugs from scratch or solely using local context, ignoring the immense historical human experience available on platforms like GitHub. Accessing this open-world experience is hindered by the unstructured and fragmented nature of real-world issue-tracking data. In this paper, we introduce MemGovern, a framework designed to govern and transform raw GitHub data into actionable experiential memory for agents. MemGovern employs experience governance to convert human experience into agent-friendly experience cards and introduces an agentic experience search strategy that enables logic-driven retrieval of human expertise. By producing 135K governed experience cards, MemGovern achieves a significant performance boost, improving resolution rates on the SWE-bench Verified by 4.65%. As a plug-in approach, MemGovern provides a solution for agent-friendly memory infrastructure.
|
https://arxiv.org/abs/2601.06789
|
Academic Papers
|
svg
|
4e79eeb903ab831709b91c67dcc423c9fb59fe4424d4ca889b651891d028debe
|
2026-01-13T00:00:00-05:00
|
SecMoE: Communication-Efficient Secure MoE Inference via Select-Then-Compute
|
arXiv:2601.06790v1 Announce Type: new Abstract: Privacy-preserving Transformer inference has gained attention due to the potential leakage of private information. Despite recent progress, existing frameworks still fall short of practical model scales, with gaps up to a hundredfold. A possible way to close this gap is the Mixture of Experts (MoE) architecture, which has emerged as a promising technique to scale up model capacity with minimal overhead. However, given that the current secure two-party (2-PC) protocols allow the server to homomorphically compute the FFN layer with its plaintext model weight, under the MoE setting, this could reveal which expert is activated to the server, exposing token-level privacy about the client's input. While naively evaluating all the experts before selection could protect privacy, it nullifies MoE sparsity and incurs the heavy computational overhead that sparse MoE seeks to avoid. To address the privacy and efficiency limitations above, we propose a 2-PC privacy-preserving inference framework, SecMoE. Unifying per-entry circuits in both the MoE layer and piecewise polynomial functions, SecMoE obliviously selects the extracted parameters from circuits and only computes one encrypted entry, which we refer to as Select-Then-Compute. This makes the model for private inference scale to 63$\times$ larger while only having a 15.2$\times$ increase in end-to-end runtime. Extensive experiments show that, under 5 expert settings, SecMoE lowers the end-to-end private inference communication by 1.8$\sim$7.1$\times$ and achieves 1.3$\sim$3.8$\times$ speedup compared to the state-of-the-art (SOTA) protocols.
|
https://arxiv.org/abs/2601.06790
|
Academic Papers
|
svg
|
db25efa6791d9259c723af78066518d8931626422f8f5a67be79f572d75f79b7
|
2026-01-13T00:00:00-05:00
|
Cross-Modal Computational Model of Brain-Heart Interactions via HRV and EEG Feature
|
arXiv:2601.06792v1 Announce Type: new Abstract: The electroencephalogram (EEG) has been the gold standard for quantifying mental workload; however, its complexity and non-portability can be constraining. Electrocardiogram (ECG) signals, which are feasible on wearable devices such as headbands, present a promising alternative for cognitive state monitoring. This study investigates whether ECG-derived features can serve as surrogate indicators of cognitive load, a concept traditionally quantified using EEG. Using a publicly available multimodal dataset (OpenNeuro) of EEG and ECG recorded during working-memory and listening tasks, HRV features and Catch22 descriptors are extracted from ECG, and spectral band-power with Catch22 features from EEG. A cross-modal regression framework based on XGBoost was trained to map ECG-derived HRV representations to EEG-derived cognitive features. To address data sparsity and model brain-heart interactions, we integrated the PSV-SDG to produce EEG-conditioned synthetic HRV time series. This addresses the challenge of inferring cognitive load solely from ECG-derived features through a combination of multimodal learning, signal processing, and synthetic data generation. These outcomes form a basis for lightweight, interpretable machine learning models deployable on wearable biosensors in non-lab environments. Synthetic HRV inclusion enhances robustness, particularly in sparse-data situations. Overall, this work is a first step toward low-cost, explainable, real-time cognitive monitoring systems for mental health, education, and human-computer interaction, with a focus on ageing and clinical populations.
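The ECG side of the cross-modal mapping starts from HRV descriptors; RMSSD (root mean square of successive RR-interval differences) is one standard time-domain example of such a feature, sketched here independently of the paper's pipeline:

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD, a standard time-domain HRV feature: root mean square of
    successive differences of RR intervals (in ms). The kind of
    ECG-derived descriptor a cross-modal model could regress onto
    EEG-derived cognitive features."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Steady heart rhythm -> low RMSSD; variable rhythm -> high RMSSD.
steady = [800, 802, 799, 801, 800]
variable = [800, 720, 880, 700, 900]
assert rmssd(steady) < rmssd(variable)
# Hand check: diffs (10, -20) -> sqrt((100 + 400) / 2)
assert abs(rmssd([800, 810, 790]) - math.sqrt(250)) < 1e-9
```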
|
https://arxiv.org/abs/2601.06792
|
Academic Papers
|
svg
|
74c2f654e3a11dd5a89eed7af12419e04d1ae3486f6b0147709203e2c38fa361
|
2026-01-13T00:00:00-05:00
|
CliffordNet: All You Need is Geometric Algebra
|
arXiv:2601.06793v1 Announce Type: new Abstract: Modern computer vision architectures, from CNNs to Transformers, predominantly rely on the stacking of heuristic modules: spatial mixers (Attention/Conv) followed by channel mixers (FFNs). In this work, we challenge this paradigm by returning to mathematical first principles. We propose the \textbf{Clifford Algebra Network (CAN)}, also referred to as CliffordNet, a vision backbone grounded purely in Geometric Algebra. Instead of engineering separate modules for mixing and memory, we derive a unified interaction mechanism based on the \textbf{Clifford Geometric Product} ($uv = u \cdot v + u \wedge v$). This operation ensures algebraic completeness regarding the Geometric Product by simultaneously capturing feature coherence (via the generalized inner product) and structural variation (via the exterior wedge product). Implemented via an efficient sparse rolling mechanism with \textbf{strict linear complexity $\mathcal{O}(N)$}, our model reveals a surprising emergent property: the geometric interaction is so representationally dense that standard Feed-Forward Networks (FFNs) become redundant. Empirically, CliffordNet establishes a new Pareto frontier: our \textbf{Nano} variant achieves \textbf{76.41\%} accuracy on CIFAR-100 with only \textbf{1.4M} parameters, effectively matching the heavy-weight ResNet-18 (11.2M) with \textbf{$8\times$ fewer parameters}, while our \textbf{Base} variant sets a new SOTA for tiny models at \textbf{78.05\%}. Our results suggest that global understanding can emerge solely from rigorous, algebraically complete local interactions, potentially signaling a shift where \textit{geometry is all you need}. Code is available at https://github.com/ParaMind2025/CAN.
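The geometric product the abstract is built on, $uv = u \cdot v + u \wedge v$, can be made concrete for vectors in Cl(2,0), where it splits exactly into an inner (symmetric) and a wedge (antisymmetric) part. A minimal illustration, independent of the paper's network implementation:

```python
def geometric_product_2d(u, v):
    """Geometric product of two vectors u, v in Cl(2,0).

    Returns (scalar, bivector) = (u . v, u ^ v): the symmetric part is
    the inner product, the antisymmetric part the wedge (signed area,
    coefficient of e1^e2).
    """
    ux, uy = u
    vx, vy = v
    inner = ux * vx + uy * vy        # u . v
    wedge = ux * vy - uy * vx        # u ^ v
    return inner, wedge

# Orthogonal unit vectors: pure bivector, no scalar part
assert geometric_product_2d((1, 0), (0, 1)) == (0, 1)
# Parallel vectors: pure scalar, wedge vanishes
assert geometric_product_2d((2, 0), (3, 0)) == (6, 0)
# For vectors, |uv|^2 = |u|^2 |v|^2 (inner^2 + wedge^2)
s, b = geometric_product_2d((1, 2), (3, 4))
assert s ** 2 + b ** 2 == (1 + 4) * (9 + 16)
```

The "algebraic completeness" claim rests on exactly this decomposition: coherence (inner) and structural variation (wedge) are captured in one operation.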
|
https://arxiv.org/abs/2601.06793
|
Academic Papers
|
svg
|
86271eda70ee4f72dbd5dda7d1ee4f76e9414eaac71fd8bdcfd69ef10e02617c
|
2026-01-13T00:00:00-05:00
|
No More Stale Feedback: Co-Evolving Critics for Open-World Agent Learning
|
arXiv:2601.06794v1 Announce Type: new Abstract: Critique-guided reinforcement learning (RL) has emerged as a powerful paradigm for training LLM agents by augmenting sparse outcome rewards with natural-language feedback. However, current methods often rely on static or offline critic models, which fail to adapt as the policy evolves. In on-policy RL, the agent's error patterns shift over time, causing stationary critics to become stale and providing feedback of diminishing utility. To address this, we introduce ECHO (Evolving Critic for Hindsight-Guided Optimization), a framework that jointly optimizes the policy and critic through a synchronized co-evolutionary loop. ECHO utilizes a cascaded rollout mechanism where the critic generates multiple diagnoses for an initial trajectory, followed by policy refinement to enable group-structured advantage estimation. We address the challenge of learning plateaus via a saturation-aware gain shaping objective, which rewards the critic for inducing incremental improvements in high-performing trajectories. By employing dual-track GRPO updates, ECHO ensures the critic's feedback stays synchronized with the evolving policy. Experimental results show that ECHO yields more stable training and higher long-horizon task success across open-world environments.
|
https://arxiv.org/abs/2601.06794
|
Academic Papers
|
svg
|
6d7f6bdee6673f41549c65a4d7c4e4a7f6468d51169e8b50747b9474ff9a78d3
|
2026-01-13T00:00:00-05:00
|
GDEPO: Group Dual-dynamic and Equal-right-advantage Policy Optimization with Enhanced Training Data Utilization for Sample-Constrained Reinforcement Learning
|
arXiv:2601.06795v1 Announce Type: new Abstract: Automated Theorem Proving (ATP) represents a fundamental challenge in Artificial Intelligence (AI), requiring the construction of machine-verifiable proofs in formal languages such as Lean to evaluate AI reasoning capabilities. Reinforcement learning (RL), particularly the high-performance Group Relative Policy Optimization (GRPO) algorithm, has emerged as a mainstream approach for this task. However, in ATP scenarios, GRPO faces two critical issues: when composite rewards are used, its relative advantage estimation may conflict with the binary feedback from the formal verifier; meanwhile, its static sampling strategy may discard entire batches of data if no valid proof is found, resulting in zero contribution to model updates and significant data waste. To address these limitations, we propose Group Dual-dynamic and Equal-right-advantage Policy Optimization (GDEPO), a method incorporating three core mechanisms: 1) dynamic additional sampling, which resamples invalid batches until a valid proof is discovered; 2) equal-right advantage, decoupling the sign of the advantage function (based on correctness) from its magnitude (modulated by auxiliary rewards) to ensure stable and correct policy updates; and 3) dynamic additional iterations, applying extra gradient steps to initially failed but eventually successful samples to accelerate learning on challenging cases. Experiments conducted on three datasets of varying difficulty (MiniF2F-test, MathOlympiadBench, PutnamBench) confirm the effectiveness of GDEPO, while ablation studies validate the necessity of its synergistic components. The proposed method enhances data utilization and optimization efficiency, offering a novel training paradigm for ATP.
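The equal-right-advantage idea, decoupling the sign (from the binary verifier) and the magnitude (from auxiliary rewards), can be sketched in a few lines. The function name and the exact squashing of the auxiliary reward are illustrative assumptions:

```python
def equal_right_advantage(correct, aux_reward, eps=1e-8):
    """Sketch of the 'equal-right advantage' decoupling: the sign comes
    only from the binary verifier (+1 for a valid proof, -1 otherwise),
    while the auxiliary reward, clamped into (0, 1], modulates only the
    magnitude. Composite rewards can then never flip the direction of a
    policy update against the verifier's verdict."""
    sign = 1.0 if correct else -1.0
    magnitude = max(eps, min(1.0, aux_reward))
    return sign * magnitude

# A valid proof always gets a positive advantage ...
assert equal_right_advantage(True, 0.3) > 0
# ... and an invalid one stays negative, however high the auxiliary
# (e.g. formatting or length) reward is.
assert equal_right_advantage(False, 0.9) < 0
# The magnitude still reflects the auxiliary signal.
assert abs(equal_right_advantage(True, 0.9)) > abs(equal_right_advantage(True, 0.3))
```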
|
https://arxiv.org/abs/2601.06795
|
Academic Papers
|
svg
|
898c5107b8ed692ec10635bab066d2e8be65fb406d84f2ded1462ecdda29a6ef
|
2026-01-13T00:00:00-05:00
|
Unleashing the Native Recommendation Potential: LLM-Based Generative Recommendation via Structured Term Identifiers
|
arXiv:2601.06798v1 Announce Type: new Abstract: Leveraging the vast open-world knowledge and understanding capabilities of Large Language Models (LLMs) to develop general-purpose, semantically-aware recommender systems has emerged as a pivotal research direction in generative recommendation. However, existing methods face bottlenecks in constructing item identifiers. Text-based methods introduce LLMs' vast output space, leading to hallucination, while methods based on Semantic IDs (SIDs) encounter a semantic gap between SIDs and LLMs' native vocabulary, requiring costly vocabulary expansion and alignment training. To address this, this paper introduces Term IDs (TIDs), defined as a set of semantically rich and standardized textual keywords, to serve as robust item identifiers. We propose GRLM, a novel framework centered on TIDs, which employs Context-aware Term Generation to convert item metadata into standardized TIDs and utilizes Integrative Instruction Fine-tuning to collaboratively optimize term internalization and sequential recommendation. Additionally, Elastic Identifier Grounding is designed for robust item mapping. Extensive experiments on real-world datasets demonstrate that GRLM significantly outperforms baselines across multiple scenarios, pointing to a promising direction for generalizable and high-performance generative recommendation systems.
|
https://arxiv.org/abs/2601.06798
|
Academic Papers
|
svg
|
8c84b094bb473fd0ba3eafb302681d8fd5b155a365f1158038f1ba030570d9ac
|
2026-01-13T00:00:00-05:00
|
CIRAG: Construction-Integration Retrieval and Adaptive Generation for Multi-hop Question Answering
|
arXiv:2601.06799v1 Announce Type: new Abstract: Triple-based Iterative Retrieval-Augmented Generation (iRAG) mitigates document-level noise for multi-hop question answering. However, existing methods still face limitations: (i) greedy single-path expansion, which propagates early errors and fails to capture parallel evidence from different reasoning branches, and (ii) granularity-demand mismatch, where a single evidence representation struggles to balance noise control with contextual sufficiency. In this paper, we propose the Construction-Integration Retrieval and Adaptive Generation model, CIRAG. It introduces an Iterative Construction-Integration module that constructs candidate triples and history-conditionally integrates them to distill core triples and generate the next-hop query. This module mitigates the greedy trap by preserving multiple plausible evidence chains. Besides, we propose an Adaptive Cascaded Multi-Granularity Generation module that progressively expands contextual evidence based on the problem requirements, from triples to supporting sentences and full passages. Moreover, we introduce Trajectory Distillation, which distills the teacher model's integration policy into a lightweight student, enabling efficient and reliable long-horizon reasoning. Extensive experiments demonstrate that CIRAG achieves superior performance compared to existing iRAG methods.
|
https://arxiv.org/abs/2601.06799
|
Academic Papers
|
svg
|
36466e18118c48a52e54b8d6c4cb9ab6d747b5568bfcfb5fdee333c555f7747a
|
2026-01-13T00:00:00-05:00
|
Graph Neural Network with One-side Edge Sampling for Fraud Detection
|
arXiv:2601.06800v1 Announce Type: new Abstract: Financial fraud is always a major problem in the field of finance, as it can cause significant consequences. As a result, many approaches have been designed to detect it, and lately Graph Neural Networks (GNNs) have been demonstrated as a competent candidate. However, when trained with a large amount of data, they are slow and computationally demanding. In addition, GNNs may need a deep architecture to detect complex fraud patterns, but doing so may make them suffer from problems such as over-fitting or over-smoothing. Over-fitting leads to reduced generalisation of the model on unseen data, while over-smoothing causes all nodes' features to converge to a fixed point due to excessive aggregation of information from neighbouring nodes. In this research, I propose an approach called One-Side Edge Sampling (OES) that can potentially reduce training duration as well as the effects of over-smoothing and over-fitting. The approach leverages predictive confidence in an edge classification task to sample edges from the input graph during a certain number of epochs. To explain why OES can alleviate over-smoothing, I perform a theoretical analysis of the proposed approach. In addition, to validate the effect of OES, I conduct experiments using different GNNs on two datasets. The results show that OES can empirically outperform backbone models in both shallow and deep architectures while also reducing training time.
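The core of OES, sampling edges via predictive confidence from an edge classification task, can be sketched as follows. The threshold, keep probability, and one-sided drop rule here are illustrative assumptions, not the paper's exact procedure:

```python
import random

def one_side_edge_sample(edges, confidence, threshold=0.9, keep_prob=0.5, rng=None):
    """Confidence-driven edge sampling sketch: edges the classifier is
    already confident about are randomly dropped with probability
    1 - keep_prob, while uncertain edges are always kept. Thinning
    confident edges reduces the neighbourhood aggregation load that
    drives over-smoothing in deep GNNs."""
    rng = rng or random.Random(0)
    kept = []
    for e in edges:
        if confidence[e] >= threshold and rng.random() > keep_prob:
            continue                 # drop a confidently classified edge
        kept.append(e)
    return kept

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
conf = {(0, 1): 0.99, (1, 2): 0.2, (2, 3): 0.95, (3, 0): 0.5}
kept = one_side_edge_sample(edges, conf)
# Low-confidence edges always survive the sampling step.
assert (1, 2) in kept and (3, 0) in kept
assert set(kept) <= set(edges)
```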
|
https://arxiv.org/abs/2601.06800
|
Academic Papers
|
svg
|
5b55fae67e7744637018f667b17d19570c759c1b7e1b3f380813cb8b31e8d864
|
2026-01-13T00:00:00-05:00
|
Thinking with Deltas: Incentivizing Reinforcement Learning via Differential Visual Reasoning Policy
|
arXiv:2601.06801v1 Announce Type: new Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has significantly advanced reasoning capabilities in Large Language Models. However, adapting RLVR to multimodal domains suffers from a critical \textit{perception-reasoning decoupling}. Existing paradigms, driven by text-centric outcome rewards and reasoning in the language medium, inadvertently encourage models to bypass visual perception. We empirically validate this through blind experiments: state-of-the-art policies maintain, or surprisingly even improve, performance when visual inputs are entirely removed. This reveals that these models degenerate into \textit{blind reasoners}, exploiting linguistic priors to generate plausible answers instead of attending to visual evidence. In response, we propose \textbf{Thinking with Deltas}, a framework driven by a \textbf{Differential Visual Reasoning Policy (DVRP)}. DVRP introduces intrinsic supervision via visual triplets, comprising original, masked, and perturbed inputs. It optimizes the model to maximize reasoning divergence from masked inputs (enforcing \textit{visual sensitivity}) while minimizing divergence from perturbed inputs (ensuring \textit{visual robustness}). By aligning reasoning variations strictly with the \textit{Delta} of visual information, DVRP inherently bolsters visual understanding capabilities and significantly outperforms state-of-the-art methods on both general and medical benchmarks, without requiring external annotations or auxiliary tools.
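The triplet objective, reward divergence from masked inputs while penalizing divergence from perturbed ones, can be sketched with plain KL divergences over output distributions. The function names and loss coefficients are illustrative assumptions:

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def dvrp_loss(p_orig, p_masked, p_perturbed, alpha=1.0, beta=1.0):
    """Differential visual-reasoning objective sketch: reward divergence
    from the masked-input distribution (visual sensitivity) and penalize
    divergence from the perturbed-input distribution (visual robustness)."""
    return -alpha * kl(p_orig, p_masked) + beta * kl(p_orig, p_perturbed)

p_orig = [0.7, 0.2, 0.1]
p_masked = [0.34, 0.33, 0.33]   # a blind model drifts toward its prior
p_pert = [0.65, 0.25, 0.10]     # small, label-preserving perturbation
loss = dvrp_loss(p_orig, p_masked, p_pert)
# A visually grounded model diverges far more from the masked input
# than from the perturbed one, so the loss is negative here.
assert kl(p_orig, p_masked) > kl(p_orig, p_pert)
assert loss < 0
```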
|
https://arxiv.org/abs/2601.06801
|
Academic Papers
|
svg
|
9f468502cdd5a47fd2b0bbdd63b4fb74dbec75790c7ccc8840fb62c874fae12a
|
2026-01-13T00:00:00-05:00
|
Doing More with Less: Data Augmentation for Sudanese Dialect Automatic Speech Recognition
|
arXiv:2601.06802v1 Announce Type: new Abstract: Although many Automatic Speech Recognition (ASR) systems have been developed for Modern Standard Arabic (MSA) and Dialectal Arabic (DA), few studies have focused on dialect-specific implementations, particularly for low-resource Arabic dialects such as Sudanese. This paper presents a comprehensive study of data augmentation techniques for fine-tuning OpenAI Whisper models and establishes the first benchmark for the Sudanese dialect. Two augmentation strategies are investigated: (1) self-training with pseudo-labels generated from unlabeled speech, and (2) TTS-based augmentation using synthetic speech from the Klaam TTS system. The best-performing model, Whisper-Medium fine-tuned with combined self-training and TTS augmentation (28.4 hours), achieves a Word Error Rate (WER) of 57.1% on the evaluation set and 51.6% on an out-of-domain holdout set, substantially outperforming zero-shot multilingual Whisper (78.8% WER) and MSA-specialized Arabic models (73.8-123% WER). All experiments used low-cost resources (Kaggle free tier and Lightning.ai trial), demonstrating that strategic data augmentation can overcome resource limitations for low-resource dialects and provide a practical roadmap for developing ASR systems for low-resource Arabic dialects and other marginalized language varieties. The models, evaluation benchmarks, and reproducible training pipelines are publicly released to facilitate future research on low-resource Arabic ASR.
|
https://arxiv.org/abs/2601.06802
|
Academic Papers
|
svg
|
741ddf662c9dad86fcea87220d8340cee7919cb8069327f84716166069b0a671
|
2026-01-13T00:00:00-05:00
|
Forest Before Trees: Latent Superposition for Efficient Visual Reasoning
|
arXiv:2601.06803v1 Announce Type: new Abstract: While Chain-of-Thought empowers Large Vision-Language Models with multi-step reasoning, explicit textual rationales suffer from an information bandwidth bottleneck, where continuous visual details are discarded during discrete tokenization. Recent latent reasoning methods attempt to address this challenge, but often fall prey to premature semantic collapse due to rigid autoregressive objectives. In this paper, we propose Laser, a novel paradigm that reformulates visual deduction via Dynamic Windowed Alignment Learning (DWAL). Instead of forcing a point-wise prediction, Laser aligns the latent state with a dynamic validity window of future semantics. This mechanism enforces a "Forest-before-Trees" cognitive hierarchy, enabling the model to maintain a probabilistic superposition of global features before narrowing down to local details. Crucially, Laser maintains interpretability via decodable trajectories while stabilizing unconstrained learning via Self-Refined Superposition. Extensive experiments on 6 benchmarks demonstrate that Laser achieves state-of-the-art performance among latent reasoning methods, surpassing the strong baseline Monet by 5.03% on average. Notably, it achieves these gains with extreme efficiency, reducing inference tokens by more than 97%, while demonstrating robust generalization to out-of-distribution domains.
|
https://arxiv.org/abs/2601.06803
|
Academic Papers
|
svg
|
bb17417cc2272f36680583745e5897f6d0eb1f2a2d74543e38ee8edb5b891cd3
|
2026-01-13T00:00:00-05:00
|
SpatialNav: Leveraging Spatial Scene Graphs for Zero-Shot Vision-and-Language Navigation
|
arXiv:2601.06806v1 Announce Type: new Abstract: Although learning-based vision-and-language navigation (VLN) agents can learn spatial knowledge implicitly from large-scale training data, zero-shot VLN agents lack this process, relying primarily on local observations for navigation, which leads to inefficient exploration and a significant performance gap. To deal with the problem, we consider a zero-shot VLN setting in which agents are allowed to fully explore the environment before task execution. Then, we construct the Spatial Scene Graph (SSG) to explicitly capture global spatial structure and semantics in the explored environment. Based on the SSG, we introduce SpatialNav, a zero-shot VLN agent that integrates an agent-centric spatial map, a compass-aligned visual representation, and a remote object localization strategy for efficient navigation. Comprehensive experiments in both discrete and continuous environments demonstrate that SpatialNav significantly outperforms existing zero-shot agents and clearly narrows the gap with state-of-the-art learning-based methods. Such results highlight the importance of global spatial representations for generalizable navigation.
|
https://arxiv.org/abs/2601.06806
|
Academic Papers
|
svg
|
a72fffacf2585636ffbd6fc0430e46c8c1b08973b78369d43b95e434a7577b83
|
2026-01-13T00:00:00-05:00
|
WFR-FM: Simulation-Free Dynamic Unbalanced Optimal Transport
|
arXiv:2601.06810v1 Announce Type: new Abstract: The Wasserstein-Fisher-Rao (WFR) metric extends dynamic optimal transport (OT) by coupling displacement with change of mass, providing a principled geometry for modeling unbalanced snapshot dynamics. Existing WFR solvers, however, are often unstable, computationally expensive, and difficult to scale. Here we introduce WFR Flow Matching (WFR-FM), a simulation-free training algorithm that unifies flow matching with dynamic unbalanced OT. Unlike classical flow matching which regresses only a transport vector field, WFR-FM simultaneously regresses a vector field for displacement and a scalar growth rate function for birth-death dynamics, yielding continuous flows under the WFR geometry. Theoretically, we show that minimizing the WFR-FM loss exactly recovers WFR geodesics. Empirically, WFR-FM yields more accurate and robust trajectory inference in single-cell biology, reconstructing consistent dynamics with proliferation and apoptosis, estimating time-varying growth fields, and applying to generative dynamics under imbalanced data. It outperforms state-of-the-art baselines in efficiency, stability, and reconstruction accuracy. Overall, WFR-FM establishes a unified and efficient paradigm for learning dynamical systems from unbalanced snapshots, where not only states but also mass evolve over time.
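The WFR-FM training signal, regressing a transport vector field and a scalar growth rate jointly, can be sketched as a combined squared-error objective. The function name, the quadratic growth penalty, and the weight `lam` are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def wfr_fm_loss(v_pred, g_pred, v_target, g_target, lam=1.0):
    """Joint flow-matching objective sketch: squared error on the
    transport vector field (displacement) plus a weighted squared error
    on the scalar growth rate (birth-death), averaged over samples."""
    transport = np.mean(np.sum((v_pred - v_target) ** 2, axis=-1))
    growth = np.mean((g_pred - g_target) ** 2)
    return float(transport + lam * growth)

v = np.zeros((8, 2))   # vector-field values at 8 sample points
g = np.zeros(8)        # growth-rate values at the same points
assert wfr_fm_loss(v, g, v, g) == 0.0               # perfect match
assert wfr_fm_loss(v + 1.0, g, v, g) == 2.0         # unit error in both coords
```

Unlike classical flow matching, which would stop at the `transport` term, the `growth` term is what lets mass appear or vanish along the learned flow.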
|
https://arxiv.org/abs/2601.06810
|
Academic Papers
|
svg
|
9a2d7bc965b811ae8cbd7af6551f6522ce27bfadc4ebfdf3aabb922c73f76937
|
2026-01-13T00:00:00-05:00
|
Analyzing the effect of prediction accuracy on the distributionally-robust competitive ratio
|
arXiv:2601.06813v1 Announce Type: new Abstract: The field of algorithms with predictions aims to improve algorithm performance by integrating machine learning predictions into algorithm design. A central question in this area is how predictions can improve performance, and a key aspect of this analysis is the role of prediction accuracy. In this context, prediction accuracy is defined as a guaranteed probability that an instance drawn from the distribution belongs to the predicted set. As a performance measure that incorporates prediction accuracy, we focus on the distributionally-robust competitive ratio (DRCR), introduced by Sun et al.~(ICML 2024). The DRCR is defined as the expected ratio between the algorithm's cost and the optimal cost, where the expectation is taken over the worst-case instance distribution that satisfies the given prediction and accuracy requirement. A known structural property is that, for any fixed algorithm, the DRCR decreases linearly as prediction accuracy increases. Building on this result, we establish that the optimal DRCR value (i.e., the infimum over all algorithms) is a monotone and concave function of prediction accuracy. We further generalize the DRCR framework to a multiple-prediction setting and show that monotonicity and concavity are preserved in this setting. Finally, we apply our results to the ski rental problem, a benchmark problem in online optimization, to identify the conditions on prediction accuracies required for the optimal DRCR to attain a target value. Moreover, we provide a method for computing the critical accuracy, defined as the minimum accuracy required for the optimal DRCR to strictly improve upon the performance attainable without any accuracy guarantee.
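The ski rental benchmark mentioned above has a clean classical baseline worth keeping in mind: the deterministic break-even rule (rent until the cumulative rent equals the purchase price, then buy) is (2 - 1/B)-competitive, which is the no-prediction anchor that accuracy-dependent DRCR bounds improve on. A small brute-force check, with unit rental cost per day:

```python
def ski_rental_cost(buy_price, ski_days, rent_until):
    """Cost of the rent-then-buy strategy: rent for `rent_until` days,
    then buy (unit rental cost per day)."""
    if ski_days <= rent_until:
        return ski_days
    return rent_until + buy_price

def competitive_ratio(buy_price, rent_until, horizon=1000):
    """Worst-case ratio of the strategy's cost to the offline optimum
    min(ski_days, buy_price), over season lengths up to `horizon`."""
    worst = 0.0
    for d in range(1, horizon + 1):
        opt = min(d, buy_price)
        worst = max(worst, ski_rental_cost(buy_price, d, rent_until) / opt)
    return worst

B = 10
# Renting for B-1 days and then buying is (2 - 1/B)-competitive.
assert abs(competitive_ratio(B, B - 1) - (2 - 1 / B)) < 1e-9
```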
|
https://arxiv.org/abs/2601.06813
|
Academic Papers
|
svg
|
58c918e9024053ef5775f301e9f9117ce7f393255b6d2d26930e7a09a105e671
|
2026-01-13T00:00:00-05:00
|
AgentHallu: Benchmarking Automated Hallucination Attribution of LLM-based Agents
|
arXiv:2601.06818v1 Announce Type: new Abstract: As LLM-based agents operate over sequential multi-step reasoning, hallucinations arising at intermediate steps risk propagating along the trajectory, thus degrading overall reliability. Unlike hallucination detection in single-turn responses, diagnosing hallucinations in multi-step workflows requires identifying which step causes the initial divergence. To fill this gap, we propose a new research task, automated hallucination attribution of LLM-based agents, aiming to identify the step responsible for the hallucination and explain why. To support this task, we introduce AgentHallu, a comprehensive benchmark with: (1) 693 high-quality trajectories spanning 7 agent frameworks and 5 domains, (2) a hallucination taxonomy organized into 5 categories (Planning, Retrieval, Reasoning, Human-Interaction, and Tool-Use) and 14 sub-categories, and (3) multi-level annotations curated by humans, covering binary labels, hallucination-responsible steps, and causal explanations. We evaluate 13 leading models, and results show the task is challenging even for top-tier models (like GPT-5, Gemini-2.5-Pro). The best-performing model achieves only 41.1\% step localization accuracy, where tool-use hallucinations are the most challenging at just 11.6\%. We believe AgentHallu will catalyze future research into developing robust, transparent, and reliable agentic systems.
|
https://arxiv.org/abs/2601.06818
|
Academic Papers
|
svg
|
6e3ec2cd480f78e6e04c71b3a6f6d60d32282a73dfe46790d4fc75a3ff6de2b5
|
2026-01-13T00:00:00-05:00
|
Generative Modeling of Human-Computer Interfaces with Diffusion Processes and Conditional Control
|
arXiv:2601.06823v1 Announce Type: new Abstract: This study investigates human-computer interface generation based on diffusion models to overcome the limitations of traditional template-based design and fixed rule-driven methods. It first analyzes the key challenges of interface generation, including the diversity of interface elements, the complexity of layout logic, and the personalization of user needs. A generative framework centered on the diffusion-reverse diffusion process is then proposed, with conditional control introduced in the reverse diffusion stage to integrate user intent, contextual states, and task constraints, enabling unified modeling of visual presentation and interaction logic. In addition, regularization constraints and optimization objectives are combined to ensure the rationality and stability of the generated interfaces. Experiments are conducted on a public interface dataset with systematic evaluations, including comparative experiments, hyperparameter sensitivity tests, environmental sensitivity tests, and data sensitivity tests. Results show that the proposed method outperforms representative models in mean squared error, structural similarity, peak signal-to-noise ratio, and mean absolute error, while maintaining strong robustness under different parameter settings and environmental conditions. Overall, the diffusion model framework effectively improves the diversity, rationality, and intelligence of interface generation, providing a feasible solution for automated interface generation in complex interaction scenarios.
|
https://arxiv.org/abs/2601.06823
|
Academic Papers
|
svg
|
0216c9c7959f903efb6da5989e8d763f57dfe119faf791fe48a2f3f2fd6df8d6
|
2026-01-13T00:00:00-05:00
|
PDR: A Plug-and-Play Positional Decay Framework for LLM Pre-training Data Detection
|
arXiv:2601.06827v1 Announce Type: new Abstract: Detecting pre-training data in Large Language Models (LLMs) is crucial for auditing data privacy and copyright compliance, yet it remains challenging in black-box, zero-shot settings where computational resources and training data are scarce. While existing likelihood-based methods have shown promise, they typically aggregate token-level scores using uniform weights, thereby neglecting the inherent information-theoretic dynamics of autoregressive generation. In this paper, we hypothesize and empirically validate that memorization signals are heavily skewed towards the high-entropy initial tokens, where model uncertainty is highest, and decay as context accumulates. To leverage this linguistic property, we introduce Positional Decay Reweighting (PDR), a training-free and plug-and-play framework. PDR explicitly reweights token-level scores to amplify distinct signals from early positions while suppressing noise from later ones. Extensive experiments show that PDR acts as a robust prior and can usually enhance a wide range of advanced methods across multiple benchmarks.
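The reweighting idea, amplifying early high-entropy token scores and suppressing later ones, can be sketched with a simple decaying weight over positions. The exponential form and decay rate are illustrative assumptions, not the paper's exact weighting scheme:

```python
import math

def pdr_score(token_logprobs, decay=0.05):
    """Positional-decay reweighting sketch: exponentially down-weight
    later token scores so that the high-entropy initial positions
    dominate the aggregate membership score."""
    weights = [math.exp(-decay * i) for i in range(len(token_logprobs))]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, token_logprobs)) / total

# A sequence whose first token is surprising but settles quickly:
scores = [-5.0, -1.0, -0.5, -0.4, -0.3]
uniform = sum(scores) / len(scores)
weighted = pdr_score(scores, decay=1.0)
# Early tokens pull the weighted score further down than a uniform
# average would, sharpening the early-position memorization signal.
assert weighted < uniform
```

Because it only reweights existing token-level scores, the scheme stays training-free and composes with any underlying likelihood-based detector, which is the plug-and-play property claimed above.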
|
https://arxiv.org/abs/2601.06827
|
Academic Papers
|
svg
|
df33bab88f58e08fe022077a69ed9d3ab6df99bb7a6786a534b76d249a90bcd7
|
2026-01-13T00:00:00-05:00
|
Spectral Shadows: When Communication Complexity Meets Linear Invariance Testing
|
arXiv:2601.06828v1 Announce Type: new Abstract: In this short note, we initiate the study of the Linear Isomorphism Testing Problem in the setting of communication complexity, a natural linear algebraic generalization of the classical Equality problem. Given Boolean functions $f, g : \mathbb{F}_2^n \to \{-1, +1\}$, Alice and Bob are tasked with determining whether $f$ and $g$ are equivalent up to a nonsingular linear transformation of the input variables, or far from being so. This problem has been extensively investigated in several models of computation, including standard algorithmic and property testing frameworks, owing to its fundamental connections with combinatorial circuit design, complexity theory, and cryptography. However, despite its broad relevance, it has remained unexplored in the context of communication complexity, a gap we address in this work. Our main results demonstrate that the approximate spectral norm of the input functions plays a central role in governing the communication complexity of this problem. We design a simple deterministic protocol whose communication cost is polynomial in the approximate spectral norm, and complement it with nearly matching lower bounds (up to a quadratic gap). In the randomised setting with private coins, we present an even more efficient protocol, though equally simple, that achieves a quadratically improved dependence on the approximate spectral norm compared to the deterministic case, and we prove that such a dependence is essentially unavoidable. These results identify the approximate spectral norm as a key complexity measure for testing linear invariance in the communication complexity framework. As a core technical ingredient, we establish new junta theorems for Boolean functions with small approximate spectral norm, which may be of independent interest in Fourier analysis and learning theory.
|
https://arxiv.org/abs/2601.06828
|
Academic Papers
|
svg
|
bc846a2141508504fe9204ff3a666a71cc5c17312f6a72670c43869c6e47eb6c
|
2026-01-13T00:00:00-05:00
|
MoEScore: Mixture-of-Experts-Based Text-Audio Relevance Score Prediction for Text-to-Audio System Evaluation
|
arXiv:2601.06829v1 Announce Type: new Abstract: Recent advances in generative models have enabled modern Text-to-Audio (TTA) systems to synthesize audio with high perceptual quality. However, TTA systems often struggle to maintain semantic consistency with the input text, leading to mismatches in sound events, temporal structures, or contextual relationships. Evaluating semantic fidelity in TTA remains a significant challenge. Traditional methods primarily rely on subjective human listening tests, which are time-consuming. To solve this, we propose an objective evaluator based on a Mixture of Experts (MoE) architecture with Sequential Cross-Attention (SeqCoAttn). Our model ranks first in the XACLE Challenge, with an SRCC of 0.6402 (an improvement of 30.6% over the challenge baseline) on the test dataset. Code is available at: https://github.com/S-Orion/MOESCORE.
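The SRCC figure reported above is a Spearman rank correlation between predicted and human relevance scores; a minimal implementation (assuming no ties) via the classical closed form:

```python
def srcc(x, y):
    """Spearman rank correlation coefficient (no ties assumed), via the
    closed form 1 - 6 * sum(d^2) / (n * (n^2 - 1)) over rank differences."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone predictions score 1 even when far from linear,
# which is why SRCC suits subjective-rating prediction.
assert srcc([1, 2, 3, 4], [10, 100, 1000, 10000]) == 1.0
# Fully reversed rankings score -1.
assert srcc([1, 2, 3, 4], [4, 3, 2, 1]) == -1.0
```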
|
https://arxiv.org/abs/2601.06829
|
Academic Papers
|
svg
|
9ce30ed2a4829e13c904f48714f2c1cd9e238ad18bad2ceeed509df16d7d6860
|
2026-01-13T00:00:00-05:00
|
SARA: Scene-Aware Reconstruction Accelerator
|
arXiv:2601.06831v1 Announce Type: new Abstract: We present SARA (Scene-Aware Reconstruction Accelerator), a geometry-driven pair selection module for Structure-from-Motion (SfM). Unlike conventional pipelines that select pairs based on visual similarity alone, SARA introduces geometry-first pair selection by scoring reconstruction informativeness - the product of overlap and parallax - before expensive matching. A lightweight pre-matching stage uses mutual nearest neighbors and RANSAC to estimate these cues, then constructs an Information-Weighted Spanning Tree (IWST) augmented with targeted edges for loop closure, long-baseline anchors, and weak-view reinforcement. Compared to exhaustive matching, SARA reduces rotation errors by 46.5+-5.5% and translation errors by 12.5+-6.5% across modern learned detectors, while achieving at most 50x speedup through 98% pair reduction (from 30,848 to 580 pairs). This reduces matching complexity from quadratic to quasi-linear, maintaining within +-3% of baseline reconstruction metrics for 3D Gaussian Splatting and SVRaster.
|
https://arxiv.org/abs/2601.06831
|
Academic Papers
|
svg
|
e64f80df9addb83f1d6b4812a3e2b1985adda5403ddab19fd390788317c86afc
|
2026-01-13T00:00:00-05:00
|
SPINE Gripper: A Twisted Underactuated Mechanism-based Passive Mode-Transition Gripper
|
arXiv:2601.06833v1 Announce Type: new Abstract: This paper presents a single-actuator passive gripper that achieves both stable grasping and continuous bidirectional in-hand rotation through mechanically encoded power transmission logic. Unlike conventional multifunctional grippers that require multiple actuators, sensors, or control-based switching, the proposed gripper transitions between grasping and rotation solely according to the magnitude of the applied input torque. The key enabler of this behavior is a Twisted Underactuated Mechanism (TUM), which generates non-coplanar motions, namely axial contraction and rotation, from a single rotational input while producing identical contraction regardless of rotation direction. A friction generator mechanically defines torque thresholds that govern passive mode switching, enabling stable grasp establishment before autonomously transitioning to in-hand rotation without sensing or active control. Analytical models describing the kinematics, elastic force generation, and torque transmission of the TUM are derived and experimentally validated. The fabricated gripper is evaluated through quantitative experiments on grasp success, friction-based grasp force regulation, and bidirectional rotation performance. System-level demonstrations, including bolt manipulation, object reorientation, and manipulator-integrated tasks driven solely by wrist torque, confirm reliable grasp to rotate transitions in both rotational directions. These results demonstrate that non-coplanar multifunctional manipulation can be realized through mechanical design alone, establishing mechanically encoded power transmission logic as a robust alternative to actuator and control intensive gripper architectures.
|
https://arxiv.org/abs/2601.06833
|
Academic Papers
|
svg
|
bc683a855df38ab23562de11bce46d0a008c0ded0b483d93ae4122173eb94cdc
|
2026-01-13T00:00:00-05:00
|
Enhancing Low-resolution Image Representation Through Normalizing Flows
|
arXiv:2601.06834v1 Announce Type: new Abstract: Low-resolution image representation is a special form of sparse representation that retains only low-frequency information while discarding high-frequency components. This property reduces storage and transmission costs and benefits various image processing tasks. However, a key challenge is to preserve essential visual content while maintaining the ability to accurately reconstruct the original images. This work proposes LR2Flow, a nonlinear framework that learns low-resolution image representations by integrating wavelet tight frame blocks with normalizing flows. We conduct a reconstruction error analysis of the proposed network, which demonstrates the necessity of designing invertible neural networks in the wavelet tight frame domain. Experimental results on various tasks, including image rescaling, compression, and denoising, demonstrate the effectiveness of the learned representations and the robustness of the proposed framework.
|
https://arxiv.org/abs/2601.06834
|
Academic Papers
|
svg
|
9740396879c30d60ab1096c199405ecd12a4f0585419cdf248cdb2f8f5b8d154
|
2026-01-13T00:00:00-05:00
|
OSCAR: Optical-aware Semantic Control for Aleatoric Refinement in Sar-to-Optical Translation
|
arXiv:2601.06835v1 Announce Type: new Abstract: Synthetic Aperture Radar (SAR) provides robust all-weather imaging capabilities; however, translating SAR observations into photo-realistic optical images remains a fundamentally ill-posed problem. Current approaches are often hindered by the inherent speckle noise and geometric distortions of SAR data, which frequently result in semantic misinterpretation, ambiguous texture synthesis, and structural hallucinations. To address these limitations, a novel SAR-to-Optical (S2O) translation framework is proposed, integrating three core technical contributions: (i) Cross-Modal Semantic Alignment, which establishes an Optical-Aware SAR Encoder by distilling robust semantic priors from an Optical Teacher into a SAR Student; (ii) Semantically-Grounded Generative Guidance, realized by a Semantically-Grounded ControlNet that integrates class-aware text prompts for global context with hierarchical visual prompts for local spatial guidance; and (iii) an Uncertainty-Aware Objective, which explicitly models aleatoric uncertainty to dynamically modulate the reconstruction focus, effectively mitigating artifacts caused by speckle-induced ambiguity. Extensive experiments demonstrate that the proposed method achieves superior perceptual quality and semantic consistency compared to state-of-the-art approaches.
|
https://arxiv.org/abs/2601.06835
|
Academic Papers
|
svg
|
0b416fb6bdffcfc3d2f767432bdd622bd9d030f598245fbdc4a00b11dfe00528
|
2026-01-13T00:00:00-05:00
|
Optimal Rate Region for Multi-server Secure Aggregation with User Collusion
|
arXiv:2601.06836v1 Announce Type: new Abstract: Secure aggregation is a fundamental primitive in privacy-preserving distributed learning systems, where an aggregator aims to compute the sum of users' inputs without revealing individual data. In this paper, we study a multi-server secure aggregation problem in a two-hop network consisting of multiple aggregation servers and multiple users per server, under the presence of user collusion. Each user communicates only with its associated server, while the servers exchange messages to jointly recover the global sum. We adopt an information-theoretic security framework, allowing up to $T$ users to collude with any server. We characterize the complete optimal rate region in terms of user-to-server communication rate, server-to-server communication rate, individual key rate, and source key rate. Our main result shows that the minimum communication and individual key rates are all one symbol per input symbol, while the optimal source key rate is given by $\min\{U+V+T-2,\, UV-1\}$, where $U$ denotes the number of servers and $V$ the number of users per server. The achievability is established via a linear key construction that ensures correctness and security against colluding users, while the converse proof relies on tight entropy bounds derived from correctness and security constraints. The results reveal a fundamental tradeoff between security and key efficiency and demonstrate that the multi-server architecture can significantly reduce the required key randomness compared to single-server secure aggregation. Our findings provide a complete information-theoretic characterization of secure aggregation in multi-server systems with user collusion.
|
https://arxiv.org/abs/2601.06836
|
Academic Papers
|
svg
|
112b172b9946da3be6e92c3bf90d2b7fa47ca3682341919b34035955badeae48
|
2026-01-13T00:00:00-05:00
|
CHASE: LLM Agents for Dissecting Malicious PyPI Packages
|
arXiv:2601.06838v1 Announce Type: new Abstract: Modern software package registries like PyPI have become critical infrastructure for software development, but are increasingly exploited by threat actors distributing malicious packages with sophisticated multi-stage attack chains. While Large Language Models (LLMs) offer promising capabilities for automated code analysis, their application to security-critical malware detection faces fundamental challenges, including hallucination and context confusion, which can lead to missed detections or false alarms. We present CHASE (Collaborative Hierarchical Agents for Security Exploration), a high-reliability multi-agent architecture that addresses these limitations through a Plan-and-Execute coordination model, specialized Worker Agents focused on specific analysis aspects, and integration with deterministic security tools for critical operations. Our key insight is that reliability in LLM-based security analysis emerges not from improving individual model capabilities but from architecting systems that compensate for LLM weaknesses while leveraging their semantic understanding strengths. Evaluation on a dataset of 3,000 packages (500 malicious, 2,500 benign) demonstrates that CHASE achieves 98.4% recall with only 0.08% false positive rate, while maintaining a practical median analysis time of 4.5 minutes per package, making it suitable for operational deployment in automated package screening. Furthermore, we conducted a survey with cybersecurity professionals to evaluate the generated analysis reports, identifying their key strengths and areas for improvement. This work provides a blueprint for building reliable AI-powered security tools that can scale with the growing complexity of modern software supply chains. Our project page is available at https://t0d4.github.io/CHASE-AIware25/
|
https://arxiv.org/abs/2601.06838
|
Academic Papers
|
svg
|
618f4eec5b7ae5c817330afa6f325bad9318fd7d6a371dc9c6de637ac02eee18
|
2026-01-13T00:00:00-05:00
|
PRISM: Color-Stratified Point Cloud Sampling
|
arXiv:2601.06839v1 Announce Type: new Abstract: We present PRISM, a novel color-guided stratified sampling method for RGB-LiDAR point clouds. Our approach is motivated by the observation that unique scene features often exhibit chromatic diversity while repetitive, redundant features are homogeneous in color. Conventional downsampling methods (Random Sampling, Voxel Grid, Normal Space Sampling) enforce spatial uniformity while ignoring this photometric content. In contrast, PRISM allocates sampling density proportional to chromatic diversity. By treating RGB color space as the stratification domain and imposing a maximum capacity k per color bin, the method preserves texture-rich regions with high color variation while substantially reducing visually homogeneous surfaces. This shifts the sampling space from spatial coverage to visual complexity to produce sparser point clouds that retain essential features for 3D reconstruction tasks.
|
https://arxiv.org/abs/2601.06839
|
Academic Papers
|
svg
|
7479d22e78171a655b9c68c67dce01c5124b0afe7720007dc417ba3bb8d26e8f
|
2026-01-13T00:00:00-05:00
|
Efficient Subdivision of Bézier Curves/Surfaces via Blossoms
|
arXiv:2601.06841v1 Announce Type: new Abstract: We consider the problem of Bézier curve/surface subdivision using blossoms. We propose closed-form formulae for blossom evaluation, as needed for the calculation of control points. This approach leads to a direct and efficient way to obtain subdivisions for Bézier curves and both tensor product and triangular Bézier surfaces. It considerably simplifies the computation of the control points of subdivisions, which is crucial in applications where curves/surfaces need to be refined or adapted dynamically. For instance, in CAD/CAM systems, architectural design, or animation, the ability to quickly and accurately determine new control points is essential for manipulating and rendering complex shapes. More efficient subdivision can also facilitate complex operations like finding intersections between surfaces or smoothly blending multiple surfaces.
|
https://arxiv.org/abs/2601.06841
|
Academic Papers
|
svg
|
2ed59b008d3c5a360302be2586f97809ad792c6ea68eab909ee59a38c4389f49
|
2026-01-13T00:00:00-05:00
|
Seeing through the Conflict: Transparent Knowledge Conflict Handling in Retrieval-Augmented Generation
|
arXiv:2601.06842v1 Announce Type: new Abstract: Large language models (LLMs) equipped with retrieval--the Retrieval-Augmented Generation (RAG) paradigm--should combine their parametric knowledge with external evidence, yet in practice they often hallucinate, over-trust noisy snippets, or ignore vital context. We introduce TCR (Transparent Conflict Resolution), a plug-and-play framework that makes this decision process observable and controllable. TCR (i) disentangles semantic match and factual consistency via dual contrastive encoders, (ii) estimates self-answerability to gauge confidence in internal memory, and (iii) feeds the three scalar signals to the generator through a lightweight soft-prompt with SNR-based weighting. Across seven benchmarks TCR improves conflict detection (+5-18 F1), raises knowledge-gap recovery by +21.4 pp and cuts misleading-context overrides by -29.3 pp, while adding only 0.3% parameters. The signals align with human judgements and expose temporal decision patterns.
|
https://arxiv.org/abs/2601.06842
|
Academic Papers
|
svg
|
1a09f5713eadee7d5d6022e24ef25a64bbf94764abbb3266045838d5f4485ccf
|
2026-01-13T00:00:00-05:00
|
Speak While Watching: Unleashing TRUE Real-Time Video Understanding Capability of Multimodal Large Language Models
|
arXiv:2601.06843v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) have achieved strong performance across many tasks, yet most systems remain limited to offline inference, requiring complete inputs before generating outputs. Recent streaming methods reduce latency by interleaving perception and generation, but still enforce a sequential perception-generation cycle, limiting real-time interaction. In this work, we target a fundamental bottleneck that arises when extending MLLMs to real-time video understanding: the global positional continuity constraint imposed by standard positional encoding schemes. While natural in offline inference, this constraint tightly couples perception and generation, preventing effective input-output parallelism. To address this limitation, we propose a parallel streaming framework that relaxes positional continuity through three designs: Overlapped, Group-Decoupled, and Gap-Isolated. These designs enable simultaneous perception and generation, allowing the model to process incoming inputs while producing responses in real time. Extensive experiments reveal that Group-Decoupled achieves the best efficiency-performance balance, maintaining high fluency and accuracy while significantly reducing latency. We further show that the proposed framework yields up to 2x acceleration under balanced perception-generation workloads, establishing a principled pathway toward speak-while-watching real-time systems. We make all our code publicly available: https://github.com/EIT-NLP/Speak-While-Watching.
|
https://arxiv.org/abs/2601.06843
|
Academic Papers
|
svg
|
e71ba185ac735d0c03b4ad752ac6df9440f3d3ea56fa36bfe84c88383a435e44
|
2026-01-13T00:00:00-05:00
|
Variational decomposition autoencoding improves disentanglement of latent representations
|
arXiv:2601.06844v1 Announce Type: new Abstract: Understanding the structure of complex, nonstationary, high-dimensional time-evolving signals is a central challenge in scientific data analysis. In many domains, such as speech and biomedical signal processing, the ability to learn disentangled and interpretable representations is critical for uncovering latent generative mechanisms. Traditional approaches to unsupervised representation learning, including variational autoencoders (VAEs), often struggle to capture the temporal and spectral diversity inherent in such data. Here we introduce variational decomposition autoencoding (VDA), a framework that extends VAEs by incorporating a strong structural bias toward signal decomposition. VDA is instantiated through variational decomposition autoencoders (DecVAEs), i.e., encoder-only neural networks that combine a signal decomposition model, a contrastive self-supervised task, and variational prior approximation to learn multiple latent subspaces aligned with time-frequency characteristics. We demonstrate the effectiveness of DecVAEs on simulated data and three publicly available scientific datasets, spanning speech recognition, dysarthria severity evaluation, and emotional speech classification. Our results demonstrate that DecVAEs surpass state-of-the-art VAE-based methods in terms of disentanglement quality, generalization across tasks, and the interpretability of latent encodings. These findings suggest that decomposition-aware architectures can serve as robust tools for extracting structured representations from dynamic signals, with potential applications in clinical diagnostics, human-computer interaction, and adaptive neurotechnologies.
|
https://arxiv.org/abs/2601.06844
|
Academic Papers
|
svg
|
a622de662d8c40f40b2ffb277de21dea0a58ad6fe9345ba497bf2e7963ff3d17
|
2026-01-13T00:00:00-05:00
|
Code Evolution for Control: Synthesizing Policies via LLM-Driven Evolutionary Search
|
arXiv:2601.06845v1 Announce Type: new Abstract: Designing effective control policies for autonomous systems remains a fundamental challenge, traditionally addressed through reinforcement learning or manual engineering. While reinforcement learning has achieved remarkable success, it often suffers from high sample complexity, reward shaping difficulties, and produces opaque neural network policies that are hard to interpret or verify. Manual design, on the other hand, requires substantial domain expertise and struggles to scale across diverse tasks. In this work, we demonstrate that LLM-driven evolutionary search can effectively synthesize interpretable control policies in the form of executable code. By treating policy synthesis as a code evolution problem, we harness the LLM's prior knowledge of programming patterns and control heuristics while employing evolutionary search to explore the solution space systematically. We implement our approach using EvoToolkit, a framework that seamlessly integrates LLM-driven evolution with customizable fitness evaluation. Our method iteratively evolves populations of candidate policy programs, evaluating them against task-specific objectives and selecting superior individuals for reproduction. This process yields compact, human-readable control policies that can be directly inspected, modified, and formally verified. This work highlights the potential of combining foundation models with evolutionary computation for synthesizing trustworthy control policies in autonomous systems. Code is available at https://github.com/pgg3/EvoControl.
|
https://arxiv.org/abs/2601.06845
|
Academic Papers
|
svg
|
c3ea2fdaea14b3acaabc425e3136e849cd2668f4b79a701491884f8c5a0eadc2
|
2026-01-13T00:00:00-05:00
|
MedGround: Bridging the Evidence Gap in Medical Vision-Language Models with Verified Grounding Data
|
arXiv:2601.06847v1 Announce Type: new Abstract: Vision-Language Models (VLMs) can generate convincing clinical narratives, yet frequently struggle to visually ground their statements. We posit this limitation arises from the scarcity of high-quality, large-scale clinical referring-localization pairs. To address this, we introduce MedGround, an automated pipeline that transforms segmentation resources into high-quality medical referring grounding data. Leveraging expert masks as spatial anchors, MedGround precisely derives localization targets, extracts shape and spatial cues, and guides VLMs to synthesize natural, clinically grounded queries that reflect morphology and location. To ensure data rigor, a multi-stage verification system integrates strict formatting checks, geometry- and medical-prior rules, and image-based visual judging to filter out ambiguous or visually unsupported samples. Finally, we present MedGround-35K, a novel multimodal medical dataset. Extensive experiments demonstrate that VLMs trained with MedGround-35K consistently achieve improved referring grounding performance, enhance multi-object semantic disambiguation, and exhibit strong generalization to unseen grounding settings. This work highlights MedGround as a scalable, data-driven approach to anchor medical language to verifiable visual evidence. Dataset and code will be released publicly upon acceptance.
|
https://arxiv.org/abs/2601.06847
|
Academic Papers
|
svg
|
b0f8e1b4cd8897f0d3505683075cd164edddaac9866350d4ce5ba45770694ce9
|
2026-01-13T00:00:00-05:00
|
Explainable Multimodal Aspect-Based Sentiment Analysis with Dependency-guided Large Language Model
|
arXiv:2601.06848v1 Announce Type: new Abstract: Multimodal aspect-based sentiment analysis (MABSA) aims to identify aspect-level sentiments by jointly modeling textual and visual information, which is essential for fine-grained opinion understanding in social media. Existing approaches mainly rely on discriminative classification with complex multimodal fusion, yet lack explicit sentiment explainability. In this paper, we reformulate MABSA as a generative and explainable task, proposing a unified framework that simultaneously predicts aspect-level sentiment and generates natural language explanations. Based on multimodal large language models (MLLMs), our approach employs a prompt-based generative paradigm, jointly producing sentiment and explanation. To further enhance aspect-oriented reasoning capabilities, we propose a dependency-syntax-guided sentiment cue strategy. This strategy prunes and textualizes the aspect-centered dependency syntax tree, guiding the model to distinguish different sentiment aspects and enhancing its explainability. To enable explainability, we use MLLMs to construct new datasets with sentiment explanations for fine-tuning. Experiments show that our approach not only achieves consistent gains in sentiment classification accuracy, but also produces faithful, aspect-grounded explanations.
|
https://arxiv.org/abs/2601.06848
|
Academic Papers
|
svg
|
9259705c6ec3080a16d654525eceb0b95d3bd651eaf7cfcc20fc3d5e1ac1f2e3
|
2026-01-13T00:00:00-05:00
|
Analysis and Efficient Sylvester-Based Implementation of a Dimension-Split ETD2RK Scheme for Multidimensional Reaction-Diffusion Equations
|
arXiv:2601.06849v1 Announce Type: new Abstract: We propose and analyze a second-order, dimension-split exponential time differencing Runge--Kutta scheme (ETD2RK-DS) for multidimensional reaction--diffusion equations in two and three spatial dimensions. Under mild assumptions on the nonlinear source term, we establish uniform stability bounds and prove second-order temporal convergence for the underlying dimension-split scheme. To enable efficient implementation, we employ Padé approximations of the matrix exponential, converting each required matrix-exponential--vector product into the solution of a shifted linear system. A convergence analysis of the resulting Padé-based ETD2RK-DS formulation is provided. We derive explicit and reproducible tensor-slicing and reshaping algorithms that realize the dimension-splitting strategy, decomposing multidimensional systems into collections of independent one-dimensional problems. This leads to a reduction of the dominant per-time-step computational cost from $\mathcal{O}(m^3)$ to $\mathcal{O}(m^2)$ in two dimensions and from $\mathcal{O}(m^5)$ to $\mathcal{O}(m^3)$ in three dimensions when compared with banded LU solvers for the unsplit problem, where $m$ denotes the number of grid points per spatial direction. Furthermore, we develop a Sylvester-equation reformulation of the resulting one-dimensional systems, enabling a highly efficient spectral implementation based on reusable eigendecompositions, matrix--vector multiplications, and Hadamard divisions. Numerical experiments in two and three dimensions, including a coupled FitzHugh--Nagumo system, confirm the second-order temporal accuracy, stability of the underlying scheme, and scalability of the proposed ETD2RK-DS framework, as well as the substantial computational advantages of the Sylvester-based implementation over classical LU-based solvers.
|
https://arxiv.org/abs/2601.06849
|
Academic Papers
|
svg
|
47f5b0384e40b8be9e41c367295c8da5ad8aac4a99f376ddd1357884eec89d80
|
2026-01-13T00:00:00-05:00
|
A Brain-like Synergistic Core in LLMs Drives Behaviour and Learning
|
arXiv:2601.06851v1 Announce Type: new Abstract: The independent evolution of intelligence in biological and artificial systems offers a unique opportunity to identify its fundamental computational principles. Here we show that large language models spontaneously develop synergistic cores -- components where information integration exceeds individual parts -- remarkably similar to those in the human brain. Using principles of information decomposition across multiple LLM model families and architectures, we find that areas in middle layers exhibit synergistic processing while early and late layers rely on redundancy, mirroring the informational organisation in biological brains. This organisation emerges through learning and is absent in randomly initialised networks. Crucially, ablating synergistic components causes disproportionate behavioural changes and performance loss, aligning with theoretical predictions about the fragility of synergy. Moreover, fine-tuning synergistic regions through reinforcement learning yields significantly greater performance gains than training redundant components, yet supervised fine-tuning shows no such advantage. This convergence suggests that synergistic information processing is a fundamental property of intelligence, providing targets for principled model design and testable predictions for biological intelligence.
|
https://arxiv.org/abs/2601.06851
|
Academic Papers
|
svg
|
d2313d876074963a9b087386bc05b94ffe665a2c3e2ba5353058bd60d3ca386c
|
2026-01-13T00:00:00-05:00
|
†DAGGER: Distractor-Aware Graph Generation for Executable Reasoning in Math Problems
|
arXiv:2601.06853v1 Announce Type: new Abstract: Chain-of-Thought (CoT) prompting is widely adopted for mathematical problem solving, including in low-resource languages, yet its behavior under irrelevant context remains underexplored. To systematically study this challenge, we introduce DISTRACTMATH-BN, a Bangla benchmark that augments MGSM and MSVAMP with semantically coherent but computationally irrelevant information. Evaluating seven models ranging from 3B to 12B parameters, we observe substantial performance degradation under distractors: standard models drop by up to 41 points, while reasoning-specialized models decline by 14 to 20 points despite consuming five times more tokens. We propose †DAGGER, which reformulates mathematical problem solving as executable computational graph generation with explicit modeling of distractor nodes. Fine-tuning Gemma-3 models using supervised fine-tuning followed by Group Relative Policy Optimization achieves comparable weighted accuracy on augmented benchmarks while using 89 percent fewer tokens than reasoning models. Importantly, this robustness emerges without explicit training on distractor-augmented examples. Our results suggest that enforcing structured intermediate representations improves robustness and inference efficiency in mathematical reasoning compared to free-form approaches, particularly in noisy, low-resource settings.
|
https://arxiv.org/abs/2601.06853
|
Academic Papers
|
svg
|
2118f3a1f113bc948b62e4c632a46cd94460b6c3eda88bfd90c40ffb566d5564
|
2026-01-13T00:00:00-05:00
|
Semilinear single-track vehicle models with distributed tyre friction dynamics
|
arXiv:2601.06854v1 Announce Type: new Abstract: This paper introduces a novel family of single-track vehicle models that incorporate a distributed representation of transient tyre dynamics, whilst simultaneously accounting for nonlinear effects induced by friction. The core of the proposed framework is represented by the distributed Friction with Bristle Dynamics (FrBD) model, which unifies and extends classical formulations such as Dahl and LuGre by describing the rolling contact process as a spatially distributed system governed by semilinear partial differential equations (PDEs). This model is systematically integrated into a single-track vehicle framework, where the resulting semilinear ODE-PDE interconnection captures the interaction between lateral vehicle motion and tyre deformation. Two main variants are considered: one with rigid tyre carcass and another with flexible carcass, each admitting a compact state-space representation. Local and global well-posedness properties for the coupled system are established rigorously, highlighting the dissipative and physically consistent properties of the distributed FrBD model. A linearisation procedure is also presented, enabling spectral analysis and transfer function derivation, and potentially facilitating the synthesis of controllers and observers. Numerical simulations demonstrate the model's capability to capture micro-shimmy oscillations and transient lateral responses to advanced steering manoeuvres. The proposed formulation advances the state-of-the-art in vehicle dynamics modelling by providing a physically grounded, mathematically rigorous, and computationally tractable approach to incorporating transient tyre behaviour in lateral vehicle dynamics, when accounting for the effect of limited friction.
|
https://arxiv.org/abs/2601.06854
|
Academic Papers
|
svg
|
bca2919c59a11cbff1cd74dabdcce5298ccff424fa9233f0243540fb4addbdd6
|
2026-01-13T00:00:00-05:00
|
MoE-DisCo: Low Economy Cost Training Mixture-of-Experts Models
|
arXiv:2601.06857v1 Announce Type: new Abstract: Training large-scale Mixture-of-Experts (MoE) models typically requires high-memory, high-bandwidth GPUs (e.g., A100), and their high cost has become a major barrier to large-model training. In contrast, affordable hardware is low-cost but constrained by memory capacity and bandwidth, making it unsuitable for direct LLM training. To address this, we propose MoE-DisCo (Mixture-of-Experts with Disentangled Clustering and Coordination), a staged training framework. MoE-DisCo decomposes the MoE model into multiple dense submodels, each consisting of a shared backbone and a single expert, and partitions the training data into subsets using unsupervised clustering. Each submodel is trained independently and in parallel on its assigned data subset using low-cost devices, without any inter-device communication. Subsequently, all experts are integrated into a complete MoE model and fine-tuned globally for a short period on high-memory, high-bandwidth GPUs. Experiments show that our method matches or even surpasses full-parameter training in performance across multiple downstream tasks, loss function, and perplexity (PPL), while reducing training cost by 47.6 percent to 69.5 percent on Qwen1.5-MoE-2.7B and Llama-MoE-3.5B across different datasets.
|
https://arxiv.org/abs/2601.06857
|
Academic Papers
|
svg
|
f9081b3a9255975cb25f715e01be70a8b57ee3219110d357d06717f638c63b24
|
2026-01-13T00:00:00-05:00
|
ET-Agent: Incentivizing Effective Tool-Integrated Reasoning Agent via Behavior Calibration
|
arXiv:2601.06860v1 Announce Type: new Abstract: Large Language Models (LLMs) can extend their parameter knowledge limits by adopting the Tool-Integrated Reasoning (TIR) paradigm. However, existing LLM-based agent training frameworks often focus on answer accuracy, overlooking explicit alignment of behavior patterns. Consequently, agents often exhibit ineffective actions during TIR tasks, such as redundant and insufficient tool calls. How to calibrate erroneous behavioral patterns when executing TIR tasks, thereby exploring effective trajectories, remains an open problem. In this paper, we propose ET-Agent, a training framework for calibrating an agent's tool-use behavior through two synergistic perspectives: a Self-evolving Data Flywheel and Behavior Calibration Training. Specifically, we introduce a self-evolving data flywheel to generate enhanced data, used to fine-tune the LLM to improve its exploration ability. Based on this, we implement a two-phase behavior-calibration training framework. It is designed to progressively calibrate erroneous behavioral patterns into optimal behaviors. Further in-depth experiments confirm the superiority of ET-Agent across multiple dimensions, including correctness, efficiency, reasoning conciseness, and tool execution accuracy. Our ET-Agent framework provides practical insights for research in the TIR field. Code can be found at https://github.com/asilverlight/ET-Agent
|
https://arxiv.org/abs/2601.06860
|
Academic Papers
|
svg
|
dca246955f7c9df535901b314e19dd8000bcb9e724e23b3fbb306b0a57fdfdb6
|
2026-01-13T00:00:00-05:00
|
BiasLab: A Multilingual, Dual-Framing Framework for Robust Measurement of Output-Level Bias in Large Language Models
|
arXiv:2601.06861v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly deployed in high-stakes contexts where their outputs influence real-world decisions. However, evaluating bias in LLM outputs remains methodologically challenging due to sensitivity to prompt wording, limited multilingual coverage, and the lack of standardized metrics that enable reliable comparison across models. This paper introduces BiasLab, an open-source, model-agnostic evaluation framework for quantifying output-level (extrinsic) bias through a multilingual, robustness-oriented experimental design. BiasLab constructs mirrored probe pairs under a strict dual-framing scheme: an affirmative assertion favoring Target A and a reverse assertion obtained by deterministic target substitution favoring Target B, while preserving identical linguistic structure. To reduce dependence on prompt templates, BiasLab performs repeated evaluation under randomized instructional wrappers and enforces a fixed-choice Likert response format to maximize comparability across models and languages. Responses are normalized into agreement labels using an LLM-based judge, aligned for polarity consistency across framings, and aggregated into quantitative bias indicators with descriptive statistics including effect sizes and neutrality rates. The framework supports evaluation across diverse bias axes, including demographic, cultural, political, and geopolitical topics, and produces reproducible artifacts such as structured reports and comparative visualizations. BiasLab contributes a standardized methodology for cross-lingual and framing-sensitive bias measurement that complements intrinsic and dataset-based audits, enabling researchers and institutions to benchmark robustness and make better-informed deployment decisions.
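The dual-framing probe construction described above, an affirmative assertion favoring Target A mirrored by deterministic target substitution, can be sketched directly (the template, placeholder names, and example targets are illustrative, not BiasLab's actual probe set):

```python
def mirrored_probe_pair(template, target_a, target_b):
    """Build a BiasLab-style dual framing: the affirmative assertion
    favors Target A; the reverse framing is the same template with the
    targets deterministically swapped, so linguistic structure is
    identical across the pair."""
    affirmative = template.format(tgt=target_a, other=target_b)
    reverse = template.format(tgt=target_b, other=target_a)
    return affirmative, reverse

tmpl = "{tgt} are better leaders than {other}."
a, b = mirrored_probe_pair(tmpl, "Group A", "Group B")
```

Because both framings share one template, any systematic difference in a model's agreement across the pair can be attributed to the targets rather than to wording.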
|
https://arxiv.org/abs/2601.06861
|
Academic Papers
|
svg
|
1c98a02d6847204b34b69ef87f47cd4e7cc7dad9a0010de400edbf15dcf7fa4e
|
2026-01-13T00:00:00-05:00
|
qAttCNN - Self Attention Mechanism for Video QoE Prediction in Encrypted Traffic
|
arXiv:2601.06862v1 Announce Type: new Abstract: The rapid growth of multimedia consumption, driven by major advances in mobile devices since the mid-2000s, has led to widespread use of video conferencing applications (VCAs) such as Zoom and Google Meet, as well as instant messaging applications (IMAs) like WhatsApp and Telegram, which increasingly support video conferencing as a core feature. Many of these systems rely on the Web Real-Time Communication (WebRTC) protocol, enabling direct peer-to-peer media streaming without requiring a third-party server to relay data, reducing latency and facilitating real-time communication. Despite WebRTC's potential, adverse network conditions can degrade streaming quality and consequently reduce users' Quality of Experience (QoE). Maintaining high QoE therefore requires continuous monitoring and timely intervention when QoE begins to deteriorate. While content providers can often estimate QoE by directly comparing transmitted and received media, this task is significantly more challenging for internet service providers (ISPs). End-to-end encryption, commonly used by modern VCAs and IMAs, prevents ISPs from accessing the original media stream, leaving only Quality of Service (QoS) and routing information available. To address this limitation, we propose the QoE Attention Convolutional Neural Network (qAttCNN), a model that leverages the packet-size parameter of the traffic to infer two no-reference QoE metrics, viz. BRISQUE and frames per second (FPS). We evaluate qAttCNN on a custom dataset collected from WhatsApp video calls and compare it against existing QoE models. Using mean absolute error percentage (MAEP), our approach achieves 2.14% error for BRISQUE and 7.39% for FPS prediction.
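The headline metric, mean absolute error percentage (MAEP), can be written down directly (a sketch of the common definition, mean(|true - pred| / |true|) * 100; the paper's exact normalization may differ):

```python
def maep(y_true, y_pred):
    """Mean absolute error percentage over paired ground-truth and
    predicted values (e.g., BRISQUE scores or FPS)."""
    return 100.0 * sum(abs(t - p) / abs(t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

# 10% relative error on the first value, 0% on the second -> 5% MAEP.
err = maep([30.0, 25.0], [33.0, 25.0])
```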
|
https://arxiv.org/abs/2601.06862
|
Academic Papers
|
svg
|
5751f155644b35579783abf5618b713924b264b04dea14186c97438532ac88b9
|
2026-01-13T00:00:00-05:00
|
United We Defend: Collaborative Membership Inference Defenses in Federated Learning
|
arXiv:2601.06866v1 Announce Type: new Abstract: Membership inference attacks (MIAs), which determine whether a specific data point was included in the training set of a target model, have posed severe threats in federated learning (FL). Unfortunately, existing MIA defenses, typically applied independently to each client in FL, are ineffective against powerful trajectory-based MIAs that exploit temporal information throughout the training process to infer membership status. In this paper, we investigate a new FL defense scenario driven by heterogeneous privacy needs and privacy-utility trade-offs, where only a subset of clients are defended, as well as a collaborative defense mode where clients cooperate to mitigate membership privacy leakage. To this end, we introduce CoFedMID, a collaborative defense framework against MIAs in FL, which limits local model memorization of training samples and, through a defender coalition, enhances privacy protection and model utility. Specifically, CoFedMID consists of three modules: a class-guided partition module for selecting local training samples, a utility-aware compensation module that recycles contributive samples and prevents their overconfidence, and an aggregation-neutral perturbation module that injects noise into client updates that cancels out at the coalition level. Extensive experiments on three datasets show that our defense framework significantly reduces the performance of seven MIAs while incurring only a small utility loss. These results are consistently verified across various defense settings.
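The aggregation-neutral idea, noise that perturbs individual client updates but cancels when the coalition's updates are summed, can be sketched as follows (a minimal construction; the actual CoFedMID module and its noise distribution may differ):

```python
import random

def coalition_noise(n_clients, dim, scale=0.1, seed=0):
    """Generate one noise vector per coalition client such that the
    vectors sum exactly to zero: the first n-1 clients draw Gaussian
    noise, and the last client takes the negated column sums, so the
    perturbation cancels at server-side aggregation."""
    rng = random.Random(seed)
    noises = [[rng.gauss(0.0, scale) for _ in range(dim)]
              for _ in range(n_clients - 1)]
    last = [-sum(col) for col in zip(*noises)]  # forces zero column sums
    return noises + [last]

noise = coalition_noise(n_clients=4, dim=3)
col_sums = [sum(col) for col in zip(*noise)]  # ~0 per coordinate
```

Each client's individual update is thus masked from an observer, while the aggregated model sees no perturbation at all.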
|
https://arxiv.org/abs/2601.06866
|
Academic Papers
|
svg
|
2694a7006451c9562101164a7f7a06bbdf288ba22e391d7207fc5669747df4ce
|
2026-01-13T00:00:00-05:00
|
U-MASK: User-adaptive Spatio-Temporal Masking for Personalized Mobile AI Applications
|
arXiv:2601.06867v1 Announce Type: new Abstract: Personalized mobile artificial intelligence applications are widely deployed, yet they are expected to infer user behavior from sparse and irregular histories under a continuously evolving spatio-temporal context. This setting induces a fundamental tension among three requirements, i.e., immediacy to adapt to recent behavior, stability to resist transient noise, and generalization to support long-horizon prediction and cold-start users. Most existing approaches satisfy at most two of these requirements, resulting in an inherent impossibility triangle in data-scarce, non-stationary personalization. To address this challenge, we model mobile behavior as a partially observed spatio-temporal tensor and unify short-term adaptation, long-horizon forecasting, and cold-start recommendation as a conditional completion problem, where a user- and task-specific mask specifies which coordinates are treated as evidence. We propose U-MASK, a user-adaptive spatio-temporal masking method that allocates evidence budgets based on user reliability and task sensitivity. To enable mask generation under sparse observations, U-MASK learns a compact, task-agnostic user representation from app and location histories via U-SCOPE, which serves as the sole semantic conditioning signal. A shared diffusion transformer then performs mask-guided generative completion while preserving observed evidence, so personalization and task differentiation are governed entirely by the mask and the user representation. Experiments on real-world mobile datasets demonstrate consistent improvements over state-of-the-art methods across short-term prediction, long-horizon forecasting, and cold-start settings, with the largest gains under severe data sparsity. The code and dataset will be available at https://github.com/NICE-HKU/U-MASK.
|
https://arxiv.org/abs/2601.06867
|
Academic Papers
|
svg
|
182d929a4128097d041ddaedbb9c3d8afa6f7870bcf5ac40568d2271d134653f
|
2026-01-13T00:00:00-05:00
|
DaQ-MSA: Denoising and Qualifying Diffusion Augmentations for Multimodal Sentiment Analysis
|
arXiv:2601.06870v1 Announce Type: new Abstract: Multimodal large language models (MLLMs) have demonstrated strong performance on vision-language tasks, yet their effectiveness on multimodal sentiment analysis remains constrained by the scarcity of high-quality training data, which limits accurate multimodal understanding and generalization. To alleviate this bottleneck, we leverage diffusion models to perform semantics-preserving augmentation on the video and audio modalities, expanding the multimodal training distribution. However, increasing data quantity alone is insufficient, as diffusion-generated samples exhibit substantial quality variation and noisy augmentations may degrade performance. We therefore propose DaQ-MSA (Denoising and Qualifying Diffusion Augmentations for Multimodal Sentiment Analysis), which introduces a quality scoring module to evaluate the reliability of augmented samples and assign adaptive training weights. By down-weighting low-quality samples and emphasizing high-fidelity ones, DaQ-MSA enables more stable learning. By integrating the generative capability of diffusion models with the semantic understanding of MLLMs, our approach provides a robust and generalizable automated augmentation strategy for training MLLMs without any human annotation or additional supervision.
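The quality-scoring idea, down-weighting low-quality diffusion augmentations and emphasizing high-fidelity ones, amounts to a weighted training objective. A minimal sketch, assuming quality scores already lie in [0, 1] (the real module learns these scores rather than taking them as given):

```python
def quality_weighted_loss(sample_losses, quality_scores):
    """Combine per-sample losses using quality scores as adaptive
    training weights: low-quality (noisy) augmentations contribute
    little, high-fidelity samples dominate the objective."""
    total_w = sum(quality_scores)
    return sum(q * l for q, l in zip(quality_scores, sample_losses)) / total_w

# A clean sample (quality 1.0) and a worthless augmentation (quality 0.0):
loss = quality_weighted_loss([1.0, 10.0], [1.0, 0.0])
```

With the noisy sample fully down-weighted, its large loss has no effect on the update, which is the stabilizing behavior the abstract describes.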
|
https://arxiv.org/abs/2601.06870
|
Academic Papers
|
svg
|
0e8ab5cc6914871ce3bf94b9570529853af8d89c2e62ce40031f45ff7f633af8
|
2026-01-13T00:00:00-05:00
|
Applying Embedding-Based Retrieval to Airbnb Search
|
arXiv:2601.06873v1 Announce Type: new Abstract: The goal of Airbnb search is to match guests with the ideal accommodation that fits their travel needs. This is a challenging problem, as popular search locations can have around a hundred thousand available homes, and guests themselves have a wide variety of preferences. Furthermore, the launch of new product features, such as flexible date search, significantly increased the number of eligible homes per search query. As such, there is a need for a sophisticated retrieval system which can provide high-quality candidates with low latency in a way that integrates with the overall ranking stack. This paper details our journey to build an efficient and high-quality retrieval system for Airbnb search. We describe the key unique challenges we encountered when implementing an Embedding-Based Retrieval (EBR) system for a two-sided marketplace like Airbnb -- such as the dynamic nature of the inventory, a lengthy user funnel with multiple stages, and a variety of product surfaces. We cover unique insights when modeling the retrieval problem, how to build robust evaluation systems, and design choices for online serving. The EBR system was launched to production and powers several use-cases such as regular search, flexible date search, and promotional emails for marketing campaigns. The system demonstrated statistically-significant improvements in key metrics, such as booking conversion, via A/B testing.
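The core of any EBR system is scoring candidate embeddings against a query embedding and keeping the top k. A brute-force cosine-similarity sketch (illustrative listing ids and 2-D embeddings; production systems use approximate nearest-neighbor indexes rather than an exhaustive scan):

```python
import math

def top_k(query, listings, k):
    """Rank every candidate listing embedding by cosine similarity to
    the query embedding and return the k best listing ids."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))
    scored = sorted(listings.items(), key=lambda kv: cos(query, kv[1]),
                    reverse=True)
    return [lid for lid, _ in scored[:k]]

listings = {"home_1": (1.0, 0.0), "home_2": (0.9, 0.1), "home_3": (0.0, 1.0)}
best = top_k((1.0, 0.05), listings, k=2)
```

The hard parts the paper describes (dynamic inventory, multi-stage funnels, serving) sit around this kernel rather than inside it.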
|
https://arxiv.org/abs/2601.06873
|
Academic Papers
|
svg
|
cd3043250d58993cdac85cfb75a27a10c8e67a8200eb3ccdf628d0003395436a
|
2026-01-13T00:00:00-05:00
|
MVGGT: Multimodal Visual Geometry Grounded Transformer for Multiview 3D Referring Expression Segmentation
|
arXiv:2601.06874v1 Announce Type: new Abstract: Most existing 3D referring expression segmentation (3DRES) methods rely on dense, high-quality point clouds, while real-world agents such as robots and mobile phones operate with only a few sparse RGB views and strict latency constraints. We introduce Multi-view 3D Referring Expression Segmentation (MV-3DRES), where the model must recover scene structure and segment the referred object directly from sparse multi-view images. Traditional two-stage pipelines, which first reconstruct a point cloud and then perform segmentation, often yield low-quality geometry, produce coarse or degraded target regions, and run slowly. We propose the Multimodal Visual Geometry Grounded Transformer (MVGGT), an efficient end-to-end framework that integrates language information into sparse-view geometric reasoning through a dual-branch design. Training in this setting exposes a critical optimization barrier, termed Foreground Gradient Dilution (FGD), where sparse 3D signals lead to weak supervision. To resolve this, we introduce Per-view No-target Suppression Optimization (PVSO), which provides stronger and more balanced gradients across views, enabling stable and efficient learning. To support consistent evaluation, we build MVRefer, a benchmark that defines standardized settings and metrics for MV-3DRES. Experiments show that MVGGT establishes the first strong baseline and achieves both high accuracy and fast inference, outperforming existing alternatives. Code and models are publicly available at https://mvggt.github.io.
|
https://arxiv.org/abs/2601.06874
|
Academic Papers
|
svg
|
d0d54d320d17b0f99be8cea0ba3abdc94a3ac7defec8457caa59aa0559cc67ee
|
2026-01-13T00:00:00-05:00
|
An Ubuntu-Guided Large Language Model Framework for Cognitive Behavioral Mental Health Dialogue
|
arXiv:2601.06875v1 Announce Type: new Abstract: South Africa's escalating mental health crisis, compounded by limited access to culturally responsive care, calls for innovative and contextually grounded interventions. While large language models show considerable promise for mental health support, their predominantly Western-centric training data limit cultural and linguistic applicability in African contexts. This study introduces a proof-of-concept framework that integrates cognitive behavioral therapy with the African philosophy of Ubuntu to create a culturally sensitive, emotionally intelligent, AI-driven mental health dialogue system. Guided by a design science research methodology, the framework applies both deep theoretical and therapeutic adaptations as well as surface-level linguistic and communicative cultural adaptations. Key CBT techniques, including behavioral activation and cognitive restructuring, were reinterpreted through Ubuntu principles that emphasize communal well-being, spiritual grounding, and interconnectedness. A culturally adapted dataset was developed through iterative processes of language simplification, spiritual contextualization, and Ubuntu-based reframing. The fine-tuned model was evaluated through expert-informed case studies, employing UniEval for conversational quality assessment alongside additional measures of CBT reliability and cultural linguistic alignment. Results demonstrate that the model effectively engages in empathetic, context-aware dialogue aligned with both therapeutic and cultural objectives. Although real-time end-user testing has not yet been conducted, the model underwent rigorous review and supervision by domain specialist clinical psychologists. The findings highlight the potential of culturally embedded emotional intelligence to enhance the contextual relevance, inclusivity, and effectiveness of AI-driven mental health interventions across African settings.
|
https://arxiv.org/abs/2601.06875
|
Academic Papers
|
svg
|
eedaeb88e5b41f618a40f9128cc6ffe7384ddaf070f83698ecd7389a55318260
|
2026-01-13T00:00:00-05:00
|
Personality-Aware Reinforcement Learning for Persuasive Dialogue with LLM-Driven Simulation
|
arXiv:2601.06877v1 Announce Type: new Abstract: Effective persuasive dialogue agents adapt their strategies to individual users, accounting for the evolution of their psychological states and intentions throughout conversations. We present a personality-aware reinforcement learning approach comprising three main modules: (1) a Strategy-Oriented Interaction Framework, which serves as an agenda-based strategy controller that selects strategy-level actions and generates responses via Maximal Marginal Relevance (MMR) retrieval to ensure contextual relevance, diversity, and scalable data generation; (2) Personality-Aware User Representation Learning, which produces an 81-dimensional mixed-type embedding predicted at each turn from recent exchanges and appended to the reinforcement learning state; and (3) a Dueling Double DQN (D3QN) model and reward prediction, in which the policy is conditioned on dialogue history and turn-level personality estimates and trained using a composite reward incorporating agreement intent, donation amount, and change-of-mind penalties. We use an agenda-based LLM simulation pipeline to generate diverse interactions, in which personality estimates are inferred from the generated utterances. Experiments on the PersuasionForGood (P4G) dataset augmented with simulated dialogues reveal three main findings: (i) turn-level personality conditioning improves policy adaptability and cumulative persuasion rewards; (ii) LLM-driven simulation enhances generalization to unseen user behaviors; and (iii) incorporating a change-of-mind penalty reduces post-agreement retractions while slightly improving donation outcomes. These results demonstrate that structured interaction, dynamic personality estimation, and behaviorally informed rewards together yield more effective persuasive policies.
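The MMR retrieval step mentioned in module (1) greedily picks items that are relevant to the query yet dissimilar from items already selected. A standard-formula sketch (the similarity values and lambda are illustrative, not the paper's settings):

```python
def mmr_select(query_sim, pairwise_sim, k, lam=0.7):
    """Maximal Marginal Relevance: at each step pick the candidate i
    maximizing lam * relevance(i) - (1 - lam) * max similarity of i
    to anything already selected."""
    selected = []
    candidates = list(range(len(query_sim)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((pairwise_sim[i][j] for j in selected),
                             default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Item 1 is nearly a duplicate of item 0, so MMR prefers diverse item 2.
query_sim = [0.9, 0.85, 0.7]
pairwise = [[1.0, 0.95, 0.1], [0.95, 1.0, 0.1], [0.1, 0.1, 1.0]]
picks = mmr_select(query_sim, pairwise, k=2)
```

This is what lets the retrieved responses stay both contextually relevant and diverse, as the abstract claims.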
|
https://arxiv.org/abs/2601.06877
|
Academic Papers
|
svg
|
d278b6382958f671b94807fe6fba9dd3315397160df8c34f0a316ccdf8c9e1aa
|
2026-01-13T00:00:00-05:00
|
Fast frequency response with heterogeneous communication delay management under the SCION Internet architecture
|
arXiv:2601.06879v1 Announce Type: new Abstract: System operators can increasingly exploit distributed energy resources (DERs) and controllable loads (CLs) to provide frequency response services. In conventional practice, communication between the system operator and flexible devices relies on the Border Gateway Protocol (BGP)-based Internet. However, existing BGP-based architectures face challenges in providing latency-guaranteed control, while direct private and proprietary communication networks lead to additional deployment and maintenance costs. In contrast, the SCION-based Internet architecture supports latency-minimum path selection, which makes it suitable for latency-sensitive frequency contingency services such as fast frequency response (FFR). Hence, this paper proposes a real-time reserve dispatch framework to optimally select a portfolio of flexible devices to deliver FFR services using the SCION-based Internet. First, an analytical expression of the system frequency dynamics with respect to heterogeneous communication latencies is derived. Next, a cyber-physical co-optimization model is formulated to jointly schedule communication paths and physical flexibility resources for real-time FFR provision. To improve the computation efficiency, we propose a heuristic FFR allocation algorithm to approximate the optimal response portfolio, integrating contributions from both DERs and CLs. Numerical case studies demonstrate the benefits of the proposed algorithm and its capability to approximate the optimality of the reserves allocation while significantly reducing the computation time.
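The heuristic FFR allocation can be illustrated as a greedy fill: sort flexible devices by their best available communication-path latency and draw reserves from the fastest responders first. This is a sketch under that simple greedy assumption; the field names, units, and the allocation rule itself are illustrative, not the paper's co-optimization model:

```python
def allocate_ffr(devices, required_mw):
    """Greedy reserve dispatch: devices are (name, path latency in ms,
    capacity in MW); fill the FFR requirement starting with the
    lowest-latency device, returning the plan and any shortfall."""
    plan = []
    remaining = required_mw
    for name, latency_ms, capacity_mw in sorted(devices, key=lambda d: d[1]):
        if remaining <= 0:
            break
        take = min(capacity_mw, remaining)  # don't over-commit a device
        plan.append((name, take))
        remaining -= take
    return plan, remaining

devices = [("der_1", 40, 2.0), ("cl_1", 10, 1.5), ("der_2", 25, 1.0)]
plan, shortfall = allocate_ffr(devices, required_mw=3.0)
```

The paper's framework additionally schedules the SCION paths themselves and accounts for the frequency dynamics under each latency, which this sketch omits.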
|
https://arxiv.org/abs/2601.06879
|
Academic Papers
|
svg
|
dd91e9bc5cc5dc40dc352e4a8bef301894838fc0183ee19dcab7e402990502b5
|
2026-01-13T00:00:00-05:00
|
Unsupervised Domain Adaptation with SAM-RefiSeR for Enhanced Brain Tumor Segmentation
|
arXiv:2601.06882v1 Announce Type: new Abstract: Unsupervised Domain Adaptation with SAM-RefiSeR for Enhanced Brain Tumor Segmentation
|
https://arxiv.org/abs/2601.06882
|
Academic Papers
|
svg
|
26e7493f6b124593570fa48b0b5ca2acf48f9ddf6466bc784253d046f345c04b
|
2026-01-13T00:00:00-05:00
|
MixRI: Mixing Features of Reference Images for Novel Object Pose Estimation
|
arXiv:2601.06883v1 Announce Type: new Abstract: We present MixRI, a lightweight network that solves the CAD-based novel object pose estimation problem in RGB images. It can be instantly applied to a novel object at test time without fine-tuning. We design our network to meet the demands of real-world applications, emphasizing reduced memory requirements and fast inference time. Unlike existing works that utilize many reference images and have large network parameter counts, we directly match points based on the multi-view information between the query and reference images with a lightweight network. Thanks to our reference image fusion strategy, we significantly decrease the number of reference images, thus decreasing the time needed to process these images and the memory required to store them. Furthermore, with our lightweight network, our method requires less inference time. Despite using fewer reference images, experiments on the seven core datasets of the BOP challenge show that our method achieves results comparable to methods that require more reference images and larger network parameter counts.
|
https://arxiv.org/abs/2601.06883
|
Academic Papers
|
svg
|
4541e3bfbe2be633ae855b08a1ea060bacb14e0904e43c18c97f71bbb614f339
|
2026-01-13T00:00:00-05:00
|
Paraphrasing Adversarial Attack on LLM-as-a-Reviewer
|
arXiv:2601.06884v1 Announce Type: new Abstract: The use of large language models (LLMs) in peer review systems has attracted growing attention, making it essential to examine their potential vulnerabilities. Prior attacks rely on prompt injection, which alters manuscript content and conflates injection susceptibility with evaluation robustness. We propose the Paraphrasing Adversarial Attack (PAA), a black-box optimization method that searches for paraphrased sequences yielding higher review scores while preserving semantic equivalence and linguistic naturalness. PAA leverages in-context learning, using previous paraphrases and their scores to guide candidate generation. Experiments across five ML and NLP conferences with three LLM reviewers and five attacking models show that PAA consistently increases review scores without changing the paper's claims. Human evaluation confirms that generated paraphrases maintain meaning and naturalness. We also find that attacked papers exhibit increased perplexity in reviews, offering a potential detection signal, and that paraphrasing submissions can partially mitigate attacks.
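PAA's outer loop, generate a paraphrase conditioned on past (paraphrase, score) pairs, query the black-box reviewer, keep the best, can be sketched with stub functions in place of the LLMs. Everything here is a toy stand-in: the real attack calls an LLM paraphraser and an LLM reviewer and also enforces semantic equivalence, which this sketch omits:

```python
def paa_search(seed_text, paraphrase_fn, score_fn, rounds=5):
    """Black-box hill-climbing over paraphrases. The history of
    (candidate, score) pairs is passed to the paraphraser so it can
    condition on past attempts, mirroring PAA's in-context learning."""
    best_text, best_score = seed_text, score_fn(seed_text)
    history = [(best_text, best_score)]
    for _ in range(rounds):
        candidate = paraphrase_fn(best_text, history)
        s = score_fn(candidate)
        history.append((candidate, s))
        if s > best_score:  # keep only improving paraphrases
            best_text, best_score = candidate, s
    return best_text, best_score

# Toy stand-ins: the "reviewer" rewards exclamation marks and the
# "paraphraser" appends one; both replace external LLM calls.
score = lambda t: t.count("!")
paraphrase = lambda t, hist: t + "!"
text, s = paa_search("novel method", paraphrase, score, rounds=3)
```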
|
https://arxiv.org/abs/2601.06884
|
Academic Papers
|
svg
|
4af622502c3b864ede9dde05bcc8e22b5fe6b9fba024c10f04d7cb754f293422
|
2026-01-13T00:00:00-05:00
|
Understanding the Performance Behaviors of End-to-End Protein Design Pipelines on GPUs
|
arXiv:2601.06885v1 Announce Type: new Abstract: Recent computational advances enable protein design pipelines to run end-to-end on GPUs, yet their heterogeneous computational behaviors remain undercharacterized at the system level. We implement and profile a representative pipeline at both component and full-pipeline granularities across varying inputs and hyperparameters. Our characterization identifies generally low GPU utilization and high sensitivity to sequence length and sampling strategies. We outline future research directions based on these insights and release an open-source pipeline and profiling scripts to facilitate further studies.
|
https://arxiv.org/abs/2601.06885
|
Academic Papers
|
svg
|
47c098f45b6ba41ac4b98bbaa478c6a45a1a8e32ea8baa5c85ac2c6337e2f46b
|
2026-01-13T00:00:00-05:00
|
Learning-Augmented Performance Model for Tensor Product Factorization in High-Order FEM
|
arXiv:2601.06886v1 Announce Type: new Abstract: Accurate performance prediction is essential for optimizing scientific applications on modern high-performance computing (HPC) architectures. Widely used performance models primarily focus on cache and memory bandwidth, which is suitable for many memory-bound workloads. However, it is unsuitable for highly arithmetic intensive cases such as the sum-factorization with tensor $n$-mode product kernels, which are an optimization technique for high-order finite element methods (FEM). On processors with relatively high single instruction multiple data (SIMD) instruction latency, such as the Fujitsu A64FX, the performance of these kernels is strongly influenced by loop-body splitting strategies. Memory-bandwidth-oriented models are therefore not appropriate for evaluating these splitting configurations, and a model that directly reflects instruction-level efficiency is required. To address this need, we develop a dependency-chain-based analytical formulation that links loop-splitting configurations to instruction dependencies in the tensor $n$-mode product kernel. We further use XGBoost to estimate key parameters in the analytical model that are difficult to model explicitly. Evaluations show that the learning-augmented model outperforms the widely used standard Roofline and Execution-Cache-Memory (ECM) models. On the Fujitsu A64FX processor, the learning-augmented model achieves mean absolute percentage errors (MAPE) between 1% and 24% for polynomial orders ($P$) from 1 to 15. In comparison, the standard Roofline and ECM models yield errors of 42%-256% and 5%-117%, respectively. On the Intel Xeon Gold 6230 processor, the learning-augmented model achieves MAPE values from 1% to 13% for $P$=1 to $P$=14, and 24% at $P$=15. In contrast, the standard Roofline and ECM models produce errors of 1%-73% and 8%-112% for $P$=1 to $P$=15, respectively.
|
https://arxiv.org/abs/2601.06886
|
Academic Papers
|
svg
|
1452310b28ea5fd588b26d38c461e0877600bcc9f650b16f95d5ec12dff24ec0
|
2026-01-13T00:00:00-05:00
|
Observability-Enhanced Target Motion Estimation via Bearing-Box: Theory and MAV Applications
|
arXiv:2601.06887v1 Announce Type: new Abstract: Monocular vision-based target motion estimation is a fundamental challenge in numerous applications. This work introduces a novel bearing-box approach that fully leverages modern 3D detection measurements that are widely available nowadays but have not been well explored for motion estimation so far. Unlike existing methods that rely on restrictive assumptions such as isotropic target shape and lateral motion, our bearing-box estimator can estimate both the target's motion and its physical size without these assumptions by exploiting the information buried in a 3D bounding box. When applied to multi-rotor micro aerial vehicles (MAVs), the estimator yields an interesting advantage: it further removes the need for higher-order motion assumptions by exploiting the unique coupling between MAV's acceleration and thrust. This is particularly significant, as higher-order motion assumptions are widely believed to be necessary in state-of-the-art bearing-based estimators. We support our claims with rigorous observability analyses and extensive experimental validation, demonstrating the estimator's superior performance in real-world scenarios.
|
https://arxiv.org/abs/2601.06887
|
Academic Papers
|
svg
|
1479d2e2190ed12dbcfbcdf0d00a631f8867743cc2930bf1271dd8941ef6df9b
|
2026-01-13T00:00:00-05:00
|
CLIMP: Contrastive Language-Image Mamba Pretraining
|
arXiv:2601.06891v1 Announce Type: new Abstract: Contrastive Language-Image Pre-training (CLIP) relies on Vision Transformers whose attention mechanism is susceptible to spurious correlations and scales quadratically with resolution. To address these limitations, we present CLIMP, the first fully Mamba-based contrastive vision-language model, which replaces both the vision and text encoders with Mamba. The new architecture encodes sequential structure in both vision and language, with VMamba capturing visual spatial inductive biases, reducing reliance on spurious correlations and producing an embedding space favorable for cross-modal retrieval and out-of-distribution robustness, surpassing OpenAI's CLIP-ViT-B by 7.5% on ImageNet-O. CLIMP naturally supports variable input resolutions without positional encoding interpolation or specialized training, achieving up to 6.6% higher retrieval accuracy at 16x training resolution while using 5x less memory and 1.8x fewer FLOPs. The autoregressive text encoder further overcomes CLIP's fixed context limitation, enabling dense captioning retrieval. Our findings suggest that Mamba exhibits advantageous properties for vision-language learning, making it a compelling alternative to Transformer-based CLIP.
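The contrastive objective shared by CLIP-style models, CLIMP included, is a symmetric cross-entropy over an image-text similarity matrix whose matched pairs lie on the diagonal. A minimal sketch of that standard loss (the toy similarity matrices are illustrative; this says nothing about the Mamba encoders themselves):

```python
import math

def clip_style_loss(sim):
    """Symmetric InfoNCE: sim[i][j] is the scaled similarity between
    image i and text j; cross-entropy is taken over rows (image->text)
    and columns (text->image), then averaged."""
    n = len(sim)
    def row_ce(m):
        total = 0.0
        for i in range(n):
            denom = sum(math.exp(x) for x in m[i])
            total += -math.log(math.exp(m[i][i]) / denom)
        return total / n
    transposed = [[sim[j][i] for j in range(n)] for i in range(n)]
    return 0.5 * (row_ce(sim) + row_ce(transposed))

# Well-aligned pairs (high diagonal) vs. mismatched pairs.
loss_matched = clip_style_loss([[5.0, 0.0], [0.0, 5.0]])
loss_mismatched = clip_style_loss([[0.0, 5.0], [5.0, 0.0]])
```

The loss is near zero when matched pairs dominate and large when the pairing is shuffled, which is the training signal regardless of encoder architecture.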
|
https://arxiv.org/abs/2601.06891
|
Academic Papers
|
svg
|
e063feb79818cc9c215b493bf9fad0e00178b21bfa0ce6abf0b84832c41662d4
|
2026-01-13T00:00:00-05:00
|
How Do Ports Organise Innovation? Linking Port Governance, Ownership, and Living Labs
|
arXiv:2601.06894v1 Announce Type: new Abstract: Ports are pivotal to decarbonisation and resilience, yet port studies rarely examine how ownership and decision rights shape the process and outcomes of sustainability and digital pilots. Living Lab (LL) scholarship offers strong concepts, but limited sector-grounded explanation of LL-governance fit in ports. We develop and apply a governance-LL fit framework linking port governance and ownership to four LL pillars: co-creation, real-life setting, iterative learning, and institutional embedding (multi-level decision-making). We apply the framework in a comparative case study of two analytically contrasting ports, anchored in port-defined priorities: an Energy Community pilot in Aalborg and a Green Coordinator function in Trelleborg. Using an LL macro-meso-micro lens, we distinguish the stable constellation of actors and mandates (macro), the governance of specific projects (meso), and the methods used to generate and test solutions (micro). Findings show that Landlord governance offers contract- and procurement-based landing zones (concessions/leases and tender clauses) that can codify LL outputs and support scaling across tenants and infrastructures. Tool/Public Service governance embeds learning mainly through SOPs, procurement specifications, and municipal coordination, enabling internal operational gains but limiting external codification without bespoke agreements. Across both ports, key needs are clear role definition, sustained stakeholder engagement, and timely alignment with decision windows. Overall, LL effectiveness is governance-contingent, reflecting where decision rights sit and which instruments embed learning into routine practice.
|
https://arxiv.org/abs/2601.06894
|
Academic Papers
|
svg
|
34663c7124cf91af319935d76041aa72454c38e882462cd1096261c375703eb8
|
2026-01-13T00:00:00-05:00
|
Resilience by Design: A KPI for Heavy-Duty Megawatt Charging
|
arXiv:2601.06898v1 Announce Type: new Abstract: We introduce a stressor-agnostic Resilience Key Performance Indicator (Resilience KPI) for megawatt charging stations (MCS) serving heavy-duty vehicles. Beyond routine performance statistics (e.g., availability, throughput), the KPI quantifies a site's ability to anticipate, operate under degradation, and recover from disruptions using observable signals already in the framework: ride-through capability, restoration speed, service under N-1, expected unserved charging energy, and queue impacts. The headline score is normalised to 0-100 for fair cross-site and cross-vendor benchmarking, with optional stressor-specific breakouts (grid, ICT, thermal, flooding, on-site incidents) for diagnostics and robustness checks. DATEX II provides a solid baseline for resilience KPIs centred on infrastructure inventory, status, and pricing, while additional KPIs, especially around grid capacity, on-site flexibility, heavy-vehicle geometry, environmental hardening, maintenance, and market exposure, are essential for a complete resilience picture and will require extensions or complementary data sources. The KPI is designed for monthly/quarterly reporting to support design and operational decisions and cost-benefit assessment of mitigations (e.g., backup power, spares, procedures). It offers a consistent, transparent methodology that consolidates heterogeneous logs and KPIs into a single, auditable indicator, making resilience comparable across sites, vendors, and jurisdictions.
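A 0-100 headline score built from heterogeneous signals typically means min-max normalization plus weighted aggregation. A sketch under that assumption (the signal names, weights, and bounds are illustrative and not the paper's definition):

```python
def resilience_kpi(metrics, weights, bounds):
    """Composite KPI: min-max normalize each observed signal to [0, 1]
    using fixed bounds (clamped so outliers can't escape the scale),
    combine with weights, and scale to a 0-100 headline score."""
    total_w = sum(weights[name] for name in metrics)
    score = 0.0
    for name, value in metrics.items():
        lo, hi = bounds[name]
        norm = min(max((value - lo) / (hi - lo), 0.0), 1.0)
        score += weights[name] * norm
    return 100.0 * score / total_w

metrics = {"ride_through": 0.9, "restoration_speed": 0.5}
weights = {"ride_through": 2.0, "restoration_speed": 1.0}
bounds = {"ride_through": (0.0, 1.0), "restoration_speed": (0.0, 1.0)}
kpi = resilience_kpi(metrics, weights, bounds)
```

Fixing the bounds and weights once, site-agnostically, is what makes the resulting score comparable across sites and vendors.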
|
https://arxiv.org/abs/2601.06898
|
Academic Papers
|
svg
|
9fdcdf12c1ba1a80df80ab20e18d00c84f3c9dc74bf278f7b79eb12e06232339
|
2026-01-13T00:00:00-05:00
|
V2P: Visual Attention Calibration for GUI Grounding via Background Suppression and Center Peaking
|
arXiv:2601.06899v1 Announce Type: new Abstract: Precise localization of GUI elements is crucial for the development of GUI agents. Traditional methods rely on bounding box or center-point regression, neglecting spatial interaction uncertainty and visual-semantic hierarchies. Recent methods incorporate attention mechanisms but still face two key issues: (1) failing to process background regions causes attention drift away from the desired area, and (2) uniformly modeling the target UI element fails to distinguish between its center and edges, leading to click imprecision. Inspired by how humans visually process and interact with GUI elements, we propose the Valley-to-Peak (V2P) method to address these issues. To mitigate background distractions, V2P introduces a suppression attention mechanism that minimizes the model's focus on irrelevant regions to highlight the intended region. For the issue of center-edge distinction, V2P applies a Fitts' Law-inspired approach by modeling GUI interactions as 2D Gaussian heatmaps where the weight gradually decreases from the center towards the edges. The weight distribution follows a Gaussian function, with the variance determined by the target's size. Consequently, V2P effectively isolates the target area and teaches the model to concentrate on the most essential point of the UI element. The model trained by V2P achieves 92.4% and 52.5% on the two benchmarks ScreenSpot-v2 and ScreenSpot-Pro, respectively (see the main results figure in the paper). Ablations further confirm each component's contribution, underscoring V2P's generalizability in precise GUI grounding tasks and its potential for real-world deployment in future GUI agents.
|
https://arxiv.org/abs/2601.06899
|
Academic Papers
|
svg
|
25dbf246fed42edae521cc45d129f2d7ea5656308482793dbf9bb0d75bbda123
|
2026-01-13T00:00:00-05:00
|
Santa Clara 3D: Digital Reconstruction and Storytelling of a Francoist Concentration Camp
|
arXiv:2601.06902v1 Announce Type: new Abstract: This paper explores the potential of digital reconstruction and interactive storytelling to preserve historically suppressed sites. The main objective of an interdisciplinary team of data scientists from the MEMORISE project and associates of the memory association Asociación Recuerdo y Dignidad was to preserve the memory of the Francoist Santa Clara concentration camp in Soria, Spain, through the use of digital technology. Combining archival research, 3D modelling, 360-degree photography, and web development, a prototype digital platform was created to visualise the transformation of the site across three historical phases: its origin as a convent, its use as a Francoist concentration camp, and its present-day condition. The platform allows users to navigate through spatial and temporal layers. Clickable media markers encourage exploration and interaction. Drawing on principles of participatory design, narrative visualisation, and open-ended user engagement, the project demonstrates how digital tools can support memory work, public engagement, and historical reflection. Our low-cost concept is especially adaptable to other physical sites that have been erased or forgotten.
|
https://arxiv.org/abs/2601.06902
|
Academic Papers
|
svg
|
decae0afb065eab5db0a68c62b27d70b58d626209354a6532d0089668c1a6783
|
2026-01-13T00:00:00-05:00
|
Divergence-Based Adaptive Aggregation for Byzantine Robust Federated Learning
|
arXiv:2601.06903v1 Announce Type: new Abstract: Inherent client drifts caused by data heterogeneity, as well as vulnerability to Byzantine attacks within the system, hinder effective model training and convergence in federated learning (FL). This paper presents two new frameworks, named DiveRgence-based Adaptive aGgregation (DRAG) and Byzantine-Resilient DRAG (BR-DRAG), to mitigate client drifts and resist attacks while expediting training. DRAG designs a reference direction and a metric named degree of divergence to quantify the deviation of local updates. Accordingly, each worker can align its local update via linear calibration without extra communication cost. BR-DRAG refines DRAG under Byzantine attacks by maintaining a vetted root dataset at the server to produce trusted reference directions. The workers' updates can then be calibrated to mitigate divergence caused by malicious attacks. We analytically prove that DRAG and BR-DRAG achieve fast convergence for non-convex models under partial worker participation, data heterogeneity, and Byzantine attacks. Experiments validate the effectiveness of DRAG and its superior performance over state-of-the-art methods in handling client drifts, and highlight the robustness of BR-DRAG in maintaining resilience against data heterogeneity and diverse Byzantine attacks.
|
https://arxiv.org/abs/2601.06903
|
Academic Papers
|
svg
|
96e9ecd4259711835cdac3ce43cf7079981c89e719c14832d15acb02a527963d
|
2026-01-13T00:00:00-05:00
|
Large Artificial Intelligence Models for Future Wireless Communications
|
arXiv:2601.06906v1 Announce Type: new Abstract: The anticipated integration of large artificial intelligence (AI) models with wireless communications is expected to usher in a transformative wave in the forthcoming information age. As wireless networks grow in complexity, the traditional methodologies employed for optimization and management face increasing challenges. Large AI models have extensive parameter spaces and enhanced learning capabilities and can offer innovative solutions to these challenges. They are also capable of learning, adapting, and optimizing in real time. We introduce the potential and challenges of integrating large AI models into wireless communications, highlighting existing AI-driven applications and inherent challenges for future large AI models. In this paper, we propose the architecture of large AI models for future wireless communications; introduce their advantages in data analysis, resource allocation, and real-time adaptation; and discuss the potential challenges and corresponding solutions concerning energy, architecture design, privacy, security, and ethical and regulatory issues. In addition, we explore the potential future directions of large AI models in wireless communications, laying the groundwork for forthcoming research in this area.
|
https://arxiv.org/abs/2601.06906
|
Academic Papers
|
svg
|