| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 4f476332bf8eb3a3d02f719ffe135f371c47f1ecf198485baf88b2e631c13326 | 2026-01-07T00:00:00-05:00 | Hypothesize-Then-Verify: Speculative Root Cause Analysis for Microservices with Pathwise Parallelism | arXiv:2601.02736v1 Announce Type: new Abstract: Microservice systems have become the backbone of cloud-native enterprise applications due to their resource elasticity, loosely coupled architecture, and lightweight deployment. Yet, the intrinsic complexity and dynamic runtime interactions of such systems inevitably give rise to anomalies. Ensuring system reliability therefore hinges on effective root cause analysis (RCA), which entails not only localizing the source of anomalies but also characterizing the underlying failures in a timely and interpretable manner. Recent advances in intelligent RCA techniques, particularly those powered by large language models (LLMs), have demonstrated promising capabilities, as LLMs reduce reliance on handcrafted features while offering cross-platform adaptability, task generalization, and flexibility. However, existing LLM-based methods still suffer from two critical limitations: (a) limited exploration diversity, which undermines accuracy, and (b) heavy dependence on large-scale LLMs, which results in slow inference. To overcome these challenges, we propose SpecRCA, a speculative root cause analysis framework for microservices that adopts a *hypothesize-then-verify* paradigm. SpecRCA first leverages a hypothesis drafting module to rapidly generate candidate root causes, and then employs a parallel root cause verifier to efficiently validate them. Preliminary experiments on the AIOps 2022 dataset demonstrate that SpecRCA achieves superior accuracy and efficiency compared to existing approaches, highlighting its potential as a practical solution for scalable and interpretable RCA in complex microservice environments. | https://arxiv.org/abs/2601.02736 | Academic Papers | svg |
| 4c46521590974be651474bf9134fbecee690899d1e288f9338dd9e74ac116013 | 2026-01-07T00:00:00-05:00 | Unveiling and Bridging the Functional Perception Gap in MLLMs: Atomic Visual Alignment and Hierarchical Evaluation via PET-Bench | arXiv:2601.02737v1 Announce Type: new Abstract: While Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in tasks such as abnormality detection and report generation for anatomical modalities, their capability in functional imaging remains largely unexplored. In this work, we identify and quantify a fundamental functional perception gap: the inability of current vision encoders to decode functional tracer biodistribution independent of morphological priors. Identifying Positron Emission Tomography (PET) as the quintessential modality to investigate this disconnect, we introduce PET-Bench, the first large-scale functional imaging benchmark comprising 52,308 hierarchical QA pairs from 9,732 multi-site, multi-tracer PET studies. Extensive evaluation of 19 state-of-the-art MLLMs reveals a critical safety hazard termed the Chain-of-Thought (CoT) hallucination trap. We observe that standard CoT prompting, widely considered to enhance reasoning, paradoxically decouples linguistic generation from visual evidence in PET, producing clinically fluent but factually ungrounded diagnoses. To resolve this, we propose Atomic Visual Alignment (AVA), a simple fine-tuning strategy that enforces the mastery of low-level functional perception prior to high-level diagnostic reasoning. Our results demonstrate that AVA effectively bridges the perception gap, transforming CoT from a source of hallucination into a robust inference tool and improving diagnostic accuracy by up to 14.83%. Code and data are available at https://github.com/yezanting/PET-Bench. | https://arxiv.org/abs/2601.02737 | Academic Papers | svg |
| ff62f9520fec14f7275550a0920662ab7c822e8a6f9965c6dbf8597aedc5c543 | 2026-01-07T00:00:00-05:00 | Optimizing Control-Friendly Trajectories with Self-Supervised Residual Learning | arXiv:2601.02738v1 Announce Type: new Abstract: Real-world physics can only be analytically modeled with a certain level of precision for modern intricate robotic systems. As a result, tracking aggressive trajectories accurately can be challenging due to the existence of residual physics during controller synthesis. This paper presents a self-supervised residual learning and trajectory optimization framework to address the aforementioned challenges. At first, unknown dynamic effects on the closed-loop model are learned and treated as residuals of the nominal dynamics, jointly forming a hybrid model. We show that learning with analytic gradients can be achieved using only trajectory-level data while enjoying accurate long-horizon prediction with an arbitrary integration step size. Subsequently, a trajectory optimizer is developed to compute the optimal reference trajectory with the residual physics along it minimized. This yields trajectories that are friendly to the downstream control level. The agile flight of quadrotors illustrates that by utilizing the hybrid dynamics, the proposed optimizer outputs aggressive motions that can be precisely tracked. | https://arxiv.org/abs/2601.02738 | Academic Papers | svg |
| d74eb4b35cd70bdb805913c9ac5a94c025c86708a5f9a460ae863036a28ee3bd | 2026-01-07T00:00:00-05:00 | Mitigating Prompt-Induced Hallucinations in Large Language Models via Structured Reasoning | arXiv:2601.02739v1 Announce Type: new Abstract: To address hallucination issues in large language models (LLMs), this paper proposes a method for mitigating prompt-induced hallucinations. Building on a knowledge distillation chain-style model, we introduce a code module to guide knowledge-graph exploration and incorporate code as part of the chain-of-thought prompt, forming an external knowledge input that provides more accurate and structured information to the model. Based on this design, we develop an improved knowledge distillation chain-style model and leverage it to analyze and constrain the reasoning process of LLMs, thereby improving inference accuracy. We empirically evaluate the proposed approach using GPT-4 and LLaMA-3.3 on multiple public datasets. Experimental results demonstrate that incorporating code modules significantly enhances the model's ability to capture contextual information and effectively mitigates prompt-induced hallucinations. Specifically, HIT@1, HIT@3, and HIT@5 improve by 15.64%, 13.38%, and 13.28%, respectively. Moreover, the proposed method achieves HIT@1, HIT@3, and HIT@5 scores exceeding 95% across several evaluation settings. These results indicate that the proposed approach substantially reduces hallucination behavior while improving the accuracy and verifiability of large language models. | https://arxiv.org/abs/2601.02739 | Academic Papers | svg |
| 3b845772539aab88cff64b8b3fc926702b50dd89d1dd8b1f60919060b83af04f | 2026-01-07T00:00:00-05:00 | Language Hierarchization Provides the Optimal Solution to Human Working Memory Limits | arXiv:2601.02740v1 Announce Type: new Abstract: Language is a uniquely human trait, conveying information efficiently by organizing word sequences in sentences into hierarchical structures. A central question persists: Why is human language hierarchical? In this study, we show that hierarchization optimally solves the challenge of our limited working memory capacity. We established a likelihood function that directly quantifies how well the average number of units under a given language processing mechanism aligns with human working memory capacity (WMC). The maximum likelihood estimate (MLE) of this function, theta_MLE, turns out to be the mean number of units. Through computational simulations of symbol sequences and validation analyses of natural language sentences, we uncover that hierarchical processing far surpasses linear processing in constraining the theta_MLE values under the human WMC limit as sequence/sentence length increases. It also shows a converging pattern related to children's WMC development. These results suggest that constructing hierarchical structures optimizes the processing efficiency of sequential language input while staying within memory constraints, genuinely explaining the universal hierarchical nature of human language. | https://arxiv.org/abs/2601.02740 | Academic Papers | svg |
| 77374317ecc3c5c576ee1d36fcd4e0fbf71d85f4d0ee6e734d795163f69c49cb | 2026-01-07T00:00:00-05:00 | SYNAPSE: Empowering LLM Agents with Episodic-Semantic Memory via Spreading Activation | arXiv:2601.02744v1 Announce Type: new Abstract: While Large Language Models (LLMs) excel at generalized reasoning, standard retrieval-augmented approaches fail to address the disconnected nature of long-term agentic memory. To bridge this gap, we introduce Synapse (Synergistic Associative Processing Semantic Encoding), a unified memory architecture that transcends static vector similarity. Drawing from cognitive science, Synapse models memory as a dynamic graph where relevance emerges from spreading activation rather than pre-computed links. By integrating lateral inhibition and temporal decay, the system dynamically highlights relevant sub-graphs while filtering interference. We implement a Triple Hybrid Retrieval strategy that fuses geometric embeddings with activation-based graph traversal. Comprehensive evaluations on the LoCoMo benchmark show that Synapse significantly outperforms state-of-the-art methods in complex temporal and multi-hop reasoning tasks, offering a robust solution to the "Contextual Tunneling" problem. Our code and data will be made publicly available upon acceptance. | https://arxiv.org/abs/2601.02744 | Academic Papers | svg |
| 38088c095435d131b4b4fa49fbaba812d210477ab08f59252e088a90e6b5a609 | 2026-01-07T00:00:00-05:00 | D$^3$R-DETR: DETR with Dual-Domain Density Refinement for Tiny Object Detection in Aerial Images | arXiv:2601.02747v1 Announce Type: new Abstract: Detecting tiny objects plays a vital role in remote sensing intelligent interpretation, as these objects often carry critical information for downstream applications. However, due to the extremely limited pixel information and significant variations in object density, mainstream Transformer-based detectors often suffer from slow convergence and inaccurate query-object matching. To address these challenges, we propose D$^3$R-DETR, a novel DETR-based detector with Dual-Domain Density Refinement. By fusing spatial and frequency domain information, our method refines low-level feature maps and utilizes their rich details to predict a more accurate object density map, thereby guiding the model to precisely localize tiny objects. Extensive experiments on the AI-TOD-v2 dataset demonstrate that D$^3$R-DETR outperforms existing state-of-the-art detectors for tiny object detection. | https://arxiv.org/abs/2601.02747 | Academic Papers | svg |
| 2ebe0811a23b4703d51bfedcd13218282d10cff9d27463d70fb1190a2ab5ca87 | 2026-01-07T00:00:00-05:00 | The Path Ahead for Agentic AI: Challenges and Opportunities | arXiv:2601.02749v1 Announce Type: new Abstract: The evolution of Large Language Models (LLMs) from passive text generators to autonomous, goal-driven systems represents a fundamental shift in artificial intelligence. This chapter examines the emergence of agentic AI systems that integrate planning, memory, tool use, and iterative reasoning to operate autonomously in complex environments. We trace the architectural progression from statistical models to transformer-based systems, identifying capabilities that enable agentic behavior: long-range reasoning, contextual awareness, and adaptive decision-making. The chapter provides three contributions: (1) a synthesis of how LLM capabilities extend toward agency through reasoning-action-reflection loops; (2) an integrative framework describing the core components (perception, memory, planning, and tool execution) that bridge LLMs with autonomous behavior; (3) a critical assessment of applications and persistent challenges in safety, alignment, reliability, and sustainability. Unlike existing surveys, we focus on the architectural transition from language understanding to autonomous action, emphasizing the technical gaps that must be resolved before deployment. We identify critical research priorities, including verifiable planning, scalable multi-agent coordination, persistent memory architectures, and governance frameworks. Responsible advancement requires simultaneous progress in technical robustness, interpretability, and ethical safeguards to realize potential while mitigating risks of misalignment and unintended consequences. | https://arxiv.org/abs/2601.02749 | Academic Papers | svg |
| 7b7823269f3d2cc2b7fd4e57b777cdf6ef4cd7625255e375935c182603dda0fb | 2026-01-07T00:00:00-05:00 | Ahead of the Spread: Agent-Driven Virtual Propagation for Early Fake News Detection | arXiv:2601.02750v1 Announce Type: new Abstract: Early detection of fake news is critical for mitigating its rapid dissemination on social media, which can severely undermine public trust and social stability. Recent advancements show that incorporating propagation dynamics can significantly enhance detection performance compared to previous content-only approaches. However, this remains challenging at early stages due to the absence of observable propagation signals. To address this limitation, we propose AVOID, an agent-driven virtual propagation framework for early fake news detection. AVOID reformulates early detection as a new paradigm of evidence generation, where propagation signals are actively simulated rather than passively observed. Leveraging LLM-powered agents with differentiated roles and data-driven personas, AVOID realistically constructs early-stage diffusion behaviors without requiring real propagation data. The resulting virtual trajectories provide complementary social evidence that enriches content-based detection, while a denoising-guided fusion strategy aligns simulated propagation with content semantics. Extensive experiments on benchmark datasets demonstrate that AVOID consistently outperforms state-of-the-art baselines, highlighting the effectiveness and practical value of virtual propagation augmentation for early fake news detection. The code and data are available at https://github.com/Ironychen/AVOID. | https://arxiv.org/abs/2601.02750 | Academic Papers | svg |
| 509665b3a0f963041c719b9d83d48d0d982b3a73d1ef96a3d13028b719f52b42 | 2026-01-07T00:00:00-05:00 | Window-based Membership Inference Attacks Against Fine-tuned Large Language Models | arXiv:2601.02751v1 Announce Type: new Abstract: Most membership inference attacks (MIAs) against Large Language Models (LLMs) rely on global signals, like average loss, to identify training data. This approach, however, dilutes the subtle, localized signals of memorization, reducing attack effectiveness. We challenge this global-averaging paradigm, positing that membership signals are more pronounced within localized contexts. We introduce WBC (Window-Based Comparison), which exploits this insight through a sliding window approach with sign-based aggregation. Our method slides windows of varying sizes across text sequences, with each window casting a binary vote on membership based on loss comparisons between target and reference models. By ensembling votes across geometrically spaced window sizes, we capture memorization patterns from token-level artifacts to phrase-level structures. Extensive experiments across eleven datasets demonstrate that WBC substantially outperforms established baselines, achieving higher AUC scores and 2-3 times improvements in detection rates at low false positive thresholds. Our findings reveal that aggregating localized evidence is fundamentally more effective than global averaging, exposing critical privacy vulnerabilities in fine-tuned LLMs. | https://arxiv.org/abs/2601.02751 | Academic Papers | svg |
| a155c429c66310868e675a4120e48eefbe55fcc1ba69ca368fc6bda57c99fdb1 | 2026-01-07T00:00:00-05:00 | EComStage: Stage-wise and Orientation-specific Benchmarking for Large Language Models in E-commerce | arXiv:2601.02752v1 Announce Type: new Abstract: Large Language Model (LLM)-based agents are increasingly deployed in e-commerce applications to assist customer services in tasks such as product inquiries, recommendations, and order management. Existing benchmarks primarily evaluate whether these agents successfully complete the final task, overlooking the intermediate reasoning stages that are crucial for effective decision-making. To address this gap, we propose EComStage, a unified benchmark for evaluating agent-capable LLMs across the comprehensive stage-wise reasoning process: Perception (understanding user intent), Planning (formulating an action plan), and Action (executing the decision). EComStage evaluates LLMs through seven separate representative tasks spanning diverse e-commerce scenarios, with all samples human-annotated and quality-checked. Unlike prior benchmarks that focus only on customer-oriented interactions, EComStage also evaluates merchant-oriented scenarios, including promotion management, content review, and operational support relevant to real-world applications. We evaluate a wide range of over 30 LLMs, spanning from 1B to over 200B parameters, including open-source models and closed-source APIs, revealing stage/orientation-specific strengths and weaknesses. Our results provide fine-grained, actionable insights for designing and optimizing LLM-based agents in real-world e-commerce settings. | https://arxiv.org/abs/2601.02752 | Academic Papers | svg |
| 940e2c095c6b5b10f761def6bb015ae6071deb2a1dc4c9f29225d6540a7dc64d | 2026-01-07T00:00:00-05:00 | Q-Regularized Generative Auto-Bidding: From Suboptimal Trajectories to Optimal Policies | arXiv:2601.02754v1 Announce Type: new Abstract: With the rapid development of e-commerce, auto-bidding has become a key asset in optimizing advertising performance under diverse advertiser environments. The current approaches focus on reinforcement learning (RL) and generative models. These efforts imitate offline historical behaviors by utilizing a complex structure with expensive hyperparameter tuning. The suboptimal trajectories further exacerbate the difficulty of policy learning. To address these challenges, we propose QGA, a novel Q-value regularized Generative Auto-bidding method. In QGA, we propose to plug a Q-value regularization with a double Q-learning strategy into the Decision Transformer backbone. This design enables joint optimization of policy imitation and action-value maximization, allowing the learned bidding policy to both leverage experience from the dataset and alleviate the adverse impact of the suboptimal trajectories. Furthermore, to safely explore the policy space beyond the data distribution, we propose a Q-value guided dual-exploration mechanism, in which the DT model is conditioned on multiple return-to-go targets and locally perturbed actions. This entire exploration process is dynamically guided by the aforementioned Q-value module, which provides principled evaluation for each candidate action. Experiments on public benchmarks and simulation environments demonstrate that QGA consistently achieves superior or highly competitive results compared to existing alternatives. Notably, in large-scale real-world A/B testing, QGA achieves a 3.27% increase in Ad GMV and a 2.49% improvement in Ad ROI. | https://arxiv.org/abs/2601.02754 | Academic Papers | svg |
| 7df25b025ee0d7e25067dadba778244750cf453cc081624afe766e503fc8bc7d | 2026-01-07T00:00:00-05:00 | LLM Agent Framework for Intelligent Change Analysis in Urban Environment using Remote Sensing Imagery | arXiv:2601.02757v1 Announce Type: new Abstract: Existing change detection methods often lack the versatility to handle diverse real-world queries and the intelligence for comprehensive analysis. This paper presents a general agent framework, integrating Large Language Models (LLM) with vision foundation models to form ChangeGPT. A hierarchical structure is employed to mitigate hallucination. The agent was evaluated on a curated dataset of 140 questions categorized by real-world scenarios, encompassing various question types (e.g., Size, Class, Number) and complexities. The evaluation assessed the agent's tool selection ability (Precision/Recall) and overall query accuracy (Match). ChangeGPT, especially with a GPT-4-turbo backend, demonstrated superior performance, achieving a 90.71% Match rate. Its strength lies particularly in handling change-related queries requiring multi-step reasoning and robust tool selection. Practical effectiveness was further validated through a real-world urban change monitoring case study in Qianhai Bay, Shenzhen. By providing intelligence, adaptability, and multi-type change analysis, ChangeGPT offers a powerful solution for decision-making in remote sensing applications. | https://arxiv.org/abs/2601.02757 | Academic Papers | svg |
| df1f4c87d697fa8be0d4dbfb14122f3b71be05c8b2f845345780ef9d0d23deca | 2026-01-07T00:00:00-05:00 | Towards Zero-Shot Point Cloud Registration Across Diverse Scales, Scenes, and Sensor Setups | arXiv:2601.02759v1 Announce Type: new Abstract: Some deep learning-based point cloud registration methods struggle with zero-shot generalization, often requiring dataset-specific hyperparameter tuning or retraining for new environments. We identify three critical limitations: (a) fixed user-defined parameters (e.g., voxel size, search radius) that fail to generalize across varying scales, (b) learned keypoint detectors exhibit poor cross-domain transferability, and (c) absolute coordinates amplify scale mismatches between datasets. To address these three issues, we present BUFFER-X, a training-free registration framework that achieves zero-shot generalization through: (a) geometric bootstrapping for automatic hyperparameter estimation, (b) distribution-aware farthest point sampling to replace learned detectors, and (c) patch-level coordinate normalization to ensure scale consistency. Our approach employs hierarchical multi-scale matching to extract correspondences across local, middle, and global receptive fields, enabling robust registration in diverse environments. For efficiency-critical applications, we introduce BUFFER-X-Lite, which reduces total computation time by 43% (relative to BUFFER-X) through early exit strategies and fast pose solvers while preserving accuracy. We evaluate on a comprehensive benchmark comprising 12 datasets spanning object-scale, indoor, and outdoor scenes, including cross-sensor registration between heterogeneous LiDAR configurations. Results demonstrate that our approach generalizes effectively without manual tuning or prior knowledge of test domains. Code: https://github.com/MIT-SPARK/BUFFER-X. | https://arxiv.org/abs/2601.02759 | Academic Papers | svg |
| 826b317699d3f7d495eb52a0eec6906c03821b3c5c7807795a0b7c82a9a7c737 | 2026-01-07T00:00:00-05:00 | AnyDepth: Depth Estimation Made Easy | arXiv:2601.02760v1 Announce Type: new Abstract: Monocular depth estimation aims to recover the depth information of 3D scenes from 2D images. Recent work has made significant progress, but its reliance on large-scale datasets and complex decoders has limited its efficiency and generalization ability. In this paper, we propose a lightweight and data-centric framework for zero-shot monocular depth estimation. We first adopt DINOv3 as the visual encoder to obtain high-quality dense features. Secondly, to address the inherent drawbacks of the complex structure of the DPT, we design the Simple Depth Transformer (SDT), a compact transformer-based decoder. Compared to the DPT, it uses a single-path feature fusion and upsampling process to reduce the computational overhead of cross-scale feature fusion, achieving higher accuracy while reducing the number of parameters by approximately 85%-89%. Furthermore, we propose a quality-based filtering strategy to filter out harmful samples, thereby reducing dataset size while improving overall training quality. Extensive experiments on five benchmarks demonstrate that our framework surpasses the DPT in accuracy. This work highlights the importance of balancing model design and data quality for achieving efficient and generalizable zero-shot depth estimation. Code: https://github.com/AIGeeksGroup/AnyDepth. Website: https://aigeeksgroup.github.io/AnyDepth. | https://arxiv.org/abs/2601.02760 | Academic Papers | svg |
| e7bc5315d36c8a4e3a9f3d736d1084766b1ea2e7229e96054b53c831bdd6e9cb | 2026-01-07T00:00:00-05:00 | Unified Meta-Representation and Feedback Calibration for General Disturbance Estimation | arXiv:2601.02762v1 Announce Type: new Abstract: Precise control in modern robotic applications remains an open issue due to unknown time-varying disturbances. Existing meta-learning-based approaches require a shared representation of environmental structures, which lacks flexibility for realistic non-structural disturbances. Moreover, representation error and distribution shifts can lead to heavy degradation in prediction accuracy. This work presents a generalizable disturbance estimation framework that builds on meta-learning and feedback-calibrated online adaptation. By extracting features from a finite time window of past observations, a unified representation that effectively captures general non-structural disturbances can be learned without predefined structural assumptions. The online adaptation process is subsequently calibrated by a state-feedback mechanism to attenuate the learning residual originating from the representation and generalizability limitations. Theoretical analysis shows that simultaneous convergence of both the online learning error and the disturbance estimation error can be achieved. Through the unified meta-representation, our framework effectively estimates multiple rapidly changing disturbances, as demonstrated by quadrotor flight experiments. See the project page for video, supplementary material and code: https://nonstructural-metalearn.github.io. | https://arxiv.org/abs/2601.02762 | Academic Papers | svg |
| 9a2b9e9c3330c08dae29fe54a5ecd6216b98e1a2c8a5a5e18faff12483c0535c | 2026-01-07T00:00:00-05:00 | ClearAIR: A Human-Visual-Perception-Inspired All-in-One Image Restoration | arXiv:2601.02763v1 Announce Type: new Abstract: All-in-One Image Restoration (AiOIR) has advanced significantly, offering promising solutions for complex real-world degradations. However, most existing approaches rely heavily on degradation-specific representations, often resulting in oversmoothing and artifacts. To address this, we propose ClearAIR, a novel AiOIR framework inspired by Human Visual Perception (HVP) and designed with a hierarchical, coarse-to-fine restoration strategy. First, leveraging the global priority of early HVP, we employ a Multimodal Large Language Model (MLLM)-based Image Quality Assessment (IQA) model for overall evaluation. Unlike conventional IQA, our method integrates cross-modal understanding to more accurately characterize complex, composite degradations. Building upon this overall assessment, we then introduce a region awareness and task recognition pipeline. A semantic cross-attention module, leveraging a semantic guidance unit, first produces coarse semantic prompts. Guided by this regional context, a degradation-aware module implicitly captures region-specific degradation characteristics, enabling more precise local restoration. Finally, to recover fine details, we propose an internal clue reuse mechanism. It operates in a self-supervised manner to mine and leverage the intrinsic information of the image itself, substantially enhancing detail restoration. Experimental results show that ClearAIR achieves superior performance across diverse synthetic and real-world datasets. | https://arxiv.org/abs/2601.02763 | Academic Papers | svg |
| aacd31eb02f4776f87e4f1808c021cc9fd6d7b9744d2fda606f1701406bb377e | 2026-01-07T00:00:00-05:00 | Netflix Artwork Personalization via LLM Post-training | arXiv:2601.02764v1 Announce Type: new Abstract: Large language models (LLMs) have demonstrated success in various applications of user recommendation and personalization across e-commerce and entertainment. On many entertainment platforms such as Netflix, users typically interact with a wide range of titles, each represented by an artwork. Since users have diverse preferences, an artwork that appeals to one type of user may not resonate with another with different preferences. Given this user heterogeneity, our work explores the novel problem of personalized artwork recommendations according to diverse user preferences. Similar to the multi-dimensional nature of users' tastes, titles contain different themes and tones that may appeal to different viewers. For example, the same title might feature both heartfelt family drama and intense action scenes. Users who prefer romantic content may like the artwork emphasizing emotional warmth between the characters, while those who prefer action thrillers may find high-intensity action scenes more intriguing. Rather than a one-size-fits-all approach, we conduct post-training of pre-trained LLMs to make personalized artwork recommendations, selecting the most preferred visual representation of a title for each user and thereby improving user satisfaction and engagement. Our experimental results with Llama 3.1 8B models (trained on a dataset of 110K data points and evaluated on 5K held-out user-title pairs) show that the post-trained LLMs achieve 3-5% improvements over the Netflix production model, suggesting a promising direction for granular personalized recommendations using LLMs. | https://arxiv.org/abs/2601.02764 | Academic Papers | svg |
| 3e7b898069419fada0d010a531f23a5c30cabc6527ca6b7b93d74938f8cb9f43 | 2026-01-07T00:00:00-05:00 | Advancing Assistive Robotics: Multi-Modal Navigation and Biophysical Monitoring for Next-Generation Wheelchairs | arXiv:2601.02766v1 Announce Type: new Abstract: Assistive electric-powered wheelchairs (EPWs) have become essential mobility aids for people with disabilities such as amyotrophic lateral sclerosis (ALS), post-stroke hemiplegia, and dementia-related mobility impairment. This work presents a novel multi-modal EPW control system designed to prioritize patient needs while allowing seamless switching between control modes. Four complementary interfaces, namely joystick, speech, hand gesture, and electrooculography (EOG), are integrated with a continuous vital sign monitoring framework measuring heart rate variability, oxygen saturation (SpO2), and skin temperature. This combination enables greater patient independence while allowing caregivers to maintain real-time supervision and early intervention capability. Two-point calibration of the biophysical sensors against clinical reference devices resulted in root mean square errors of at most 2 bpm for heart rate, 0.5 degrees Celsius for skin temperature, and 1 percent for SpO2. Experimental evaluation involved twenty participants with mobility impairments executing a total of 500 indoor navigation commands. The achieved command recognition accuracies were 99 percent for joystick control, 97 ± 2 percent for speech, and 95 ± 3 percent for hand gesture, with an average closed-loop latency of 20 ± 0.5 milliseconds. Caregivers receive real-time alerts through an Android application following encrypted cloud transmission of physiological data. By integrating multi-modal mobility control with cloud-enabled health monitoring and reporting latency and energy budgets, the proposed prototype addresses key challenges in assistive robotics, contributes toward compliance with ISO 7176-31 and IEC 80601-2-78 safety standards, and establishes a foundation for future adaptive machine learning enhancements. | https://arxiv.org/abs/2601.02766 | Academic Papers | svg |
| d4d9550f3d84f12bd566902890ab75c0d41cd7eeb2c5bf8655a39c0b3e428131 | 2026-01-07T00:00:00-05:00 | AbductiveMLLM: Boosting Visual Abductive Reasoning Within MLLMs | arXiv:2601.02771v1 Announce Type: new Abstract: Visual abductive reasoning (VAR) is a challenging task that requires AI systems to infer the most likely explanation for incomplete visual observations. While recent MLLMs develop strong general-purpose multimodal reasoning capabilities, they fall short in abductive inference, as compared to human beings. To bridge this gap, we draw inspiration from the interplay between verbal and pictorial abduction in human cognition, and propose to strengthen abduction of MLLMs by mimicking such dual-mode behavior. Concretely, we introduce AbductiveMLLM, comprising two synergistic components: REASONER and IMAGINER. The REASONER operates in the verbal domain. It first explores a broad space of possible explanations using a blind LLM and then prunes visually incongruent hypotheses based on cross-modal causal alignment. The remaining hypotheses are introduced into the MLLM as targeted priors, steering its reasoning toward causally coherent explanations. The IMAGINER, on the other hand, further guides MLLMs by emulating human-like pictorial thinking. It conditions a text-to-image diffusion model on both the input video and the REASONER's output embeddings to "imagine" plausible visual scenes that correspond to the verbal explanation, thereby enriching MLLMs' contextual grounding. The two components are trained jointly in an end-to-end manner. Experiments on standard VAR benchmarks show that AbductiveMLLM achieves state-of-the-art performance, consistently outperforming traditional solutions and advanced MLLMs. | https://arxiv.org/abs/2601.02771 | Academic Papers | svg |
| 42987956c897956835e3c68a115a658cdce2264db34d444363cb1b2608856924 | 2026-01-07T00:00:00-05:00 | From Slaves to Synths? Superintelligence and the Evolution of Legal Personality | arXiv:2601.02773v1 Announce Type: new Abstract: This essay examines the evolving concept of legal personality through the lens of recent developments in artificial intelligence and the possible emergence of superintelligence. Legal systems have long been open to extending personhood to non-human entities, most prominently corporations, for instrumental or inherent reasons. Instrumental rationales emphasize accountability and administrative efficiency, whereas inherent ones appeal to moral worth and autonomy. Neither is yet sufficient to justify conferring personhood on AI. Nevertheless, the acceleration of technological autonomy may lead us to reconsider how law conceptualizes agency and responsibility. Drawing on comparative jurisprudence, corporate theory, and the emerging literature on AI governance, the paper argues that existing frameworks can address short-term accountability gaps, but the eventual development of superintelligence may force a paradigmatic shift in our understanding of law itself. In such a speculative future, legal personality may depend less on the cognitive sophistication of machines than on humanity's ability to preserve our own moral and institutional sovereignty. | https://arxiv.org/abs/2601.02773 | Academic Papers | svg |
| 90d3c661523c642f8a50f174e50c5e0213a589fbc6dcffdf842194d4acc9ff1f | 2026-01-07T00:00:00-05:00 | Experience and Adaptation in AI-mediated Hiring Systems: A Combined Analysis of Online Discourse and Interface Design | arXiv:2601.02775v1 Announce Type: new Abstract: Automated interviewing tools are now widely adopted to manage recruitment at scale, often replacing early human screening with algorithmic assessments. While these systems are promoted as efficient and consistent, they also generate new forms of uncertainty for applicants. Efforts to soften these experiences through human-like design features have only partially addressed underlying concerns. To understand how candidates interpret and cope with such systems, we conducted a mixed empirical investigation that combined analysis of online discussions, responses from more than one hundred and fifty survey participants, and follow-up conversations with seventeen interviewees. The findings point to several recurring problems, including unclear evaluation criteria, limited organizational responsibility for automated outcomes, and a lack of practical support for preparation. Many participants described the technology as far less advanced than advertised, leading them to infer how decisions might be made in the absence of guidance. This speculation often intensified stress and emotional strain. Furthermore, the minimal sense of interpersonal engagement contributed to feelings of detachment and disposability. Based on these observations, we propose design directions aimed at improving clarity, accountability, and candidate support in AI-mediated hiring processes. | https://arxiv.org/abs/2601.02775 | Academic Papers | svg |
| 0447ab3ca643c8c0529e2d39c55467d36487a9e266502c1100f215f1048aae8a | 2026-01-07T00:00:00-05:00 | UniSRCodec: Unified and Low-Bitrate Single Codebook Codec with Sub-Band Reconstruction | arXiv:2601.02776v1 Announce Type: new Abstract: Neural Audio Codecs (NACs) can reduce transmission overhead by performing compact compression and reconstruction, which also aim to bridge the gap between continuous and discrete signals. Existing NACs can be divided into two categories: multi-codebook and single-codebook codecs. Multi-codebook codecs face challenges such as structural complexity and difficulty in adapting to downstream tasks, while single-codebook codecs, though structurally simpler, suffer from low fidelity, ineffective modeling of unified audio, and an inability to support modeling of high-frequency audio. We propose UniSRCodec, a single-codebook codec that supports high sampling rates, low bandwidth, high fidelity, and unified audio modeling. We analyze the inefficiency of waveform-based compression, introduce a time and frequency compression method using the Mel-spectrogram, and cooperate with a Vocoder to recover the phase information of the original audio. Moreover, we propose a sub-band reconstruction technique to achieve high-quality compression across both low and high frequency bands. Subjective and objective experimental results demonstrate that UniSRCodec achieves state-of-the-art (SOTA) performance among cross-domain single-codebook codecs with a token rate of only 40, and its reconstruction quality is comparable to that of certain multi-codebook methods. Our demo page is available at https://wxzyd123.github.io/unisrcodec. | https://arxiv.org/abs/2601.02776 | Academic Papers | svg |
| 9d46adcbdf8c9941e30ea5d628ec7a5741961894fa13981dee2c2ce338987da0 | 2026-01-07T00:00:00-05:00 | M-SEVIQ: A Multi-band Stereo Event Visual-Inertial Quadruped-based Dataset for Perception under Rapid Motion and Challenging Illumination | arXiv:2601.02777v1 Announce Type: new Abstract: Agile locomotion in legged robots poses significant challenges for visual perception. Traditional frame-based cameras often fail in these scenarios because they produce blurred images, particularly under low-light conditions. In contrast, event cameras capture changes in brightness asynchronously, offering low latency, high temporal resolution, and high dynamic range. These advantages make them suitable for robust perception during rapid motion and under challenging illumination. However, existing event camera datasets exhibit limitations in stereo configurations and multi-band sensing domains under various illumination conditions. To address this gap, we present M-SEVIQ, a multi-band stereo event visual and inertial quadruped dataset collected using a Unitree Go2 equipped with stereo event cameras, a frame-based camera, an inertial measurement unit (IMU), and joint encoders. This dataset contains more than 30 real-world sequences captured across different velocity levels, illumination wavelengths, and lighting conditions. In addition, comprehensive calibration data, including intrinsic, extrinsic, and temporal alignments, are provided to facilitate accurate sensor fusion and benchmarking. Our M-SEVIQ can be used to support research in agile robot perception, sensor fusion, semantic segmentation and multi-modal vision in challenging environments. | https://arxiv.org/abs/2601.02777 | Academic Papers | svg |
| 3facaa4d5c2383bfcb3a4105cc45e0119043e32cbf97d4429a75a011662e4185 | 2026-01-07T00:00:00-05:00 | Closing the Reality Gap: Zero-Shot Sim-to-Real Deployment for Dexterous Force-Based Grasping and Manipulation | arXiv:2601.02778v1 Announce Type: new Abstract: Human-like dexterous hands with multiple fingers offer human-level manipulation capabilities, but training control policies that can directly deploy on real hardware remains difficult due to contact-rich physics and imperfect actuation. We close this gap with a practical sim-to-real reinforcement learning (RL) framework that utilizes dense tactile feedback combined with joint torque sensing to explicitly regulate physical interactions. To enable effective sim-to-real transfer, we introduce (i) a computationally fast tactile simulation that computes distances between dense virtual tactile units and the object via parallel forward kinematics, providing high-rate, high-resolution touch signals needed by RL; (ii) a current-to-torque calibration that eliminates the need for torque sensors on dexterous hands by mapping motor current to joint torque; and (iii) actuator dynamics modeling to bridge the actuation gaps with randomization of non-ideal effects such as backlash and torque-speed saturation. Using an asymmetric actor-critic PPO pipeline trained entirely in simulation, our policies deploy directly to a five-finger hand. The resulting policies demonstrated two essential skills: (1) command-based, controllable grasp force tracking, and (2) reorientation of objects in the hand, both of which were robustly executed without fine-tuning on the robot. By combining tactile and torque in the observation space with effective sensing/actuation modeling, our system provides a practical solution to achieve reliable dexterous manipulation. To our knowledge, this is the first demonstration of controllable grasping on a multi-finger dexterous hand trained entirely in simulation and transferred zero-shot on real hardware. | https://arxiv.org/abs/2601.02778 | Academic Papers | svg |
| 28ae9c3af0340f3a0d76ac840ba65c9fc3dabf82250f6b2753477aff6ad864e0 | 2026-01-07T00:00:00-05:00 | Hierarchical Preemptive Holistic Collaborative Systems for Embodied Multi-Agent Systems: Framework, Hybrid Stability, and Scalability Analysis | arXiv:2601.02779v1 Announce Type: new Abstract: The coordination of Embodied Multi-Agent Systems in constrained physical environments requires a rigorous balance between safety, scalability, and efficiency. Traditional decentralized approaches, e.g., reactive collision avoidance, are prone to local minima or reciprocal yielding standoffs due to the lack of future intent awareness. In contrast, centralized planning suffers from intractable computational complexity and single-point-of-failure vulnerabilities. To address these limitations, we propose the Hierarchical Preemptive Holistic Collaborative (Prollect) framework, which generalizes the Preemptive Holistic Collaborative System (PHCS) by decomposing the global coordination problem into topologically connected subspace optimizations. We formalize the system as a Hybrid Automaton and introduce a three-stage receding horizon mechanism (frozen execution, preliminary planning, proactive look-ahead windows) with explicit padding to prevent races between coordination dissemination and intent updates. Notably, we design a robust timing protocol with a mandatory Idle Buffer that acts as a dwell-time constraint to eliminate Zeno behaviors and ensure computational stability under jitter. Furthermore, we formalize a Shadow Agent protocol to guarantee seamless trajectory consistency across subspace boundaries, which we treat as an Input-to-State Stability (ISS) problem. | https://arxiv.org/abs/2601.02779 | Academic Papers | svg |
| 0e9b302e1bd4a556047119075c8876ecc04adf00726833eb22f7fd18f9c03d6b | 2026-01-07T00:00:00-05:00 | MiMo-V2-Flash Technical Report | arXiv:2601.02780v1 Announce Type: new Abstract: We present MiMo-V2-Flash, a Mixture-of-Experts (MoE) model with 309B total parameters and 15B active parameters, designed for fast, strong reasoning and agentic capabilities. MiMo-V2-Flash adopts a hybrid attention architecture that interleaves Sliding Window Attention (SWA) with global attention, with a 128-token sliding window under a 5:1 hybrid ratio. The model is pre-trained on 27 trillion tokens with Multi-Token Prediction (MTP), employing a native 32k context length that is subsequently extended to 256k. To efficiently scale post-training compute, MiMo-V2-Flash introduces a novel Multi-Teacher On-Policy Distillation (MOPD) paradigm. In this framework, domain-specialized teachers (e.g., trained via large-scale reinforcement learning) provide dense, token-level reward, enabling the student model to fully master teacher expertise. MiMo-V2-Flash rivals top-tier open-weight models such as DeepSeek-V3.2 and Kimi-K2, despite using only 1/2 and 1/3 of their total parameters, respectively. During inference, by repurposing MTP as a draft model for speculative decoding, MiMo-V2-Flash achieves an acceptance length of up to 3.6 and a 2.6x decoding speedup with three MTP layers. We open-source both the model weights and the three-layer MTP weights to foster open research and community collaboration. | https://arxiv.org/abs/2601.02780 | Academic Papers | svg |
| 2299ac0ca321c5d1a3aad14c6cd4a1ee2e0d28a21a9b09fb02f2efd3b2496654 | 2026-01-07T00:00:00-05:00 | EarthVL: A Progressive Earth Vision-Language Understanding and Generation Framework | arXiv:2601.02783v1 Announce Type: new Abstract: Earth vision has achieved milestones in geospatial object recognition but lacks exploration in object-relational reasoning, limiting comprehensive scene understanding. To address this, a progressive Earth vision-language understanding and generation framework is proposed, including a multi-task dataset (EarthVLSet) and a semantic-guided network (EarthVLNet). Focusing on city planning applications, EarthVLSet includes 10.9k sub-meter resolution remote sensing images, land-cover masks, and 761.5k textual pairs involving both multiple-choice and open-ended visual question answering (VQA) tasks. In an object-centric way, EarthVLNet is proposed to progressively achieve semantic segmentation, relational reasoning, and comprehensive understanding. The first stage involves land-cover segmentation to generate object semantics for VQA guidance. Guided by pixel-wise semantics, the object-awareness-based large language model (LLM) performs relational reasoning and knowledge summarization to generate the required answers. For optimization, a numerical difference loss is proposed to dynamically add difference penalties, addressing the varying object statistics. Three benchmarks, covering semantic segmentation, multiple-choice, and open-ended VQA, demonstrate the superiority of EarthVLNet and yield three future directions: 1) segmentation features consistently enhance VQA performance even in cross-dataset scenarios; 2) multiple-choice tasks show greater sensitivity to the vision encoder than to the language decoder; and 3) open-ended tasks necessitate advanced vision encoders and language decoders for optimal performance. We believe this dataset and method will provide a beneficial benchmark that connects ''image-mask-text'', advancing geographical applications for Earth vision. | https://arxiv.org/abs/2601.02783 | Academic Papers | svg |
| 5097ef8d8b613c25fc179516e71deefb6849514953c22bc7321d11f234a4e678 | 2026-01-07T00:00:00-05:00 | DreamStyle: A Unified Framework for Video Stylization | arXiv:2601.02785v1 Announce Type: new Abstract: Video stylization, an important downstream task of video generation models, has not yet been thoroughly explored. Its input style conditions typically include text, style image, and stylized first frame. Each condition has a characteristic advantage: text is more flexible, style image provides a more accurate visual anchor, and stylized first frame makes long-video stylization feasible. However, existing methods are largely confined to a single type of style condition, which limits their scope of application. Additionally, their lack of high-quality datasets leads to style inconsistency and temporal flicker. To address these limitations, we introduce DreamStyle, a unified framework for video stylization, supporting (1) text-guided, (2) style-image-guided, and (3) first-frame-guided video stylization, accompanied by a well-designed data curation pipeline to acquire high-quality paired video data. DreamStyle is built on a vanilla Image-to-Video (I2V) model and trained using a Low-Rank Adaptation (LoRA) with token-specific up matrices that reduces the confusion among different condition tokens. Both qualitative and quantitative evaluations demonstrate that DreamStyle is competent in all three video stylization tasks, and outperforms the competitors in style consistency and video quality. | https://arxiv.org/abs/2601.02785 | Academic Papers | svg |
| a6a144b5a7800bf1a7aeb19eaf0d4ad9ee026414e31a1c7a076cfe35347b5844 | 2026-01-07T00:00:00-05:00 | RadioDiff-Flux: Efficient Radio Map Construction via Generative Denoise Diffusion Model Trajectory Midpoint Reuse | arXiv:2601.02790v1 Announce Type: new Abstract: Accurate radio map (RM) construction is essential to enabling environment-aware and adaptive wireless communication. However, in future 6G scenarios characterized by high-speed network entities and fast-changing environments, it is very challenging to meet real-time requirements. Although generative diffusion models (DMs) can achieve state-of-the-art accuracy with second-level delay, their iterative nature leads to prohibitive inference latency in delay-sensitive scenarios. In this paper, by uncovering a key structural property of diffusion processes, namely that the latent midpoints remain highly consistent across semantically similar scenes, we propose RadioDiff-Flux, a novel two-stage latent diffusion framework that decouples static environmental modeling from dynamic refinement, enabling the reuse of precomputed midpoints to bypass redundant denoising. In particular, the first stage generates a coarse latent representation using only static scene features, which can be cached and shared across similar scenarios. The second stage adapts this representation to dynamic conditions and transmitter locations using a pre-trained model, thereby avoiding repeated early-stage computation. The proposed RadioDiff-Flux significantly reduces inference time while preserving fidelity. Experiment results show that RadioDiff-Flux can achieve up to 50x acceleration with less than 0.15% accuracy loss, demonstrating its practical utility for fast, scalable RM generation in future 6G networks. | https://arxiv.org/abs/2601.02790 | Academic Papers | svg |
| 3bd8458471c38d3b75ff772a545c07fd876b8f8587ab09e330a67e4cd34fcffc | 2026-01-07T00:00:00-05:00 | Textile IR: A Bidirectional Intermediate Representation for Physics-Aware Fashion CAD | arXiv:2601.02792v1 Announce Type: new Abstract: We introduce Textile IR, a bidirectional intermediate representation that connects manufacturing-valid CAD, physics-based simulation, and lifecycle assessment for fashion design. Unlike existing siloed tools, where pattern software guarantees sewable outputs but understands nothing about drape, and physics simulation predicts behaviour but cannot automatically fix patterns, Textile IR provides the semantic glue for integration through a seven-layer Verification Ladder, from cheap syntactic checks (pattern closure, seam compatibility) to expensive physics validation (drape simulation, stress analysis). The architecture enables bidirectional feedback: simulation failures suggest pattern modifications; material substitutions update sustainability estimates in real time; uncertainty propagates across the pipeline with explicit confidence bounds. We formalise fashion engineering as constraint satisfaction over three domains and demonstrate how Textile IR's scene-graph representation enables AI systems to manipulate garments as structured programs rather than pixel arrays. The framework addresses the compound uncertainty problem: when measurement errors in material testing, simulation approximations, and LCA database gaps combine, sustainability claims become unreliable without explicit uncertainty tracking. We propose six research priorities and discuss deployment considerations for fashion SMEs, where integrated workflows reduce specialised engineering requirements. Key contribution: a formal representation that makes engineering constraints perceptible, manipulable, and immediately consequential, enabling designers to navigate sustainability, manufacturability, and aesthetic tradeoffs simultaneously rather than discovering conflicts after costly physical prototyping. | https://arxiv.org/abs/2601.02792 | Academic Papers | svg |
| 9ffdd7a00697bb3ba0310ba382fbb908a7cc0638cd55a0a8e8260dfb117a72b0 | 2026-01-07T00:00:00-05:00 | StableDPT: Temporal Stable Monocular Video Depth Estimation | arXiv:2601.02793v1 Announce Type: new Abstract: Applying single-image Monocular Depth Estimation (MDE) models to video sequences introduces significant temporal instability and flickering artifacts. We propose a novel approach that adapts any state-of-the-art image-based (depth) estimation model for video processing by integrating a new temporal module that is trainable on a single GPU in a few days. Our architecture, StableDPT, builds upon an off-the-shelf Vision Transformer (ViT) encoder and enhances the Dense Prediction Transformer (DPT) head. The core of our contribution lies in the temporal layers within the head, which use an efficient cross-attention mechanism to integrate information from keyframes sampled across the entire video sequence. This allows the model to capture global context and inter-frame relationships, leading to more accurate and temporally stable depth predictions. Furthermore, we propose a novel inference strategy for processing videos of arbitrary length, avoiding the scale misalignment and redundant computations associated with the overlapping windows used in other methods. Evaluations on multiple benchmark datasets demonstrate improved temporal consistency, competitive state-of-the-art performance, and 2x faster processing in real-world scenarios. | https://arxiv.org/abs/2601.02793 | Academic Papers | svg |
| 06a7843a45f40644a179540d8098486a49db69fa1f6834da618bbd6bbeabb93b | 2026-01-07T00:00:00-05:00 | Reinforcement Learning for Follow-the-Leader Robotic Endoscopic Navigation via Synthetic Data | arXiv:2601.02798v1 Announce Type: new Abstract: Autonomous navigation is crucial for both medical and industrial endoscopic robots, enabling safe and efficient exploration of narrow tubular environments without continuous human intervention, where avoiding contact with the inner walls has been a longstanding challenge for prior approaches. We present a follow-the-leader endoscopic robot based on a flexible continuum structure designed to minimize contact between the endoscope body and intestinal walls, thereby reducing patient discomfort. To achieve this objective, we propose a vision-based deep reinforcement learning framework guided by monocular depth estimation. A realistic intestinal simulation environment was constructed in *NVIDIA Omniverse* to train and evaluate autonomous navigation strategies. Furthermore, thousands of synthetic intraluminal images were generated using NVIDIA Replicator to fine-tune the Depth Anything model, enabling dense three-dimensional perception of the intestinal environment with a single monocular camera. Subsequently, we introduce a geometry-aware reward and penalty mechanism to enable accurate lumen tracking. Compared with the original Depth Anything model, our method improves $\delta_{1}$ depth accuracy by 39.2% and reduces the navigation J-index by 0.67 relative to the second-best method, demonstrating the robustness and effectiveness of the proposed approach. | https://arxiv.org/abs/2601.02798 | Academic Papers | svg |
c22f39bc14f97a00715c1f2d142664278ff9c975e6f7f243b779e863467c59a9
|
2026-01-07T00:00:00-05:00
|
Stratified Hazard Sampling: Minimal-Variance Event Scheduling for CTMC/DTMC Discrete Diffusion and Flow Models
|
arXiv:2601.02799v1 Announce Type: new Abstract: CTMC/DTMC-based discrete generative models, including uniform-noise discrete diffusion (e.g., D3PM/CTDD) and discrete flow matching, enable non-autoregressive sequence generation by repeatedly replacing tokens through a time-inhomogeneous Markov process. Inference is typically implemented with step-based simulation: each token decides to jump via independent Bernoulli (or categorical) draws at every discretization step. Under uniform-noise initialization, where self-correction requires multiple edits per position, these independent decisions induce substantial variance in both the number and timing of edits, leading to characteristic failure modes such as under-editing (residual noise) or over-editing (cascading unnecessary substitutions), decreasing reproducibility. We propose Stratified Hazard Sampling (SHS), a drop-in and hyperparameter-free inference principle for any sampler that admits a stay-vs.-replace decomposition. SHS models per-token edits as events driven by cumulative hazard (CTMC) or cumulative jump mass (DTMC) and places events by stratifying this cumulative quantity: with a single random phase per position, a token jumps whenever its accumulated hazard crosses unit-spaced thresholds. This preserves the expected number of jumps while achieving the minimum possible variance among unbiased integer estimators (bounded by 1/4), without altering per-jump destination sampling and thus retaining multimodality. We also introduce a phase-allocation variant for blacklist-style lexical constraints that prioritizes early edits at high-risk positions to mitigate late-masking artifacts.
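The stratification principle the abstract describes can be sketched in a few lines: draw one random phase per position and fire a jump each time the accumulated hazard crosses a unit-spaced threshold (a minimal NumPy illustration; the hazard schedule and per-jump destination sampling are assumed to live elsewhere):

```python
import numpy as np

def stratified_hazard_jumps(hazard_increments, rng):
    # hazard_increments[t]: this token's hazard mass lambda_t * dt at step t.
    u = rng.uniform()                    # single random phase for this position
    cum = np.cumsum(hazard_increments)   # cumulative hazard H(t)
    jumps, k = [], 0
    for step, h in enumerate(cum):
        # the k-th jump fires when H(t) first crosses the threshold u + k
        while h >= u + k:
            jumps.append(step)
            k += 1
    return jumps

rng = np.random.default_rng(0)
print(stratified_hazard_jumps(np.full(100, 0.025), rng))
# total hazard 2.5 -> always 2 or 3 jumps, never 0 or 6 as i.i.d. draws allow
```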
|
https://arxiv.org/abs/2601.02799
|
Academic Papers
|
svg
|
7c0e31cb6aeef290f91649f6138f021c62df28abc303b3e990cb139566f78ea6
|
2026-01-07T00:00:00-05:00
|
State-Dependent Fading Gaussian Channel with Common Reconstruction Constraints
|
arXiv:2601.02802v1 Announce Type: new Abstract: The task of jointly communicating a message and reconstructing a common estimate of the channel state is examined for a fading Gaussian model with additive state interference. The state is an independent and identically distributed Gaussian sequence known noncausally at the transmitter, and the instantaneous fading coefficient is perfectly known at both the transmitter and the receiver. The receiver is required to decode the transmitted message and, in addition, reconstruct the state under a common reconstruction constraint ensuring that its estimate coincides with that at the transmitter. A complete characterization of the optimal rate-distortion tradeoff region for this setting is the main result of our work. The analytical results are also validated through numerical examples illustrating the rate-distortion and power-distortion tradeoffs.
|
https://arxiv.org/abs/2601.02802
|
Academic Papers
|
svg
|
8d7d114400052336fca124b4bf9d781222e7d33f7bcd5b3b7045e69791884a67
|
2026-01-07T00:00:00-05:00
|
Bounded Rewriting Induction for LCSTRSs
|
arXiv:2601.02803v1 Announce Type: new Abstract: Rewriting Induction (RI) is a method to prove inductive theorems, originating from equational reasoning. By using Logically Constrained Simply-typed Term Rewriting Systems (LCSTRSs) as an intermediate language, rewriting induction becomes a tool for program verification, with inductive theorems taking the role of equivalence predicates. Soundness of RI depends on well-founded induction, and one of the core obstacles for obtaining a practically useful proof system is to find suitable well-founded orderings automatically. Using naive approaches, all induction hypotheses must be oriented within the well-founded ordering, which leads to very strong termination requirements. This, in turn, severely limits the proof capacity of RI. Here, we introduce Bounded RI: an adaption of RI for LCSTRSs where such termination requirements are minimized.
|
https://arxiv.org/abs/2601.02803
|
Academic Papers
|
svg
|
0b3a72e77c7041e7a1490acbeee984773afe6eef5454e578848573bef55295f7
|
2026-01-07T00:00:00-05:00
|
Distributionally Robust Game for Proof-of-Work Blockchain Mining Under Resource Uncertainties
|
arXiv:2601.02804v1 Announce Type: new Abstract: Blockchain plays a crucial role in ensuring the security and integrity of decentralized systems, with the proof-of-work (PoW) mechanism being fundamental for achieving distributed consensus. As PoW blockchains see broader adoption, an increasingly diverse set of miners with varying computing capabilities participate in the network. In this paper, we consider PoW blockchain mining in which the miners are subject to resource uncertainties. To characterize the uncertain computing resources of different mining participants, we establish an ambiguity set representing the uncertainty of resource distributions. Then, the networked mining is formulated as a non-cooperative game, where a distributionally robust performance is calculated for each individual miner to tackle the resource uncertainties. We prove the existence of the equilibrium of the distributionally robust mining game. To derive the equilibrium, we propose a conditional value-at-risk (CVaR)-based reinterpretation of the best response of each miner. We then solve the individual strategy with alternating optimization, which facilitates the iteration among miners towards the game equilibrium. Furthermore, we consider the case in which the ambiguity of the resource distribution reduces to a Gaussian distribution and the case in which the other uncertainties vanish, and characterize the properties of the equilibrium therein, along with a distributed algorithm to achieve it. Simulation results show that the proposed approaches converge to the equilibrium and effectively tackle the uncertainties in blockchain mining, achieving a robust performance guarantee.
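For reference, the CVaR risk measure invoked in the best-response reinterpretation has a simple empirical form: the mean of the worst (1 - alpha) tail of cost samples (a generic sketch, not the paper's solver):

```python
import numpy as np

def empirical_cvar(costs, alpha=0.95):
    # CVaR_alpha: mean of the worst (1 - alpha) fraction of cost samples.
    q = np.quantile(costs, alpha)        # value-at-risk threshold
    return costs[costs >= q].mean()

rng = np.random.default_rng(0)
print(empirical_cvar(rng.normal(10.0, 2.0, 10_000)))  # tail mean above the 95% VaR
```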
|
https://arxiv.org/abs/2601.02804
|
Academic Papers
|
svg
|
ac2feb5d00c4412bba28762077cc43adab7204debe9600e56764e9eb4685c2b6
|
2026-01-07T00:00:00-05:00
|
The perceptual gap between video see-through displays and natural human vision
|
arXiv:2601.02805v1 Announce Type: new Abstract: Video see-through (VST) technology aims to seamlessly blend virtual and physical worlds by reconstructing reality through cameras. While manufacturers promise perceptual fidelity, it remains unclear how close these systems are to replicating natural human vision across varying environmental conditions. In this work, we quantify the perceptual gap between the human eye and different popular VST headsets (Apple Vision Pro, Meta Quest 3, Quest Pro) using psychophysical measures of visual acuity, contrast sensitivity, and color vision. We show that despite hardware advancements, all tested VST systems fail to match the dynamic range and adaptability of the naked eye. While high-end devices approach human performance in ideal lighting, they exhibit significant degradation in low-light conditions, particularly in contrast sensitivity and acuity. Our results map the physiological limitations of digital reality reconstruction, establishing a specific perceptual gap that defines the roadmap for achieving indistinguishable VST experiences.
|
https://arxiv.org/abs/2601.02805
|
Academic Papers
|
svg
|
423fff0679a9d1ff2b3586b71bb42047ab9ed07e0aad807a38d0fe0fd2590b41
|
2026-01-07T00:00:00-05:00
|
Topology-aware Pathological Consistency Matching for Weakly-Paired IHC Virtual Staining
|
arXiv:2601.02806v1 Announce Type: new Abstract: Immunohistochemical (IHC) staining provides crucial molecular characterization of tissue samples and plays an indispensable role in the clinical examination and diagnosis of cancers. However, compared with the commonly used Hematoxylin and Eosin (H&E) staining, IHC staining involves complex procedures and is both time-consuming and expensive, which limits its widespread clinical use. Virtual staining converts H&E images to IHC images, offering a cost-effective alternative to clinical IHC staining. Nevertheless, using adjacent slides as ground truth often results in weakly-paired data with spatial misalignment and local deformations, hindering effective supervised learning. To address these challenges, we propose a novel topology-aware framework for H&E-to-IHC virtual staining. Specifically, we introduce a Topology-aware Consistency Matching (TACM) mechanism that employs graph contrastive learning and topological perturbations to learn robust matching patterns despite spatial misalignments, ensuring structural consistency. Furthermore, we propose a Topology-constrained Pathological Matching (TCPM) mechanism that aligns pathological positive regions based on node importance to enhance pathological consistency. Extensive experiments on two benchmarks across four staining tasks demonstrate that our method outperforms state-of-the-art approaches, achieving superior generation quality with higher clinical relevance.
|
https://arxiv.org/abs/2601.02806
|
Academic Papers
|
svg
|
7c1869d7603dc1be84365bce02d52c4de3c4f1f9f9e0ab21fd9df3c5c8c3ba4e
|
2026-01-07T00:00:00-05:00
|
COFFEE: COdesign Framework for Feature Enriched Embeddings in Ads-Ranking Systems
|
arXiv:2601.02807v1 Announce Type: new Abstract: Diverse and enriched data sources are essential for commercial ads-recommendation models to accurately assess user interest both before and after engagement with content. While extended user-engagement histories can improve the prediction of user interests, it is equally important to embed activity sequences from multiple sources to ensure the freshness of user and ad representations, following scaling law principles. In this paper, we present a novel three-dimensional framework for enhancing user-ad representations without increasing model inference or serving complexity. The first dimension examines the impact of incorporating diverse event sources, the second considers the benefits of longer user histories, and the third focuses on enriching data with additional event attributes and multi-modal embeddings. We assess the return on investment (ROI) of our source enrichment framework by comparing organic user-engagement sources, such as content viewing, with ad-impression sources. The proposed method can boost the area under the curve (AUC) and the slope of scaling curves for ad-impression sources by 1.56 to 2 times compared to organic usage sources, even for short online-sequence lengths of 100 to 10K. Additionally, click-through rate (CTR) prediction improves by 0.56% AUC over the baseline production ad-recommendation system when using enriched ad-impression event sources, leading to improved sequence scaling resolutions for longer and offline user-ad representations.
|
https://arxiv.org/abs/2601.02807
|
Academic Papers
|
svg
|
d6e65b5cd9f81839c067e65b57c5165f55fe4ecb3858fdbfe628c7d143590b44
|
2026-01-07T00:00:00-05:00
|
HAL: Inducing Human-likeness in LLMs with Alignment
|
arXiv:2601.02813v1 Announce Type: new Abstract: Conversational human-likeness plays a central role in human-AI interaction, yet it has remained difficult to define, measure, and optimize. As a result, improvements in human-like behavior are largely driven by scale or broad supervised training, rather than targeted alignment. We introduce Human Aligning LLMs (HAL), a framework for aligning language models to conversational human-likeness using an interpretable, data-driven reward. HAL derives explicit conversational traits from contrastive dialogue data, combines them into a compact scalar score, and uses this score as a transparent reward signal for alignment with standard preference optimization methods. Using this approach, we align models of varying sizes without affecting their overall performance. In large-scale human evaluations, models aligned with HAL are more frequently perceived as human-like in conversation. Because HAL operates over explicit, interpretable traits, it enables inspection of alignment behavior and diagnosis of unintended effects. More broadly, HAL demonstrates how soft, qualitative properties of language--previously outside the scope of alignment--can be made measurable and aligned in an interpretable and explainable way.
|
https://arxiv.org/abs/2601.02813
|
Academic Papers
|
svg
|
638035b08825a4f42a7168f875d8a080fffca5852a602fde7906d7db775f6c52
|
2026-01-07T00:00:00-05:00
|
Causal-Enhanced AI Agents for Medical Research Screening
|
arXiv:2601.02814v1 Announce Type: new Abstract: Systematic reviews are essential for evidence-based medicine, but reviewing 1.5 million+ annual publications manually is infeasible. Current AI approaches suffer from hallucinations in systematic review tasks, with studies reporting rates ranging from 28--40% for earlier models to 2--15% for modern implementations, which is unacceptable when errors impact patient care. We present a causal graph-enhanced retrieval-augmented generation system integrating explicit causal reasoning with dual-level knowledge graphs. Our approach enforces evidence-first protocols where every causal claim traces to retrieved literature and automatically generates directed acyclic graphs visualizing intervention-outcome pathways. Evaluation on 234 dementia exercise abstracts shows CausalAgent achieves 95% accuracy, 100% retrieval success, and zero hallucinations versus 34% accuracy and 10% hallucinations for baseline AI. Automatic causal graphs enable explicit mechanism modeling, visual synthesis, and enhanced interpretability. While this proof-of-concept evaluation used ten questions focused on dementia exercise research, the architectural approach demonstrates transferable principles for trustworthy medical AI and causal reasoning's potential for high-stakes healthcare.
|
https://arxiv.org/abs/2601.02814
|
Academic Papers
|
svg
|
61e47ee5abc03bb3bb20e9e05190208c97a7dcadd4b924d597f0acfbb58c34d5
|
2026-01-07T00:00:00-05:00
|
Quantum-enhanced long short-term memory with attention for spatial permeability prediction in oilfield reservoirs
|
arXiv:2601.02818v1 Announce Type: new Abstract: Spatial prediction of reservoir parameters, especially permeability, is crucial for oil and gas exploration and development. However, the wide range and high variability of permeability prevent existing methods from providing reliable predictions. For the first time in subsurface spatial prediction, this study presents a quantum-enhanced long short-term memory with attention (QLSTMA) model that incorporates variational quantum circuits (VQCs) into the recurrent cell. Using quantum entanglement and superposition principles, the QLSTMA significantly improves the ability to predict complex geological parameters such as permeability. Two quantum gate configurations, QLSTMA with Shared Gates (QLSTMA-SG) and QLSTMA with Independent Gates (QLSTMA-IG), are designed to investigate and evaluate the effects of quantum structure configurations and the number of qubits on model performance. Experimental results demonstrate that the 8-qubit QLSTMA-IG model significantly outperforms the traditional long short-term memory with attention (LSTMA), reducing Mean Absolute Error (MAE) by 19% and Root Mean Squared Error (RMSE) by 20%, with particularly strong performance in regions featuring complex well-logging data. These findings validate the potential of quantum-classical hybrid neural networks for reservoir prediction, indicating that increasing the number of qubits yields further accuracy gains despite the reliance on classical simulations. This study establishes a foundational framework for the eventual deployment of such models on real quantum hardware and their extension to broader applications in petroleum engineering and geoscience.
|
https://arxiv.org/abs/2601.02818
|
Academic Papers
|
svg
|
2445bc9ecd1f7233824a27ca34f8914cea6f95d701da1a84aec6778cba54e7a1
|
2026-01-07T00:00:00-05:00
|
Punctuation-aware Hybrid Trainable Sparse Attention for Large Language Models
|
arXiv:2601.02819v1 Announce Type: new Abstract: Attention serves as the fundamental mechanism for long-context modeling in large language models (LLMs), yet dense attention becomes structurally prohibitive for long sequences due to its quadratic complexity. Consequently, sparse attention has received increasing attention as a scalable alternative. However, existing sparse attention methods rely on coarse-grained semantic representations during block selection, which blur intra-block semantic boundaries and lead to the loss of critical information. To address this issue, we propose \textbf{P}unctuation-aware \textbf{H}ybrid \textbf{S}parse \textbf{A}ttention \textbf{(PHSA)}, a natively trainable sparse attention framework that leverages punctuation tokens as semantic boundary anchors. Specifically, (1) we design a dual-branch aggregation mechanism that fuses global semantic representations with punctuation-enhanced boundary features, preserving the core semantic structure while introducing almost no additional computational overhead; (2) we introduce an extreme-sparsity-adaptive training and inference strategy that stabilizes model behavior under very low token activation ratios. Extensive experiments on general benchmarks and long-context evaluations demonstrate that PHSA consistently outperforms dense attention and state-of-the-art sparse attention baselines, including InfLLM v2. Specifically, for the 0.6B-parameter model with 32k-token input sequences, PHSA can reduce the information loss by 10.8\% at a sparsity ratio of 97.3\%.
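A hedged sketch of the dual-branch idea: pool each block globally and also over its punctuation tokens, then fuse the two (additive fusion and mean pooling are our simplifying assumptions, not PHSA's exact design):

```python
import torch

def block_reprs(hidden, punct_mask, block_size):
    # hidden: (T, d) token features; punct_mask: (T,) bool, True at punctuation.
    T = hidden.shape[0] - hidden.shape[0] % block_size   # drop the ragged tail
    blocks = hidden[:T].view(-1, block_size, hidden.shape[1])
    pmask = punct_mask[:T].view(-1, block_size, 1).float()
    global_branch = blocks.mean(dim=1)                   # coarse block semantics
    punct_branch = (blocks * pmask).sum(1) / pmask.sum(1).clamp(min=1.0)
    return global_branch + punct_branch                  # fused block representation
```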
|
https://arxiv.org/abs/2601.02819
|
Academic Papers
|
svg
|
60891527917e469d0c53c9160441812fc3f5f4f9a4118b2c66b3677724359de0
|
2026-01-07T00:00:00-05:00
|
DeepFP: Deep-Unfolded Fractional Programming for MIMO Beamforming
|
arXiv:2601.02822v1 Announce Type: new Abstract: This work proposes a mixed learning-based and optimization-based approach to the weighted sum-rate beamforming problem in a multiple-input multiple-output (MIMO) wireless network. The conventional methods, i.e., the fractional programming (FP) method and the weighted minimum mean square error (WMMSE) algorithm, can be computationally demanding for two reasons: (i) they require inverting a sequence of matrices whose sizes are proportional to the number of antennas; (ii) they require tuning a set of Lagrange multipliers to account for the power constraints. The recently proposed method called the reduced WMMSE addresses the above two issues for a single cell. In contrast, for the multicell case, another recent method called the FastFP eliminates the large matrix inversion and the Lagrange multipliers by using an improved FP technique, but the update stepsize in the FastFP can be difficult to determine. As such, we propose integrating a deep unfolding network into the FastFP for the stepsize optimization. Numerical experiments show that the proposed method is much more efficient than the learning method based on the WMMSE algorithm.
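The deep-unfolding idea can be illustrated as an unrolled iteration with one trainable stepsize per layer (a PyTorch sketch; `fp_update` is a placeholder for one FastFP update direction, whose details the abstract does not give):

```python
import torch
import torch.nn as nn

class UnfoldedFastFP(nn.Module):
    # K unrolled iterations, each with its own trainable stepsize.
    def __init__(self, K=10):
        super().__init__()
        self.steps = nn.Parameter(torch.full((K,), 0.1))

    def forward(self, V, fp_update):
        # fp_update(V): one FastFP-style beamformer update direction (stand-in).
        for k in range(self.steps.shape[0]):
            V = V + self.steps[k] * fp_update(V)
        return V
```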
|
https://arxiv.org/abs/2601.02822
|
Academic Papers
|
svg
|
2782dd088ab8acd2c0c28b35ff8488d9183906bc3732f1217a37b56a0338d4bb
|
2026-01-07T00:00:00-05:00
|
Case Count Metric for Comparative Analysis of Entity Resolution Results
|
arXiv:2601.02824v1 Announce Type: new Abstract: This paper describes a new process and software system, the Case Count Metric System (CCMS), for systematically comparing and analyzing the outcomes of two different ER clustering processes acting on the same dataset when the true linking (labeling) is not known. The CCMS produces a set of counts that describe how the clusters produced by the first process are transformed by the second process based on four possible transformation scenarios. The transformations are that a cluster formed in the first process either remains unchanged, merges into a larger cluster, is partitioned into smaller clusters, or otherwise overlaps with multiple clusters formed in the second process. The CCMS produces a count for each of these cases, accounting for every cluster formed in the first process. In addition, when run in analysis mode, the CCMS program can assist the user in evaluating these changes by displaying the details for all changes or only for certain types of changes. The paper includes a detailed description of the CCMS process and program and examples of how the CCMS has been applied in university and industry research.
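The four transformation cases admit a compact set-based sketch, assuming both clusterings cover the same records (illustrative code, not the CCMS program itself):

```python
from collections import Counter

def case_counts(clusters_a, clusters_b):
    # clusters_*: lists of sets of record ids; every record appears in both.
    owner_b = {r: i for i, c in enumerate(clusters_b) for r in c}
    counts = Counter()
    for a in clusters_a:
        touched = {owner_b[r] for r in a}          # B-clusters that a maps into
        if len(touched) == 1:
            b = clusters_b[next(iter(touched))]
            counts["unchanged" if a == b else "merged"] += 1
        elif all(clusters_b[i] <= a for i in touched):
            counts["partitioned"] += 1             # a split cleanly into pieces
        else:
            counts["overlapping"] += 1             # a straddles several B-clusters
    return counts

print(case_counts([{1, 2}, {3}, {4, 5}],
                  [{1, 2}, {3, 6}, {4}, {5}]))
# Counter({'unchanged': 1, 'merged': 1, 'partitioned': 1})
```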
|
https://arxiv.org/abs/2601.02824
|
Academic Papers
|
svg
|
c9a7fcc79091c280fa8d085f689de95c58c03c4a6eff8359bee8aab267a107ca
|
2026-01-07T00:00:00-05:00
|
SketchThinker-R1: Towards Efficient Sketch-Style Reasoning in Large Multimodal Models
|
arXiv:2601.02825v1 Announce Type: new Abstract: Despite the empirical success of extensive, step-by-step reasoning in large multimodal models, long reasoning processes inevitably incur substantial computational overhead, i.e., in terms of higher token costs and increased response time, which undermines inference efficiency. In contrast, humans often employ sketch-style reasoning: a concise, goal-directed cognitive process that prioritizes salient information and enables efficient problem-solving. Inspired by this cognitive efficiency, we propose SketchThinker-R1, which incentivizes sketch-style reasoning ability in large multimodal models. Our method consists of three primary stages. In the Sketch-Mode Cold Start stage, we convert standard long reasoning process into sketch-style reasoning and finetune base multimodal model, instilling initial sketch-style reasoning capability. Next, we train SketchJudge Reward Model, which explicitly evaluates thinking process of model and assigns higher scores to sketch-style reasoning. Finally, we conduct Sketch-Thinking Reinforcement Learning under supervision of SketchJudge to further generalize sketch-style reasoning ability. Experimental evaluation on four benchmarks reveals that our SketchThinker-R1 achieves over 64% reduction in reasoning token cost without compromising final answer accuracy. Qualitative analysis further shows that sketch-style reasoning focuses more on key cues during problem solving.
|
https://arxiv.org/abs/2601.02825
|
Academic Papers
|
svg
|
173ba6ce11c8080ded742bea1686a1ee6d7698bb529b35887a585394a550da76
|
2026-01-07T00:00:00-05:00
|
Resolution deficits drive simulator sickness and compromise reading performance in virtual environments
|
arXiv:2601.02829v1 Announce Type: new Abstract: Extended reality (XR) is evolving into a general-purpose computing platform, yet its adoption for productivity is hindered by visual fatigue and simulator sickness. While these symptoms are often attributed to latency or motion conflicts, the precise impact of textual clarity on physiological comfort remains undefined. Here we show that sub-optimal effective resolution, the clarity that reaches the eye after the full display-optics-rendering pipeline, is a primary driver of simulator sickness during reading tasks in both virtual reality and video see-through environments. By systematically manipulating end-to-end effective resolution on a unified logMAR scale, we measured reading psychophysics and sickness symptoms in a controlled within-subjects study. We find that reading performance and user comfort degrade exponentially as resolution drops below 0 logMAR (normal visual acuity). Notably, our results reveal 0 logMAR as a key physiological tipping point: resolutions better than this threshold yield naked-eye-level performance with minimal sickness, whereas poorer resolutions trigger rapid, non-linear increases in nausea and oculomotor strain. These findings suggest that the cognitive and perceptual effort required to resolve blurry text directly compromises user comfort, establishing human-eye resolution as a critical baseline for the design of future ergonomic XR systems.
|
https://arxiv.org/abs/2601.02829
|
Academic Papers
|
svg
|
41fc9f3d4a0db05a613333cafa27ae3ac649010d3ccd07a25da6e5fee39d3e3f
|
2026-01-07T00:00:00-05:00
|
The performances of the Chinese and U.S. Large Language Models on the Topic of Chinese Culture
|
arXiv:2601.02830v1 Announce Type: new Abstract: Cultural backgrounds shape individuals' perspectives and approaches to problem-solving. Since the emergence of GPT-1 in 2018, large language models (LLMs) have undergone rapid development. To date, the world's ten leading LLM developers are primarily based in China and the United States. To examine whether LLMs released by Chinese and U.S. developers exhibit cultural differences in Chinese-language settings, we evaluate their performance on questions about Chinese culture. This study adopts a direct-questioning paradigm to evaluate models such as GPT-5.1, DeepSeek-V3.2, Qwen3-Max, and Gemini 2.5 Pro. We assess their understanding of traditional Chinese culture, including history, literature, poetry, and related domains. Comparative analyses between LLMs developed in China and the U.S. indicate that Chinese models generally outperform their U.S. counterparts on these tasks. Among U.S.-developed models, Gemini 2.5 Pro and GPT-5.1 achieve relatively higher accuracy. The observed performance differences may potentially arise from variations in training data distribution, localization strategies, and the degree of emphasis on Chinese cultural content during model development.
|
https://arxiv.org/abs/2601.02830
|
Academic Papers
|
svg
|
30e9cd4b2f216b7a21808f705f2a5626f9d14bf33afa3c115e2ef36f02a7bd11
|
2026-01-07T00:00:00-05:00
|
DGA-Net: Enhancing SAM with Depth Prompting and Graph-Anchor Guidance for Camouflaged Object Detection
|
arXiv:2601.02831v1 Announce Type: new Abstract: To fully exploit depth cues in Camouflaged Object Detection (COD), we present DGA-Net, a specialized framework that adapts the Segment Anything Model (SAM) via a novel ``depth prompting'' paradigm. Distinguished from existing approaches that primarily rely on sparse prompts (e.g., points or boxes), our method introduces a holistic mechanism for constructing and propagating dense depth prompts. Specifically, we propose a Cross-modal Graph Enhancement (CGE) module that synthesizes RGB semantics and depth geometry within a heterogeneous graph to form a unified guidance signal. Furthermore, we design an Anchor-Guided Refinement (AGR) module. To counteract the inherent information decay in feature hierarchies, AGR forges a global anchor and establishes direct non-local pathways to broadcast this guidance from deep to shallow layers, ensuring precise and consistent segmentation. Quantitative and qualitative experimental results demonstrate that our proposed DGA-Net outperforms the state-of-the-art COD methods.
|
https://arxiv.org/abs/2601.02831
|
Academic Papers
|
svg
|
39391bdfb105ced62d9da368e5c288a8194e11d162f9b2d2b4437a120b0b3110
|
2026-01-07T00:00:00-05:00
|
A Practical 73/50 Approximation for Contiguous Monotone Moldable Job Scheduling
|
arXiv:2601.02836v1 Announce Type: new Abstract: In moldable job scheduling, we are provided $m$ identical machines and $n$ jobs that can be executed on a variable number of machines. The execution time of each job depends on the number of machines assigned to execute that job. For the specific problem of monotone moldable job scheduling, jobs are assumed to have a processing time that is non-increasing in the number of machines. The previous best-known algorithms are: (1) a polynomial-time approximation scheme with time complexity $\Omega(n^{g(1/\varepsilon)})$, where $g(\cdot)$ is a super-exponential function [Jansen and Th\"ole '08; Jansen and Land '18], (2) a fully polynomial approximation scheme for the case of $m \geq 8\frac{n}{\varepsilon}$ [Jansen and Land '18], and (3) a $\frac{3}{2}$ approximation with time complexity $O(nm\log(mn))$ [Wu, Zhang, and Chen '23]. We present a new practically efficient algorithm with an approximation ratio of $\approx (1.4593 + \varepsilon)$ and a time complexity of $O(nm \log \frac{1}{\varepsilon})$. Our result also applies to the contiguous variant of the problem. In addition to our theoretical results, we implement the presented algorithm and show that the practical performance is significantly better than the theoretical worst-case approximation ratio.
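A standard primitive behind such algorithms, exploiting the monotonicity assumption, is to binary-search the fewest machines meeting a makespan guess (a generic subroutine sketch, not the paper's full 73/50 algorithm):

```python
def min_machines(times, d):
    # times[k-1]: processing time on k machines, non-increasing in k (monotone job).
    # Return the fewest machines finishing within deadline d, or None if impossible.
    if times[-1] > d:
        return None
    lo, hi = 1, len(times)
    while lo < hi:
        mid = (lo + hi) // 2
        if times[mid - 1] <= d:
            hi = mid          # mid machines suffice; try fewer
        else:
            lo = mid + 1
    return lo

print(min_machines([9.0, 5.0, 4.0, 3.5], d=4.0))  # -> 3
```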
|
https://arxiv.org/abs/2601.02836
|
Academic Papers
|
svg
|
791739db4c1bae75f4de5e6d5002f33edde2195041c990099f82f36d80257d75
|
2026-01-07T00:00:00-05:00
|
Breaking Self-Attention Failure: Rethinking Query Initialization for Infrared Small Target Detection
|
arXiv:2601.02837v1 Announce Type: new Abstract: Infrared small target detection (IRSTD) faces significant challenges due to the low signal-to-noise ratio (SNR), small target size, and complex cluttered backgrounds. Although recent DETR-based detectors benefit from global context modeling, they exhibit notable performance degradation on IRSTD. We revisit this phenomenon and reveal that the target-relevant embeddings of infrared small targets are inevitably overwhelmed by dominant background features due to the self-attention mechanism, leading to unreliable query initialization and inaccurate target localization. To address this issue, we propose SEF-DETR, a novel framework that refines query initialization for IRSTD. Specifically, SEF-DETR consists of three components: Frequency-guided Patch Screening (FPS), Dynamic Embedding Enhancement (DEE), and Reliability-Consistency-aware Fusion (RCF). The FPS module leverages the Fourier spectrum of local patches to construct a target-relevant density map, suppressing background-dominated features. DEE strengthens multi-scale representations in a target-aware manner, while RCF further refines object queries by enforcing spatial-frequency consistency and reliability. Extensive experiments on three public IRSTD datasets demonstrate that SEF-DETR achieves superior detection performance compared to state-of-the-art methods, delivering a robust and efficient solution for the infrared small target detection task.
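The FPS idea of scoring local patches by their Fourier spectrum can be sketched as follows (patch size, DC removal, and mean pooling are our assumptions about the details):

```python
import torch

def frequency_density_map(img, patch=16):
    # img: (B, C, H, W) infrared frame(s); score each patch by its
    # Fourier energy after removing the DC (flat background) component.
    patches = img.unfold(2, patch, patch).unfold(3, patch, patch)  # B,C,nH,nW,p,p
    spec = torch.fft.fft2(patches).abs()
    spec[..., 0, 0] = 0.0                 # drop DC: uniform background scores zero
    return spec.mean(dim=(1, -2, -1))     # (B, nH, nW) target-relevance density
```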
|
https://arxiv.org/abs/2601.02837
|
Academic Papers
|
svg
|
849fcdafc17e4903bcc798d25996b17cff49ec47635ab93c0c306ffbbffe9a18
|
2026-01-07T00:00:00-05:00
|
TiMem: Temporal-Hierarchical Memory Consolidation for Long-Horizon Conversational Agents
|
arXiv:2601.02845v1 Announce Type: new Abstract: Long-horizon conversational agents have to manage ever-growing interaction histories that quickly exceed the finite context windows of large language models (LLMs). Existing memory frameworks provide limited support for temporally structured information across hierarchical levels, often leading to fragmented memories and unstable long-horizon personalization. We present TiMem, a temporal--hierarchical memory framework that organizes conversations through a Temporal Memory Tree (TMT), enabling systematic memory consolidation from raw conversational observations to progressively abstracted persona representations. TiMem is characterized by three core properties: (1) temporal--hierarchical organization through TMT; (2) semantic-guided consolidation that enables memory integration across hierarchical levels without fine-tuning; and (3) complexity-aware memory recall that balances precision and efficiency across queries of varying complexity. Under a consistent evaluation setup, TiMem achieves state-of-the-art accuracy on both benchmarks, reaching 75.30% on LoCoMo and 76.88% on LongMemEval-S. It outperforms all evaluated baselines while reducing the recalled memory length by 52.20% on LoCoMo. Manifold analysis indicates clear persona separation on LoCoMo and reduced dispersion on LongMemEval-S. Overall, TiMem treats temporal continuity as a first-class organizing principle for long-horizon memory in conversational agents.
|
https://arxiv.org/abs/2601.02845
|
Academic Papers
|
svg
|
551d8bd57d3bad7e2555c2a907a9ecb2f3c3c43791f11ed55be06d5a410cce6c
|
2026-01-07T00:00:00-05:00
|
Stability and error estimates of a linear and partitioned finite element method approximating nonlinear fluid-structure interactions
|
arXiv:2601.02847v1 Announce Type: new Abstract: We propose and analyze a linear and partitioned finite element method for fluid-shell interactions under the arbitrary Lagrangian-Eulerian (ALE) framework. We adopt the P1-bubble/P1/P1 elements for the fluid velocity, pressure, and structure velocity, respectively. We show the stability and error estimates of the scheme without assuming infinitesimal structural deformation nor neglecting fluid convection effects. The theoretical convergence rate is further corroborated by numerical experiments.
|
https://arxiv.org/abs/2601.02847
|
Academic Papers
|
svg
|
f2d15cbe4308dcc6fd39bd0db4d5fa0bf4fc320128be7f3b483522cddd0910bf
|
2026-01-07T00:00:00-05:00
|
Modeling ICD-10 Morbidity and Multidimensional Poverty as a Spatial Network: Evidence from Thailand
|
arXiv:2601.02848v1 Announce Type: new Abstract: Health and poverty in Thailand exhibit pronounced geographic structuring, yet the extent to which they operate as interconnected regional systems remains insufficiently understood. This study analyzes ICD-10 chapter-level morbidity and multidimensional poverty as outcomes embedded in a spatial interaction network. Interpreting Thailand's 76 provinces as nodes within a fixed-degree regional graph, we apply tools from spatial econometrics and social network analysis, including Moran's I, Local Indicators of Spatial Association (LISA), and Spatial Durbin Models (SDM), to assess spatial dependence and cross-provincial spillovers. Our findings reveal strong spatial clustering across multiple ICD-10 chapters, with persistent high-high morbidity zones, particularly for digestive, respiratory, musculoskeletal, and symptom-based diseases, emerging in well-defined regional belts. SDM estimates demonstrate that spillover effects from neighboring provinces frequently exceed the influence of local deprivation, especially for living-condition, health-access, accessibility, and poor-household indicators. These patterns are consistent with contagion and contextual influence processes well established in social network theory. By framing morbidity and poverty as interdependent attributes on a spatial network, this study contributes to the growing literature on structural diffusion, health inequality, and regional vulnerability. The results highlight the importance of coordinated policy interventions across provincial boundaries and demonstrate how network-based modeling can uncover the spatial dynamics of health and deprivation.
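For reference, Moran's I, the global spatial-autocorrelation statistic the study applies, is short to compute (a generic sketch with a toy weights matrix, not the paper's provincial graph):

```python
import numpy as np

def morans_i(x, W):
    # x: (n,) provincial values; W: (n, n) spatial weights, W[i, j] > 0 for neighbours.
    # I = (n / S0) * (z' W z) / (z' z), with z the centred values and S0 = sum(W).
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

# Toy 4-province chain with low values on one end and high on the other.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(morans_i(np.array([1.0, 1.0, 3.0, 3.0]), W))  # ~0.33: low-low / high-high clustering
```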
|
https://arxiv.org/abs/2601.02848
|
Academic Papers
|
svg
|
ce5b28a32e0444f0c94d9f0d440ab0fb1899f375cdaf44bbcfc8f4a2e4241be3
|
2026-01-07T00:00:00-05:00
|
Sample-Efficient Neurosymbolic Deep Reinforcement Learning
|
arXiv:2601.02850v1 Announce Type: new Abstract: Reinforcement Learning (RL) is a well-established framework for sequential decision-making in complex environments. However, state-of-the-art Deep RL (DRL) algorithms typically require large training datasets and often struggle to generalize beyond small-scale training scenarios, even within standard benchmarks. We propose a neuro-symbolic DRL approach that integrates background symbolic knowledge to improve sample efficiency and generalization to more challenging, unseen tasks. Partial policies defined for simple domain instances, where high performance is easily attained, are transferred as useful priors to accelerate learning in more complex settings and avoid tuning DRL parameters from scratch. To do so, partial policies are represented as logical rules, and online reasoning is performed to guide the training process through two mechanisms: (i) biasing the action distribution during exploration, and (ii) rescaling Q-values during exploitation. This neuro-symbolic integration enhances interpretability and trustworthiness while accelerating convergence, particularly in sparse-reward environments and tasks with long planning horizons. We empirically validate our methodology on challenging variants of gridworld environments, both in the fully observable and partially observable setting. We show improved performance over a state-of-the-art reward machine baseline.
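Mechanism (i), biasing the exploration distribution toward rule-endorsed actions, can be sketched as a logit bonus before a softmax (the bonus form and magnitude are our assumptions, not the paper's exact rule):

```python
import numpy as np

def biased_exploration_probs(q_values, endorsed, tau=1.0, bonus=3.0):
    # endorsed[a] = 1.0 if the symbolic partial policy endorses action a, else 0.0.
    logits = q_values / tau + bonus * endorsed   # logit bonus for rule-endorsed actions
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()

print(biased_exploration_probs(np.array([1.0, 1.0, 0.5]),
                               np.array([0.0, 1.0, 0.0])))
# probability mass shifts heavily toward the endorsed second action
```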
|
https://arxiv.org/abs/2601.02850
|
Academic Papers
|
svg
|
1eb33d3c8785513a5c56ffea82be99c86eff440a152112d32d0258b6955fe4c7
|
2026-01-07T00:00:00-05:00
|
M3MAD-Bench: Are Multi-Agent Debates Really Effective Across Domains and Modalities?
|
arXiv:2601.02854v1 Announce Type: new Abstract: As an agent-level reasoning and coordination paradigm, Multi-Agent Debate (MAD) orchestrates multiple agents through structured debate to improve answer quality and support complex reasoning. However, existing research on MAD suffers from two fundamental limitations: evaluations are conducted under fragmented and inconsistent settings, hindering fair comparison, and are largely restricted to single-modality scenarios that rely on textual inputs only. To address these gaps, we introduce M3MAD-Bench, a unified and extensible benchmark for evaluating MAD methods across Multi-domain tasks, Multi-modal inputs, and Multi-dimensional metrics. M3MAD-Bench establishes standardized protocols over five core task domains: Knowledge, Mathematics, Medicine, Natural Sciences, and Complex Reasoning, and systematically covers both pure text and vision-language datasets, enabling controlled cross-modality comparison. We evaluate MAD methods on nine base models spanning different architectures, scales, and modality capabilities. Beyond accuracy, M3MAD-Bench incorporates efficiency-oriented metrics such as token consumption and inference time, providing a holistic view of performance--cost trade-offs. Extensive experiments yield systematic insights into the effectiveness, robustness, and efficiency of MAD across text-only and multimodal scenarios. We believe M3MAD-Bench offers a reliable foundation for future research on standardized MAD evaluation. The code is available at http://github.com/liaolea/M3MAD-Bench.
|
https://arxiv.org/abs/2601.02854
|
Academic Papers
|
svg
|
09405d302a798cdf59dde2317c16cd8f0328b28f4f4584ca6d7f4688ea599dd4
|
2026-01-07T00:00:00-05:00
|
Context-aware Privacy Bounds for Linear Queries
|
arXiv:2601.02855v1 Announce Type: new Abstract: Linear queries, as the basis of broad analysis tasks, are often released through privacy mechanisms based on differential privacy (DP), the most popular framework for privacy protection. However, DP adopts a context-free definition that operates independently of the data-generating distribution. In this paper, we revisit the privacy analysis of the Laplace mechanism through the lens of pointwise maximal leakage (PML). We demonstrate that the distribution-agnostic definition of the DP framework often mandates excessive noise. To address this, we incorporate an assumption about the prior distribution by lower-bounding the probability of any single record belonging to any specific class. With this assumption, we derive a tight, context-aware leakage bound for general linear queries, and prove that our derived bound is strictly tighter than the standard DP guarantee and converges to the DP guarantee as this probability lower bound approaches zero. Numerical evaluations demonstrate that by exploiting this prior knowledge, the required noise scale can be reduced while maintaining privacy guarantees.
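For context, the standard Laplace mechanism the paper analyzes adds noise with scale sensitivity/epsilon; the paper's contribution is a PML argument that a smaller scale can suffice given a lower bound on record-class probabilities (not reproduced here):

```python
import numpy as np

def laplace_linear_query(true_answer, sensitivity, epsilon, rng):
    # Standard Laplace mechanism: scale b = sensitivity / epsilon gives epsilon-DP
    # for a linear query with the given L1 sensitivity.
    return true_answer + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
print(laplace_linear_query(42.0, sensitivity=1.0, epsilon=0.5, rng=rng))
```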
|
https://arxiv.org/abs/2601.02855
|
Academic Papers
|
svg
|
d4acec1d3fd73bb73d17ea61a20ad6e8713f7f7a8af9052770d7e17c8de0bca8
|
2026-01-07T00:00:00-05:00
|
Electricity Price Forecasting: Bridging Linear Models, Neural Networks and Online Learning
|
arXiv:2601.02856v1 Announce Type: new Abstract: Precise day-ahead forecasts for electricity prices are crucial to ensure efficient portfolio management, support strategic decision-making for power plant operations, enable efficient battery storage optimization, and facilitate demand response planning. However, developing an accurate prediction model is highly challenging in an uncertain and volatile market environment. For instance, although linear models generally exhibit competitive performance in predicting electricity prices with minimal computational requirements, they fail to capture relevant nonlinear relationships. Nonlinear models, on the other hand, can improve forecasting accuracy with a surge in computational costs. We propose a novel multivariate neural network approach that combines linear and nonlinear feed-forward neural structures. Unlike previous hybrid models, our approach integrates online learning and forecast combination for efficient training and accuracy improvement. It also incorporates all relevant characteristics, particularly the fundamental relationships arising from wind and solar generation, electricity demand patterns, related energy fuel and carbon markets, in addition to autoregressive dynamics and calendar effects. Compared to the current state-of-the-art benchmark models, the proposed forecasting method significantly reduces computational cost while delivering superior forecasting accuracy (12-13% RMSE and 15-18% MAE reductions). Our results are derived from a six-year forecasting study conducted on major European electricity markets.
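A minimal sketch of a combined linear/nonlinear forecaster in the spirit of the proposed hybrid (the additive combination with a single learned weight and the 24-hour output horizon are our assumptions):

```python
import torch
import torch.nn as nn

class LinearPlusMLP(nn.Module):
    # Linear branch for fundamental/autoregressive effects; MLP for nonlinear corrections.
    def __init__(self, d_in, d_hidden=64, horizon=24):
        super().__init__()
        self.linear = nn.Linear(d_in, horizon)
        self.mlp = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, horizon))
        self.w = nn.Parameter(torch.tensor(0.5))   # learned combination weight

    def forward(self, x):
        return self.w * self.linear(x) + (1.0 - self.w) * self.mlp(x)
```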
|
https://arxiv.org/abs/2601.02856
|
Academic Papers
|
svg
|
821c560b4860a6a783ac29f1668826227511b1297bb14995659dfde58f02c872
|
2026-01-07T00:00:00-05:00
|
Soft Responsive Materials Enhance Humanoid Safety
|
arXiv:2601.02857v1 Announce Type: new Abstract: Humanoid robots are envisioned as general-purpose platforms in human-centered environments, yet their deployment is limited by vulnerability to falls and the risks posed by rigid metal-plastic structures to people and surroundings. We introduce a soft-rigid co-design framework that leverages non-Newtonian fluid-based soft responsive materials to enhance humanoid safety. The material remains compliant during normal interaction but rapidly stiffens under impact, absorbing and dissipating fall-induced forces. Physics-based simulations guide protector placement and thickness and enable learning of active fall policies. Applied to a 42 kg life-size humanoid, the protector markedly reduces peak impact and allows repeated falls without hardware damage, including drops from 3 m and tumbles down long staircases. Across diverse scenarios, the approach improves robot robustness and environmental safety. By uniting responsive materials, structural co-design, and learning-based control, this work advances interaction-safe, industry-ready humanoid robots.
|
https://arxiv.org/abs/2601.02857
|
Academic Papers
|
svg
|
e65d1b8d4df1f992a3e17baa3431f34476c69b113890af93a7bc15768f5deeaa
|
2026-01-07T00:00:00-05:00
|
To Generate or Discriminate? Methodological Considerations for Measuring Cultural Alignment in LLMs
|
arXiv:2601.02858v1 Announce Type: new Abstract: Socio-demographic prompting (SDP) - prompting Large Language Models (LLMs) using demographic proxies to generate culturally aligned outputs - often shows LLM responses as stereotypical and biased. While effective in assessing LLMs' cultural competency, SDP is prone to confounding factors such as prompt sensitivity, decoding parameters, and the inherent difficulty of generation over discrimination tasks due to larger output spaces. These factors complicate interpretation, making it difficult to determine if the poor performance is due to bias or the task design. To address this, we use inverse socio-demographic prompting (ISDP), where we prompt LLMs to discriminate and predict the demographic proxy from the actual and simulated behavior of different users. We use the Goodreads-CSI dataset (Saha et al., 2025), which captures the difficulty of understanding English book reviews for users from India, Mexico, and the USA, and test four LLMs: Aya-23, Gemma-2, GPT-4o, and LLaMA-3.1 with ISDP. Results show that models perform better with actual behaviors than simulated ones, contrary to what SDP suggests. However, performance with both behavior types diminishes and becomes nearly equal at the individual level, indicating limits to personalization.
|
https://arxiv.org/abs/2601.02858
|
Academic Papers
|
svg
|
ec41f3362dc220414a05e2389997f7c7afb30afbfa5692f37d05198cf0bfdc26
|
2026-01-07T00:00:00-05:00
|
Training Language Models with homotokens Leads to Delayed Overfitting
|
arXiv:2601.02867v1 Announce Type: new Abstract: Subword tokenization introduces a computational layer in language models where many distinct token sequences decode to the same surface form and preserve meaning, yet induce different internal computations. Despite this non-uniqueness, language models are typically trained using a single canonical longest-prefix tokenization. We formalize homotokens--alternative valid subword segmentations of the same lexical item--as a strictly meaning-preserving form of data augmentation. We introduce a lightweight training architecture that conditions canonical next-token prediction on sampled homotoken variants via an auxiliary causal encoder and block-causal cross-attention, without modifying the training objective or token interface. In data-constrained pretraining, homotoken augmentation consistently delays overfitting under repeated data exposure and improves generalization across diverse evaluation datasets. In multilingual fine-tuning, we find that the effectiveness of homotokens depends on tokenizer quality: gains are strongest when canonical tokens are highly compressed and diminish when the tokenizer already over-fragments the input. Overall, homotokens provide a simple and modular mechanism for inducing tokenization invariance in language models.
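Enumerating homotokens is straightforward for a single lexical item: list every segmentation into vocabulary pieces (a sketch that ignores BPE merge rules and tokenizer scores):

```python
def homotokens(word, vocab):
    # All segmentations of `word` into vocabulary pieces (exponential in the
    # worst case; fine for single lexical items).
    if not word:
        return [[]]
    segs = []
    for i in range(1, len(word) + 1):
        piece = word[:i]
        if piece in vocab:
            segs += [[piece] + rest for rest in homotokens(word[i:], vocab)]
    return segs

print(homotokens("unhappy", {"un", "hap", "py", "happy", "unhappy"}))
# [['un', 'hap', 'py'], ['un', 'happy'], ['unhappy']]
```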
|
https://arxiv.org/abs/2601.02867
|
Academic Papers
|
svg
|
4ca609a08bd632f183ee152c9eb50de79f577d8b2ba64a3f95681d19c1d1996d
|
2026-01-07T00:00:00-05:00
|
CodeMEM: AST-Guided Adaptive Memory for Repository-Level Iterative Code Generation
|
arXiv:2601.02868v1 Announce Type: new Abstract: Large language models (LLMs) substantially enhance developer productivity in repository-level code generation through interactive collaboration. However, as interactions progress, repository context must be continuously preserved and updated to integrate newly validated information. Meanwhile, the expanding session history increases cognitive burden, often leading to forgetting and the reintroduction of previously resolved errors. Existing memory management approaches show promise but remain limited by natural language-centric representations. To overcome these limitations, we propose CodeMEM, an AST-guided dynamic memory management system tailored for repository-level iterative code generation. Specifically, CodeMEM introduces the Code Context Memory component that dynamically maintains and updates repository context through AST-guided LLM operations, along with the Code Session Memory that constructs a code-centric representation of interaction history and explicitly detects and mitigates forgetting through AST-based analysis. Experimental results on the instruction-following benchmark CodeIF-Bench and the code generation benchmark CoderEval demonstrate that CodeMEM achieves state-of-the-art performance, improving instruction following by 12.2% for the current turn and 11.5% for the session level, and reducing interaction rounds by 2-3, while maintaining competitive inference latency and token efficiency.
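A tiny illustration of AST-guided extraction with Python's standard `ast` module, the kind of structural signal such a memory can maintain across turns (a stand-in for CodeMEM's richer operations):

```python
import ast

def repo_symbols(source):
    # Collect (kind, name, line) for every function and class in a source file.
    tree = ast.parse(source)
    return [(type(n).__name__, n.name, n.lineno)
            for n in ast.walk(tree)
            if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))]

print(repo_symbols("class A:\n    def f(self):\n        pass\n"))
# [('ClassDef', 'A', 1), ('FunctionDef', 'f', 2)]
```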
|
https://arxiv.org/abs/2601.02868
|
Academic Papers
|
svg
|
89e908fb59a261d47119b30c19bcdcefb5791f3e45ce8647290e9f9393df77fc
|
2026-01-07T00:00:00-05:00
|
Quantum-Enhanced Neural Contextual Bandit Algorithms
|
arXiv:2601.02870v1 Announce Type: new Abstract: Stochastic contextual bandits are fundamental for sequential decision-making but pose significant challenges for existing neural network-based algorithms, particularly when scaling to quantum neural networks (QNNs) due to issues such as massive over-parameterization, computational instability, and the barren plateau phenomenon. This paper introduces the Quantum Neural Tangent Kernel-Upper Confidence Bound (QNTK-UCB) algorithm, a novel method that leverages the Quantum Neural Tangent Kernel (QNTK) to address these limitations. By freezing the QNN at a random initialization and utilizing its static QNTK as a kernel for ridge regression, QNTK-UCB bypasses the unstable training dynamics inherent in explicit parameterized quantum circuit training while fully exploiting the unique quantum inductive bias. For a time horizon $T$ and $K$ actions, our theoretical analysis reveals a significantly improved parameter scaling of $\Omega((TK)^3)$ for QNTK-UCB, a substantial reduction compared to $\Omega((TK)^8)$ required by classical NeuralUCB algorithms for similar regret guarantees. Empirical evaluations on non-linear synthetic benchmarks and quantum-native variational quantum eigensolver tasks demonstrate QNTK-UCB's superior sample efficiency in low-data regimes. This work highlights how the inherent properties of QNTK provide implicit regularization and a sharper spectral decay, paving the way for achieving ``quantum advantage'' in online learning.
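The core computation, kernel ridge regression with an exploration bonus under a fixed kernel (here the QNTK at initialization), looks like this (a generic kernel-UCB sketch; regularization and bonus scaling are schematic):

```python
import numpy as np

def kernel_ucb(K_hist, K_cand, k_diag, rewards, lam=1.0, beta=1.0):
    # K_hist: (t, t) kernel over past contexts; K_cand: (n, t) candidates vs history;
    # k_diag: (n,) k(x, x) per candidate. Ridge-regression mean + exploration bonus.
    A = K_hist + lam * np.eye(len(rewards))
    mean = K_cand @ np.linalg.solve(A, rewards)
    var = k_diag - np.einsum("nt,tn->n", K_cand, np.linalg.solve(A, K_cand.T))
    return mean + beta * np.sqrt(np.maximum(var, 0.0))   # pick argmax as the action
```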
|
https://arxiv.org/abs/2601.02870
|
Academic Papers
|
svg
|
d7da1b4d7681e9e2b9c96f3dabdb3e1fb1a2bd2a918b491cb2c8407a1d331efd
|
2026-01-07T00:00:00-05:00
|
SimRPD: Optimizing Recruitment Proactive Dialogue Agents through Simulator-Based Data Evaluation and Selection
|
arXiv:2601.02871v1 Announce Type: new Abstract: Task-oriented proactive dialogue agents play a pivotal role in recruitment, particularly for steering conversations towards specific business outcomes, such as acquiring social-media contacts for private-channel conversion. Although supervised fine-tuning and reinforcement learning have proven effective for training such agents, their performance is heavily constrained by the scarcity of high-quality, goal-oriented domain-specific training data. To address this challenge, we propose SimRPD, a three-stage framework for training recruitment proactive dialogue agents. First, we develop a high-fidelity user simulator to synthesize large-scale conversational data through multi-turn online dialogue. Then we introduce a multi-dimensional evaluation framework based on Chain-of-Intention (CoI) to comprehensively assess the simulator and effectively select high-quality data, incorporating both global-level and instance-level metrics. Finally, we train the recruitment proactive dialogue agent on the selected dataset. Experiments in a real-world recruitment scenario demonstrate that SimRPD outperforms existing simulator-based data selection strategies, highlighting its practical value for industrial deployment and its potential applicability to other business-oriented dialogue scenarios.
|
https://arxiv.org/abs/2601.02871
|
Academic Papers
|
svg
|
78e1f8e6534f5dd029751b73f9737e4b9c884a739b44749434f881d2a24e6cd1
|
2026-01-07T00:00:00-05:00
|
LongBench Pro: A More Realistic and Comprehensive Bilingual Long-Context Evaluation Benchmark
|
arXiv:2601.02872v1 Announce Type: new Abstract: The rapid expansion of context length in large language models (LLMs) has outpaced existing evaluation benchmarks. Current long-context benchmarks often trade off scalability and realism: synthetic tasks underrepresent real-world complexity, while fully manual annotation is costly to scale to extreme lengths and diverse scenarios. We present LongBench Pro, a more realistic and comprehensive bilingual benchmark of 1,500 naturally occurring long-context samples in English and Chinese spanning 11 primary tasks and 25 secondary tasks, with input lengths from 8k to 256k tokens. LongBench Pro supports fine-grained analysis with task-specific metrics and a multi-dimensional taxonomy of context requirement (full vs. partial dependency), length (six levels), and difficulty (four levels calibrated by model performance). To balance quality with scalability, we propose a Human-Model Collaborative Construction pipeline: frontier LLMs draft challenging questions and reference answers, along with design rationales and solution processes, to reduce the cost of expert verification. Experts then rigorously validate correctness and refine problematic cases. Evaluating 46 widely used long-context LLMs on LongBench Pro yields three findings: (1) long-context optimization contributes more to long-context comprehension than parameter scaling; (2) effective context length is typically shorter than the claimed context length, with pronounced cross-lingual misalignment; and (3) the "thinking" paradigm helps primarily models trained with native reasoning, while mixed-thinking designs offer a promising Pareto trade-off. In summary, LongBench Pro provides a robust testbed for advancing long-context understanding.
|
https://arxiv.org/abs/2601.02872
|
Academic Papers
|
svg
|
6c580178dc5965b74b040b970ceca3c773fea1c6d1daf397d393aae0feff0fb8
|
2026-01-07T00:00:00-05:00
|
Warm-Starting Collision-Free Model Predictive Control With Object-Centric Diffusion
|
arXiv:2601.02873v1 Announce Type: new Abstract: Acting in cluttered environments requires predicting and avoiding collisions while still achieving precise control. Conventional optimization-based controllers can enforce physical constraints, but they struggle to produce feasible solutions quickly when many obstacles are present. Diffusion models can generate diverse trajectories around obstacles, yet prior approaches lacked a general and efficient way to condition them on scene structure. In this paper, we show that combining diffusion-based warm-starting, conditioned on a latent object-centric representation of the scene, with a collision-aware model predictive controller (MPC) yields reliable and efficient motion generation under strict time limits. Our approach conditions a diffusion transformer on the system state, task, and surroundings, using an object-centric slot attention mechanism to provide a compact obstacle representation suitable for control. The sampled trajectories are refined by an optimal control problem that enforces rigid-body dynamics and signed-distance collision constraints, producing feasible motions in real time. On benchmark tasks, this hybrid method achieved markedly higher success rates and lower latency than sampling-based planners or either component alone. Real-robot experiments with a torque-controlled Panda confirm reliable and safe execution with MPC.
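Hypothetical glue code for the pipeline: sample candidates from the scene-conditioned diffusion model, rank them cheaply, and refine the best with the collision-aware MPC (every callable below is a stand-in, not the paper's API):

```python
def warm_started_mpc_step(state, task, scene_slots, sample_trajs, rollout_cost, solve_mpc):
    # sample_trajs: scene-conditioned diffusion sampler (stand-in).
    # rollout_cost: cheap scoring of a candidate trajectory (stand-in).
    # solve_mpc: collision-aware optimal control refinement (stand-in).
    candidates = sample_trajs(state, task, scene_slots, num_samples=8)
    warm_start = min(candidates, key=rollout_cost)       # best candidate as initial guess
    return solve_mpc(state, initial_guess=warm_start)    # enforce dynamics + SDF constraints
```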
|
https://arxiv.org/abs/2601.02873
|
Academic Papers
|
svg
|
5f7f4c3c914fbb98ddf753f939596917ddcfbdb87b722a78584cf266e15f7552
|
2026-01-07T00:00:00-05:00
|
Revisiting Data Compression with Language Modeling
|
arXiv:2601.02875v1 Announce Type: new Abstract: In this report, we investigate the potential use of large language models (LLMs) in the task of data compression. Previous works have demonstrated promising results in applying LLMs to compress not only text, but also a wide range of multi-modal data. Despite the favorable performance achieved, several practical questions still pose a challenge to replacing existing data compression algorithms with LLMs. In this work, we explore different methods to achieve a lower adjusted compression rate using LLMs as data compressors. In comparison to previous works, we were able to achieve a new state-of-the-art (SOTA) adjusted compression rate of around $18\%$ on the enwik9 dataset without additional model training. Furthermore, we explore the use of LLMs in compressing non-English data, code data, and byte-stream sequences. We show that while LLMs excel in compressing data in text-dominant domains, their performance on non-natural text sequences remains competitive if they are configured in the right way.
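The quantity being optimized is essentially the model's ideal code length: arithmetic coding spends about -log2 p(token | context) bits per token, and the adjusted rate then additionally amortizes model size (a sketch of the metric, not a full arithmetic coder):

```python
import math

def ideal_code_length_bits(token_probs):
    # Shannon code length under the model: sum of -log2 p(token | context);
    # arithmetic coding approaches this bound to within a few bits.
    return sum(-math.log2(p) for p in token_probs)

probs = [0.5, 0.25, 0.8, 0.9]             # per-token model probabilities
print(ideal_code_length_bits(probs) / len(probs), "bits/token")  # ~0.87
```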
|
https://arxiv.org/abs/2601.02875
|
Academic Papers
|
svg
|
cd9ff4597ccf29e37f724ef9ee58902aa119fbdff589cd4ecc182cdf61cfa5ad
|
2026-01-07T00:00:00-05:00
|
ReTreVal: Reasoning Tree with Validation - A Hybrid Framework for Enhanced LLM Multi-Step Reasoning
|
arXiv:2601.02880v1 Announce Type: new Abstract: Multi-step reasoning remains a key challenge for Large Language Models (LLMs), particularly in complex domains such as mathematics and creative writing. While recent approaches including ReAct, Reflexion, and Self-Refine improve reasoning through iterative refinement and reflection, they often lack structured exploration of alternative solution paths and persistent learning across problems. We propose ReTreVal (Reasoning Tree with Validation), a hybrid framework that integrates Tree-of-Thoughts exploration, self-refinement, LLM-based critique scoring, and reflexion memory to enable bounded and validated multi-step reasoning. ReTreVal constructs a structured reasoning tree with adaptive depth based on problem complexity, where each node undergoes iterative self-critique and refinement guided by explicit LLM-generated feedback. A dual validation mechanism evaluates reasoning quality, coherence, and correctness at each node while persistently storing insights from successful reasoning paths and failure patterns in a reflexion memory buffer, enabling cross-problem learning. Critique-based pruning retains only the top-k highest-scoring nodes at each level, controlling computational cost while preserving high-quality solution paths. We evaluate ReTreVal against ReAct, Reflexion, and Self-Refine across 500 mathematical problems and creative writing tasks using Qwen 2.5 7B as the underlying LLM, and demonstrate that ReTreVal consistently outperforms existing methods through its combination of structured exploration, critique-driven refinement, and cross-problem memory, making it particularly effective for tasks requiring exploratory reasoning, rigorous verification, and knowledge transfer.
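Critique-based pruning at one tree level reduces to a top-k selection over expanded children (a sketch with the LLM calls abstracted as callables):

```python
import heapq

def expand_level(frontier, expand, critique, k=3):
    # One tree level: expand every node into candidate thoughts, score each with
    # the LLM critique, and keep only the top-k (critique-based pruning).
    children = [child for node in frontier for child in expand(node)]
    return heapq.nlargest(k, children, key=critique)
```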
|
https://arxiv.org/abs/2601.02880
|
Academic Papers
|
svg
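As a rough illustration of the critique-based pruning loop described above, the following sketch expands a reasoning tree level by level and retains only the top-k nodes per level; llm_propose and llm_critique are hypothetical stubs standing in for real Qwen 2.5 calls, and the dual validation and reflexion-memory components are omitted.

```python
# Schematic sketch of critique-pruned tree expansion: propose candidate next
# steps per node, score each with an LLM critic, keep the top-k per level.
import random
from dataclasses import dataclass, field

random.seed(0)

@dataclass
class Node:
    steps: list = field(default_factory=list)  # reasoning steps so far
    score: float = 0.0                         # critic score of latest step

def llm_propose(steps, n=3):
    # Placeholder: a real system would prompt the LLM for n candidate steps.
    return [steps + [f"step{len(steps)}-cand{i}"] for i in range(n)]

def llm_critique(steps):
    # Placeholder: a real system would ask the LLM to score quality/coherence.
    return random.random()

def retreval_search(max_depth=3, branch=3, top_k=2):
    frontier = [Node()]
    for _ in range(max_depth):
        children = []
        for node in frontier:
            for cand in llm_propose(node.steps, branch):
                children.append(Node(steps=cand, score=llm_critique(cand)))
        # Critique-based pruning: keep only the top-k highest-scoring nodes.
        frontier = sorted(children, key=lambda n: n.score, reverse=True)[:top_k]
    return max(frontier, key=lambda n: n.score)

best = retreval_search()
print(best.steps, best.score)
```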
|
51c3e701a8d4e3f84e3a5e4c204e10001bd2ae7a9cb87c64cf7f04b0dbd77fb1
|
2026-01-07T00:00:00-05:00
|
Towards Agnostic and Holistic Universal Image Segmentation with Bit Diffusion
|
arXiv:2601.02881v1 Announce Type: new Abstract: This paper introduces a diffusion-based framework for universal image segmentation, making agnostic segmentation possible without depending on mask-based frameworks and instead predicting the full segmentation in a holistic manner. We present several key adaptations to diffusion models, which are important in this discrete setting. Notably, we show that a location-aware palette with our 2D gray code ordering improves performance. Adding a final tanh activation function is crucial for discrete data. On optimizing diffusion parameters, the sigmoid loss weighting consistently outperforms alternatives, regardless of the prediction type used, and we settle on x-prediction. While our current model does not yet surpass leading mask-based architectures, it narrows the performance gap and introduces unique capabilities, such as principled ambiguity modeling, that these models lack. All models were trained from scratch, and we believe that combining our proposed improvements with large-scale pretraining or promptable conditioning could lead to competitive models.
|
https://arxiv.org/abs/2601.02881
|
Academic Papers
|
svg
|
5729bf8c44865fdde85761fc74b0a08da77838c49add17671d79bf9940b8119b
|
2026-01-07T00:00:00-05:00
|
Domain Generalization for Time Series: Enhancing Drilling Regression Models for Stick-Slip Index Prediction
|
arXiv:2601.02884v1 Announce Type: new Abstract: This paper provides a comprehensive comparison of domain generalization techniques applied to time series data within a drilling context, focusing on the prediction of a continuous Stick-Slip Index (SSI), a critical metric for assessing torsional downhole vibrations at the drill bit. The study aims to develop a robust regression model that can generalize across domains by training on 60-second labeled sequences of 1 Hz surface drilling data to predict the SSI. The model is tested in wells that are different from those used during training. To fine-tune the model architecture, a grid search approach is employed to optimize key hyperparameters. A comparative analysis of the Adversarial Domain Generalization (ADG), Invariant Risk Minimization (IRM) and baseline models is presented, along with an evaluation of the effectiveness of transfer learning (TL) in improving model performance. The ADG and IRM models achieve performance improvements of 10% and 8%, respectively, over the baseline model. Most importantly, severe events are detected 60% of the time, compared with 20% for the baseline model. Overall, the results indicate that both ADG and IRM models surpass the baseline, with the ADG model exhibiting a slight advantage over the IRM model. Additionally, applying TL to a pre-trained model further improves performance. Our findings demonstrate the potential of domain generalization approaches in drilling applications, with ADG emerging as the most effective approach.
|
https://arxiv.org/abs/2601.02884
|
Academic Papers
|
svg
|
7dcde5f0d383d5670ce99ad23f01e4f4bf50c836637db9f09de84bb3d9f4ec21
|
2026-01-07T00:00:00-05:00
|
A Mathematical Formalization of Self-Determining Agency
|
arXiv:2601.02885v1 Announce Type: new Abstract: Defining agency is an extremely important challenge for cognitive science and artificial intelligence. Physics generally describes mechanical happenings, but there remains an unbridgeable gap between them and the acts of agents. To discuss the morality and responsibility of agents, it is necessary to model acts; whether such responsible acts can be fully explained by physical determinism has been debated. Although we have already proposed a physical "agent determinism" model that appears to go beyond mere mechanical happenings, we have not yet established a strict mathematical formalism to eliminate ambiguity. Here, we explain why a physical system can follow coarse-grained agent-level determination without violating physical laws by formulating supervenient causation. Generally, a supervenient property, including a coarse-grained one, does not change without a change in its lower base; therefore, a single supervenience alone cannot define supervenient causation. We define supervenient causation as the causal efficacy from the supervenience level to its lower base level. Although an algebraic expression composed of multiple supervenient functions does supervene on the base, a sequence of indices that determines the algebraic expression does not supervene on the base. Therefore, the sequence can possess unique dynamical laws that are independent of the lower base level. This independent dynamics creates the possibility for temporally preceding changes at the supervenience level to cause changes at the lower base level. Such a dual-law system is considered useful for modeling self-determining agents such as humans.
|
https://arxiv.org/abs/2601.02885
|
Academic Papers
|
svg
|
25e80f8ac3daeebd350a3711edaa2ce38866769823864e3118a176cd6913ebcc
|
2026-01-07T00:00:00-05:00
|
RPIQ: Residual-Projected Multi-Collaboration Closed-Loop and Single Instance Quantization for Visually Impaired Assistance
|
arXiv:2601.02888v1 Announce Type: new Abstract: Visually impaired users face significant challenges in daily information access and real-time environmental perception, and there is an urgent need for intelligent assistive systems with accurate recognition capabilities. Although large-scale models provide effective solutions for perception and reasoning, their practical deployment on assistive devices is severely constrained by excessive memory consumption and high inference costs. Moreover, existing quantization strategies often ignore inter-block error accumulation, leading to degraded model stability. To address these challenges, this study proposes a novel quantization framework -- Residual-Projected Multi-Collaboration Closed-Loop and Single Instance Quantization (RPIQ), whose quantization process adopts a multi-collaborative closed-loop compensation scheme based on Single Instance Calibration and Gauss-Seidel Iterative Quantization. Experiments on various types of large-scale models, including language models such as OPT, Qwen, and LLaMA, as well as vision-language models such as CogVLM2, demonstrate that RPIQ can compress models to 4-bit representation while significantly reducing peak memory consumption (approximately 60%-75% reduction compared to original full-precision models). The method maintains performance highly close to full-precision models across multiple language and visual tasks, and exhibits excellent recognition and reasoning capabilities in key applications such as text understanding and visual question answering in complex scenarios. While verifying the effectiveness of RPIQ for deployment in real assistive systems, this study also advances the computational efficiency and reliability of large models, enabling them to provide visually impaired users with the required information accurately and rapidly.
|
https://arxiv.org/abs/2601.02888
|
Academic Papers
|
svg
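The abstract does not spell out the Gauss-Seidel scheme, but the following toy sketch illustrates the general closed-loop idea it gestures at: quantize weights to 4 bits sequentially and fold each element's quantization residual into the next element instead of discarding it, which keeps the accumulated output error bounded. This is a schematic analogy only, not RPIQ's actual algorithm.

```python
# Toy sketch of closed-loop error compensation during 4-bit quantization:
# carrying the residual forward bounds the cumulative error by about one
# quantization step, while independent rounding lets it grow with length.
import numpy as np

def quant4(x, scale):
    # Signed 4-bit round-to-nearest quantization (levels -8..7).
    return np.clip(np.round(x / scale), -8, 7) * scale

def quantize_with_feedback(w, scale):
    q = np.empty_like(w)
    residual = 0.0
    for i in range(len(w)):
        target = w[i] + residual      # fold accumulated error back in
        q[i] = quant4(target, scale)
        residual = target - q[i]
    return q

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=256)
scale = w.std() / 4
naive = quant4(w, scale)
comp = quantize_with_feedback(w, scale)
print(f"naive cumulative error:       {abs(w.sum() - naive.sum()):.5f}")
print(f"compensated cumulative error: {abs(w.sum() - comp.sum()):.5f}")
```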
|
2f6ac85b4c86d0729888702f1a00b256cd5e26907069becff05dcc1ed5240c4f
|
2026-01-07T00:00:00-05:00
|
Transparent Semantic Change Detection with Dependency-Based Profiles
|
arXiv:2601.02891v1 Announce Type: new Abstract: Most modern computational approaches to lexical semantic change detection (LSC) rely on embedding-based distributional word representations with neural networks. Despite the strong performance on LSC benchmarks, they are often opaque. We investigate an alternative method which relies purely on dependency co-occurrence patterns of words. We demonstrate that it is effective for semantic change detection and even outperforms a number of distributional semantic models. We provide an in-depth quantitative and qualitative analysis of the predictions, showing that they are plausible and interpretable.
|
https://arxiv.org/abs/2601.02891
|
Academic Papers
|
svg
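A minimal sketch of the kind of dependency-based profile comparison the abstract describes, assuming change is scored as the cosine distance between a word's dependency co-occurrence counts in two time periods; the toy profiles for "cell" below are invented for illustration.

```python
# Minimal sketch: a word's profile in each period is a count vector over
# (dependency relation, co-occurring lemma) pairs; semantic change is scored
# as cosine distance between the two period profiles.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical dependency profiles of "cell" in two periods.
profile_1900 = Counter({("nmod", "prison"): 40, ("amod", "monastic"): 12})
profile_2000 = Counter({("compound", "phone"): 55, ("nmod", "prison"): 8,
                        ("amod", "stem"): 20})

change_score = 1.0 - cosine(profile_1900, profile_2000)
print(f"semantic change score: {change_score:.3f}")
```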
|
2bd1f643bf841e6e37f83e2492bbd26870d105b978be2f78b727da484c44f14c
|
2026-01-07T00:00:00-05:00
|
Bridging Mechanistic Interpretability and Prompt Engineering with Gradient Ascent for Interpretable Persona Control
|
arXiv:2601.02896v1 Announce Type: new Abstract: Controlling emergent behavioral personas (e.g., sycophancy, hallucination) in Large Language Models (LLMs) is critical for AI safety, yet remains a persistent challenge. Existing solutions face a dilemma: manual prompt engineering is intuitive but unscalable and imprecise, while automatic optimization methods are effective but operate as "black boxes" with no interpretable connection to model internals. We propose a novel framework that adapts gradient ascent to LLMs, enabling targeted prompt discovery. Specifically, we propose two methods, RESGA and SAEGA, that both optimize randomly initialized prompts to achieve representations better aligned with an identified persona direction. We introduce fluent gradient ascent to control the fluency of discovered persona steering prompts. We demonstrate RESGA and SAEGA's effectiveness across Llama 3.1, Qwen 2.5, and Gemma 3 for steering three different personas: sycophancy, hallucination, and myopic reward. Crucially, on sycophancy, our automatically discovered prompts achieve significant improvement (49.90% compared with 79.24%). By grounding prompt discovery in mechanistically meaningful features, our method offers a new paradigm for controllable and interpretable behavior modification.
|
https://arxiv.org/abs/2601.02896
|
Academic Papers
|
svg
|
871bef0eb5fe561447e176e264e2472b7762172bd7d6163da6e7a6994c5fdf7f
|
2026-01-07T00:00:00-05:00
|
Proceedings of the 1st International Workshop on Low Carbon Computing (LOCO 2024)
|
arXiv:2601.02898v1 Announce Type: new Abstract: This is the proceedings of the 1st International Workshop on Low Carbon Computing (LOCO 2024).
|
https://arxiv.org/abs/2601.02898
|
Academic Papers
|
svg
|
eeb06891388059a90171e0a8b3ceeef5755c7027baa65fdf8ef7c6d94dc5b177
|
2026-01-07T00:00:00-05:00
|
SPO-CLAPScore: Enhancing a CLAP-based alignment prediction system with Standardized Preference Optimization for the first XACLE Challenge
|
arXiv:2601.02900v1 Announce Type: new Abstract: The first XACLE Challenge (x-to-audio alignment challenge) addresses the critical need for automatic evaluation metrics that correlate with human perception of audio-text semantic alignment. In this paper, we describe the "Takano_UTokyo_03" system submitted to the XACLE Challenge. Our approach leverages a CLAPScore-based architecture integrated with a novel training method called Standardized Preference Optimization (SPO). SPO standardizes the raw alignment scores provided by each listener, enabling the model to learn relative preferences and mitigate the impact of individual scoring biases. Additionally, we employ listener screening to exclude listeners with inconsistent ratings. Experimental evaluations demonstrate that both SPO and listener screening effectively improve the correlation with human judgment. Our system achieved 6th place in the challenge with a Spearman's rank correlation coefficient (SRCC) of 0.6142, demonstrating competitive performance within a small margin of the top-ranked systems. The code is available at https://github.com/ttakano398/SPO-CLAPScore.
|
https://arxiv.org/abs/2601.02900
|
Academic Papers
|
svg
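A minimal sketch of the listener-wise standardization step that SPO builds on, as described in the abstract: z-scoring each listener's raw alignment scores so the model learns relative preferences rather than each rater's absolute scale. The toy ratings are invented, and the full SPO training objective is not reproduced.

```python
# Minimal sketch: standardize each listener's raw scores to zero mean and
# unit variance, removing per-rater offset and scale biases.
import numpy as np

def standardize_per_listener(ratings: dict) -> dict:
    """ratings: listener_id -> list of raw alignment scores."""
    out = {}
    for listener, scores in ratings.items():
        s = np.asarray(scores, dtype=float)
        std = s.std()
        out[listener] = (s - s.mean()) / std if std > 0 else np.zeros_like(s)
    return out

raw = {"listener_a": [90, 95, 85],   # generous, low-variance rater
       "listener_b": [40, 60, 20]}   # harsh, high-variance rater
print(standardize_per_listener(raw))
```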
|
1312aa6bb10f20726388d69432fad5c9b568445308fca8d9c8cf0e7f0bf4ceeb
|
2026-01-07T00:00:00-05:00
|
Logical Phase Transitions: Understanding Collapse in LLM Logical Reasoning
|
arXiv:2601.02902v1 Announce Type: new Abstract: Symbolic logical reasoning is a critical yet underexplored capability of large language models (LLMs), providing reliable and verifiable decision-making in high-stakes domains such as mathematical reasoning and legal judgment. In this study, we present a systematic analysis of logical reasoning under controlled increases in logical complexity, and reveal a previously unrecognized phenomenon, which we term Logical Phase Transitions: rather than degrading smoothly, logical reasoning performance remains stable within a regime but collapses abruptly beyond a critical logical depth, mirroring physical phase transitions such as water freezing beyond a critical temperature threshold. Building on this insight, we propose Neuro-Symbolic Curriculum Tuning, a principled framework that adaptively aligns natural language with logical symbols to establish a shared representation, and reshapes training dynamics around phase-transition boundaries to progressively strengthen reasoning at increasing logical depths. Experiments on five benchmarks show that our approach effectively mitigates logical reasoning collapse at high complexity, yielding average accuracy gains of +1.26 in naive prompting and +3.95 in CoT, while improving generalization to unseen logical compositions. Code and data are available at https://github.com/AI4SS/Logical-Phase-Transitions.
|
https://arxiv.org/abs/2601.02902
|
Academic Papers
|
svg
|
ca83a129b0630731407d422849850bcfdceb19f34bac93f386e874b0cab73c04
|
2026-01-07T00:00:00-05:00
|
Site-Specific and Frequency-Dependent Channel Characterization and MIMO Performance in FR3
|
arXiv:2601.02903v1 Announce Type: new Abstract: Next-generation wireless systems aim to enable on-demand connectivity through dynamic spectrum utilization. Motivated by this vision, this paper investigates the propagation characteristics and MIMO performance of the upper mid-band, spanning approximately 7-24 GHz and unofficially referred to as FR3. Using site-specific ray-tracing (RT) simulations based on the Sionna framework, we analyze indoor and outdoor environments at representative frequencies across FR1, FR3, and FR2, including 3.5, 7, 10, 14, 20, 24, and 28 GHz, under both single-antenna and multi-antenna configurations. The results show that FR3 exhibits intermediate propagation behavior between sub-6 GHz and millimeter-wave bands while sustaining effective spatial multiplexing and favorable spectral efficiency. Furthermore, large-array analysis indicates that performance gains in FR3 are closely tied to antenna scaling, highlighting the importance of large-size or large-aperture MIMO architectures for practical deployments.
|
https://arxiv.org/abs/2601.02903
|
Academic Papers
|
svg
|
8956ff9f82c3d555f468052ca99c1c6a71f17ad1937be30fc1a91ea31fbd5e47
|
2026-01-07T00:00:00-05:00
|
LOST-3DSG: Lightweight Open-Vocabulary 3D Scene Graphs with Semantic Tracking in Dynamic Environments
|
arXiv:2601.02905v1 Announce Type: new Abstract: Tracking objects that move within dynamic environments is a core challenge in robotics. Recent research has advanced this topic significantly; however, many existing approaches remain inefficient due to their reliance on heavy foundation models. To address this limitation, we propose LOST-3DSG, a lightweight open-vocabulary 3D scene graph designed to track dynamic objects in real-world environments. Our method adopts a semantic approach to entity tracking based on word2vec and sentence embeddings, enabling an open-vocabulary representation while avoiding the necessity of storing dense CLIP visual features. As a result, LOST-3DSG achieves superior performance compared to approaches that rely on high-dimensional visual embeddings. We evaluate our method through qualitative and quantitative experiments conducted in a real 3D environment using a TIAGo robot. The results demonstrate the effectiveness and efficiency of LOST-3DSG in dynamic object tracking. Code and supplementary material are publicly available on the project website at https://lab-rococo-sapienza.github.io/lost-3dsg/.
|
https://arxiv.org/abs/2601.02905
|
Academic Papers
|
svg
|
be8871b640c78f382f9c76c484115f3f7239a2d8598c4057328803ef11bbea94
|
2026-01-07T00:00:00-05:00
|
Linear Script Representations in Speech Foundation Models Enable Zero-Shot Transliteration
|
arXiv:2601.02906v1 Announce Type: new Abstract: Multilingual speech foundation models such as Whisper are trained on web-scale data, where data for each language consists of a myriad of regional varieties. However, different regional varieties often employ different scripts to write the same language, rendering speech recognition output also subject to non-determinism in the output script. To mitigate this problem, we show that script is linearly encoded in the activation space of multilingual speech models, and that modifying activations at inference time enables direct control over output script. We find the addition of such script vectors to activations at test time can induce script change even in unconventional language-script pairings (e.g. Italian in Cyrillic and Japanese in Latin script). We apply this approach to inducing post-hoc control over the script of speech recognition output, where we observe competitive performance across all model sizes of Whisper.
|
https://arxiv.org/abs/2601.02906
|
Academic Papers
|
svg
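A minimal sketch of the steering mechanism the abstract describes, demonstrated on a toy linear layer rather than Whisper: estimate a script direction as the difference of mean activations between two script conditions, then add it to a layer's output at inference via a forward hook. How the paper extracts the direction and chooses layers is not specified here, so treat those details as assumptions.

```python
# Toy sketch of a "script vector": difference-of-means direction between two
# script conditions, added to a layer's activations at test time via a hook.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(16, 16)  # stand-in for one hidden layer of a speech model

# Pretend these are cached hidden states from Latin- vs Cyrillic-script runs.
acts_latin = torch.randn(100, 16)
acts_cyrillic = torch.randn(100, 16) + 0.5

script_vector = acts_cyrillic.mean(0) - acts_latin.mean(0)

def add_script_vector(module, inputs, output, alpha=1.0):
    # Steering: shift the layer's output along the script direction.
    return output + alpha * script_vector

handle = layer.register_forward_hook(add_script_vector)
steered = layer(torch.randn(1, 16))
handle.remove()
print(steered.shape)
```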
|
0df67ce06b68a6ec6df1f96a3b1fc5e4d4f71fc78bb9e25156b662a32f44baa9
|
2026-01-07T00:00:00-05:00
|
Beyond the Black Box: Theory and Mechanism of Large Language Models
|
arXiv:2601.02907v1 Announce Type: new Abstract: The rapid emergence of Large Language Models (LLMs) has precipitated a profound paradigm shift in Artificial Intelligence, delivering monumental engineering successes that increasingly impact modern society. However, a critical paradox persists within the current field: despite the empirical efficacy, our theoretical understanding of LLMs remains disproportionately nascent, forcing these systems to be treated largely as ``black boxes''. To address this theoretical fragmentation, this survey proposes a unified lifecycle-based taxonomy that organizes the research landscape into six distinct stages: Data Preparation, Model Preparation, Training, Alignment, Inference, and Evaluation. Within this framework, we provide a systematic review of the foundational theories and internal mechanisms driving LLM performance. Specifically, we analyze core theoretical issues such as the mathematical justification for data mixtures, the representational limits of various architectures, and the optimization dynamics of alignment algorithms. Moving beyond current best practices, we identify critical frontier challenges, including the theoretical limits of synthetic data self-improvement, the mathematical bounds of safety guarantees, and the mechanistic origins of emergent intelligence. By connecting empirical observations with rigorous scientific inquiry, this work provides a structured roadmap for transitioning LLM development from engineering heuristics toward a principled scientific discipline.
|
https://arxiv.org/abs/2601.02907
|
Academic Papers
|
svg
|
bd49b679b83cac34c25fd99d710c14f0f0696e98e905e288c1fb39fee5d02f63
|
2026-01-07T00:00:00-05:00
|
TA-Prompting: Enhancing Video Large Language Models for Dense Video Captioning via Temporal Anchors
|
arXiv:2601.02908v1 Announce Type: new Abstract: Dense video captioning aims to interpret and describe all temporally localized events throughout an input video. Recent state-of-the-art methods leverage large language models (LLMs) to provide detailed moment descriptions for video data. However, existing VideoLLMs still struggle to identify precise event boundaries in untrimmed videos, leaving the generated captions improperly grounded. In this paper, we propose TA-Prompting, which enhances VideoLLMs via Temporal Anchors that learn to precisely localize events and prompt the VideoLLMs to perform temporal-aware video event understanding. During inference, in order to properly determine the output caption sequence from an arbitrary number of events presented within a video, we introduce an event coherent sampling strategy to select event captions with sufficient coherence across temporal events and cross-modal similarity with the given video. Through extensive experiments on benchmark datasets, we show that our TA-Prompting compares favorably against state-of-the-art VideoLLMs, yielding superior performance on dense video captioning and temporal understanding tasks including moment retrieval and temporalQA.
|
https://arxiv.org/abs/2601.02908
|
Academic Papers
|
svg
|
eefb547cf3c0a7a8545c6ed81d914cc024e54813cade08f1fd8cae55e56cfadc
|
2026-01-07T00:00:00-05:00
|
Image, Word and Thought: A More Challenging Language Task for the Iterated Learning Model
|
arXiv:2601.02911v1 Announce Type: new Abstract: The iterated learning model simulates the transmission of language from generation to generation in order to explore how the constraints imposed by language transmission facilitate the emergence of language structure. Despite each modelled language learner starting from a blank slate, the presence of a bottleneck limiting the number of utterances to which the learner is exposed can lead to the emergence of language that lacks ambiguity, is governed by grammatical rules, and is consistent over successive generations, that is, one that is expressive, compositional and stable. The recent introduction of a more computationally tractable and ecologically valid semi-supervised iterated learning model, combining supervised and unsupervised learning within an autoencoder architecture, has enabled exploration of language transmission dynamics for much larger meaning-signal spaces. Here, for the first time, the model has been successfully applied to a language learning task involving the communication of much more complex meanings: seven-segment display images. Agents in this model are able to learn and transmit a language that is expressive: distinct codes are employed for all 128 glyphs; compositional: signal components consistently map to meaning components; and stable: the language does not change from generation to generation.
|
https://arxiv.org/abs/2601.02911
|
Academic Papers
|
svg
|
093816e54984c00e2faf3fa52b1e01a2d7900b0df3be2a90d8242ea78438662f
|
2026-01-07T00:00:00-05:00
|
Vulnerabilities of Audio-Based Biometric Authentication Systems Against Deepfake Speech Synthesis
|
arXiv:2601.02914v1 Announce Type: new Abstract: As audio deepfakes transition from research artifacts to widely available commercial tools, robust biometric authentication faces pressing security threats in high-stakes industries. This paper presents a systematic empirical evaluation of state-of-the-art speaker authentication systems based on a large-scale speech synthesis dataset, revealing two major security vulnerabilities: 1) modern voice cloning models trained on very small samples can easily bypass commercial speaker verification systems; and 2) anti-spoofing detectors struggle to generalize across different methods of audio synthesis, leading to a significant gap between in-domain performance and real-world robustness. These findings call for a reconsideration of security measures and stress the need for architectural innovations, adaptive defenses, and the transition towards multi-factor authentication.
|
https://arxiv.org/abs/2601.02914
|
Academic Papers
|
svg
|
cb3fa13f601dcd524008085f8ec87407d2b834ab9d1e5625d168fbf7cabf200c
|
2026-01-07T00:00:00-05:00
|
ChemBART: A Pre-trained BART Model Assisting Organic Chemistry Analysis
|
arXiv:2601.02915v1 Announce Type: new Abstract: Recent advances in large language models (LLMs) have demonstrated transformative potential across diverse fields. While LLMs have been applied to simplified molecular-input line-entry system (SMILES) representations in computer-aided synthesis planning (CASP), existing methodologies typically address single tasks, such as precursor prediction. We introduce ChemBART, a SMILES-based LLM pre-trained on chemical reactions, which enables a unified model for multiple downstream chemical tasks--achieving the paradigm of "one model, one pre-training, multiple tasks." By leveraging outputs from a mask-filling pre-training task on reaction expressions, ChemBART effectively solves a variety of chemical problems, including precursor/reagent generation, temperature-yield regression, molecular property classification, and optimizing the policy and value functions within a reinforcement learning framework, integrated with Monte Carlo tree search for multi-step synthesis route design. Unlike single-molecule pre-trained LLMs constrained to specific applications, ChemBART addresses broader chemical challenges and integrates them for comprehensive synthesis planning. Crucially, ChemBART-designed multi-step synthesis routes and reaction conditions directly inspired wet-lab validation, which confirmed shorter pathways with ~30% yield improvement over literature benchmarks. Our work validates the power of reaction-focused pre-training and showcases the broad utility of ChemBART in advancing the complete synthesis planning cycle.
|
https://arxiv.org/abs/2601.02915
|
Academic Papers
|
svg
|
b793bc297f68b530f53844b1739ecf3e9218830e7a725514629c372d34f20950
|
2026-01-07T00:00:00-05:00
|
RAL2M: Retrieval Augmented Learning-To-Match Against Hallucination in Compliance-Guaranteed Service Systems
|
arXiv:2601.02917v1 Announce Type: new Abstract: Hallucination is a major concern in LLM-driven service systems, necessitating explicit knowledge grounding for compliance-guaranteed responses. In this paper, we introduce Retrieval-Augmented Learning-to-Match (RAL2M), a novel framework that eliminates generation hallucination by repositioning LLMs as query-response matching judges within a retrieval-based system, providing a robust alternative to purely generative approaches. To further mitigate judgment hallucination, we propose a query-adaptive latent ensemble strategy that explicitly models heterogeneous model competence and interdependencies among LLMs, deriving a calibrated consensus decision. Extensive experiments on large-scale benchmarks demonstrate that the proposed method effectively leverages the "wisdom of the crowd" and significantly outperforms strong baselines. Finally, we discuss best practices and promising directions for further exploiting latent representations in future work.
|
https://arxiv.org/abs/2601.02917
|
Academic Papers
|
svg
|
c5afb1abca9d9da46a5c4d0692a37aa1353d7dc372bebd0d8838e38b6584d1d0
|
2026-01-07T00:00:00-05:00
|
Zoom-IQA: Image Quality Assessment with Reliable Region-Aware Reasoning
|
arXiv:2601.02918v1 Announce Type: new Abstract: Image Quality Assessment (IQA) is a long-standing problem in computer vision. Previous methods typically focus on predicting numerical scores without explanation or provide low-level descriptions lacking precise scores. Recent reasoning-based vision language models (VLMs) have shown strong potential for IQA, enabling joint generation of quality descriptions and scores. However, we notice that existing VLM-based IQA methods tend to exhibit unreliable reasoning due to their limited capability of integrating visual and textual cues. In this work, we introduce Zoom-IQA, a VLM-based IQA model to explicitly emulate key cognitive behaviors: uncertainty awareness, region reasoning, and iterative refinement. Specifically, we present a two-stage training pipeline: 1) supervised fine-tuning (SFT) on our Grounded-Rationale-IQA (GR-IQA) dataset to teach the model to ground its assessments in key regions; and 2) reinforcement learning (RL) for dynamic policy exploration, primarily stabilized by our KL-Coverage regularizer to prevent reasoning and scoring diversity collapse, and supported by a Progressive Re-sampling Strategy to mitigate annotation bias. Extensive experiments show that Zoom-IQA achieves improved robustness, explainability, and generalization. The application to downstream tasks, such as image restoration, further demonstrates the effectiveness of Zoom-IQA.
|
https://arxiv.org/abs/2601.02918
|
Academic Papers
|
svg
|
b7b4cc2da2db7b98baff73a38fa95736cd11e260582e2a2f86dd4ab5cfa61015
|
2026-01-07T00:00:00-05:00
|
Intersection patterns of set systems on manifolds with slowly growing homological shatter functions
|
arXiv:2601.02920v1 Announce Type: new Abstract: A theorem of Matou\v{s}ek asserts that for any $k \ge 2$, any set system whose shatter function is $o(n^k)$ enjoys a fractional Helly theorem: in the $k$-wise intersection hypergraph, positive density implies a linear-size clique. Kalai and Meshulam conjectured a generalization of that phenomenon to homological shatter functions. It was verified for set systems with bounded homological shatter functions and a ground set with a forbidden homological minor (which includes $\mathbb{R}^d$ by a homological analogue of the van Kampen-Flores theorem). We present two contributions to this line of research: - We study homological minors in certain manifolds (possibly with boundary), for which we prove analogues of the van Kampen-Flores theorem and of the Hanani-Tutte theorem. - We introduce graded analogues of the Radon and Helly numbers of set systems and relate their growth rate to the original parameters. This allows us to extend the verification of the Kalai-Meshulam conjecture to sufficiently slowly growing homological shatter functions.
|
https://arxiv.org/abs/2601.02920
|
Academic Papers
|
svg
|
ecafa32a97a6f401cdfd04f32d1d4a148abd72cdc3e534a0659b38a03d1f08d7
|
2026-01-07T00:00:00-05:00
|
DCG ReID: Disentangling Collaboration and Guidance Fusion Representations for Multi-modal Vehicle Re-Identification
|
arXiv:2601.02924v1 Announce Type: new Abstract: Multi-modal vehicle Re-Identification (ReID) aims to leverage complementary information from RGB, Near Infrared (NIR), and Thermal Infrared (TIR) modalities to retrieve the same vehicle. The challenges of multi-modal vehicle ReID arise from the uncertainty of modality quality distribution induced by inherent discrepancies across modalities, resulting in distinct conflicting fusion requirements for data with balanced and unbalanced quality distributions. Existing methods handle all multi-modal data within a single fusion model, overlooking the different needs of the two data types and making it difficult to decouple the conflict between intra-class consistency and inter-modal heterogeneity. To this end, we propose Disentangled Collaboration and Guidance Fusion Representations for Multi-modal Vehicle ReID (DCG-ReID). Specifically, to disentangle heterogeneous quality-distributed modal data without mutual interference, we first design the Dynamic Confidence-based Disentangling Weighting (DCDW) mechanism: dynamically reweighting three-modal contributions via interaction-derived modal confidence to build a disentangled fusion framework. Building on DCDW, we develop two scenario-specific fusion strategies: (1) for balanced quality distributions, the Collaboration Fusion Module (CFM) mines pairwise consensus features to capture shared discriminative information and boost intra-class consistency; (2) for unbalanced distributions, the Guidance Fusion Module (GFM) implements differential amplification of modal discriminative disparities to reinforce dominant modality advantages, guide auxiliary modalities to mine complementary discriminative information, and mitigate inter-modal divergence to boost multi-modal joint decision performance. Extensive experiments on three multi-modal ReID benchmarks (WMVeID863, MSVR310, RGBNT100) validate the effectiveness of our method. Code will be released upon acceptance.
|
https://arxiv.org/abs/2601.02924
|
Academic Papers
|
svg
|
7dc3c82158c119cae7fcd7f67f9bd471c0a90dfd0f4a999a5749f4224d2a236a
|
2026-01-07T00:00:00-05:00
|
PrismVAU: Prompt-Refined Inference System for Multimodal Video Anomaly Understanding
|
arXiv:2601.02927v1 Announce Type: new Abstract: Video Anomaly Understanding (VAU) extends traditional Video Anomaly Detection (VAD) by not only localizing anomalies but also describing and reasoning about their context. Existing VAU approaches often rely on fine-tuned multimodal large language models (MLLMs) or external modules such as video captioners, which introduce costly annotations, complex training pipelines, and high inference overhead. In this work, we introduce PrismVAU, a lightweight yet effective system for real-time VAU that leverages a single off-the-shelf MLLM for anomaly scoring, explanation, and prompt optimization. PrismVAU operates in two complementary stages: (1) a coarse anomaly scoring module that computes frame-level anomaly scores via similarity to textual anchors, and (2) an MLLM-based refinement module that contextualizes anomalies through system and user prompts. Both textual anchors and prompts are optimized with a weakly supervised Automatic Prompt Engineering (APE) framework. Extensive experiments on standard VAD benchmarks demonstrate that PrismVAU delivers competitive detection performance and interpretable anomaly explanations -- without relying on instruction tuning, frame-level annotations, external modules, or dense processing -- making it an efficient and practical solution for real-world applications.
|
https://arxiv.org/abs/2601.02927
|
Academic Papers
|
svg
|
2d2aa4754bb2c0ff0c2b9fd18d6b2677c165d9b494545a1a552ce3340554a781
|
2026-01-07T00:00:00-05:00
|
HybridSolarNet: A Lightweight and Explainable EfficientNet-CBAM Architecture for Real-Time Solar Panel Fault Detection
|
arXiv:2601.02928v1 Announce Type: new Abstract: Manual inspection of solar panel systems is a tedious, costly, and error-prone task, motivating Unmanned Aerial Vehicle (UAV)-based monitoring. Though deep learning models offer excellent fault detection capabilities, almost all existing methods are either too large and heavy for edge computing devices or report biased accuracy estimates due to ineffective learning techniques. We propose a new solar panel fault detection model called HybridSolarNet, which integrates EfficientNet-B0 with the Convolutional Block Attention Module (CBAM). We implemented it on the Kaggle Solar Panel Images competition dataset with a strict split-before-augmentation protocol that avoids leakage in accuracy estimation, and introduced focal loss and cosine annealing. Ablation analysis validates the accuracy gain contributed by CBAM (+1.53%) and the benefit of focal loss in recognizing classes with imbalanced samples. In 5-fold stratified cross-validation on the competition dataset, overall average accuracy reached 92.37% +/- 0.41 with an F1-score of 0.9226 +/- 0.39, outperforming baselines such as VGG19 while requiring merely 16.3 MB of storage, i.e., 32 times less. Its inference speed of 54.9 FPS with GPU support makes it a strong candidate for real-time UAV deployment. Moreover, Grad-CAM visualizations illustrate that HybridSolarNet focuses on actual fault locations rather than irrelevant regions.
|
https://arxiv.org/abs/2601.02928
|
Academic Papers
|
svg
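For reference, a minimal sketch of the focal loss named in the abstract, which down-weights well-classified examples so training concentrates on hard, minority-class faults; gamma=2.0 and alpha=0.25 are common defaults, not necessarily the paper's settings.

```python
# Minimal focal loss sketch: scale cross-entropy by (1 - p_t)^gamma so easy
# examples (p_t near 1) contribute little and hard examples dominate.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                  # probability of the true class
    return (alpha * (1 - pt) ** gamma * ce).mean()

logits = torch.randn(8, 6)               # e.g., 6 solar-panel condition classes
targets = torch.randint(0, 6, (8,))
print(focal_loss(logits, targets))
```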
|
d0df55f83f92e7084daf63615f3f323655007bb25ea416701ddb09e2da1e5947
|
2026-01-07T00:00:00-05:00
|
Probabilistic Time Slot Leasing in TDMA-Based IoT Networks for Enhanced Channel Utilization
|
arXiv:2601.02930v1 Announce Type: new Abstract: In large-scale resource-constrained wireless networks, such as those prevalent in the Internet of Things (IoT), efficient communication scheduling remains a critical challenge. Among the various approaches, Time Division Multiple Access (TDMA) protocols have been widely adopted for their structured and collision-free communication capabilities. Nevertheless, despite extensive research in this area, current solutions often exhibit suboptimal performance, particularly in dynamic environments where node activity levels fluctuate over time. This paper introduces a novel fully distributed TDMA-based scheduling protocol that intelligently maximizes the utilization of communication resources. The proposed approach adaptively reallocates underutilized time slots, originally assigned to temporarily inactive nodes, to those experiencing higher communication demands. This dynamic reallocation not only improves channel utilization but also reduces idle periods, thereby enhancing overall network efficiency. To further enhance performance, we incorporate a lightweight probabilistic mechanism that governs the temporal leasing of unused slots. This mechanism balances the trade-off between slot availability and transmission reliability, minimizing packet loss while preserving fairness and stability within the network. Simulations across a range of network scenarios demonstrate that our protocol significantly improves throughput, latency, and reliability in resource-constrained environments. These results highlight the protocol's potential as a robust and scalable solution for adaptive and energy-efficient scheduling in next-generation IoT networks.
|
https://arxiv.org/abs/2601.02930
|
Academic Papers
|
svg
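A toy simulation sketch of the probabilistic leasing idea in the abstract: when a slot's owner is idle, a backlogged node transmits in that slot only with lease probability p, trading extra utilization against occasional collisions when activity knowledge is imperfect. The activity model and all numbers below are illustrative, not the paper's protocol.

```python
# Toy sketch: probabilistic leasing of unused TDMA slots. Owners transmit in
# their own slots when active; otherwise a backlogged node may lease the slot
# with probability P_LEASE, at the risk of colliding if the owner reappears.
import random

random.seed(42)
N_SLOTS, FRAMES, P_LEASE = 8, 1000, 0.7
served_owner = served_leased = collisions = 0

for _ in range(FRAMES):
    for slot in range(N_SLOTS):
        owner_active = random.random() < 0.4    # fluctuating owner demand
        backlogged = random.random() < 0.6      # another node has data queued
        lessee_sends = backlogged and random.random() < P_LEASE
        if owner_active and lessee_sends:
            collisions += 1                     # stale activity knowledge
        elif owner_active:
            served_owner += 1
        elif lessee_sends:
            served_leased += 1

total = N_SLOTS * FRAMES
print(f"utilization: {(served_owner + served_leased) / total:.2%}, "
      f"collision rate: {collisions / total:.2%}")
```

Sweeping P_LEASE in such a simulation exposes the trade-off the abstract highlights: higher leasing probability raises channel utilization but also the collision (packet-loss) rate.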
|
c7bd638baeae48f1f2ec9ecc516023177ef1ad692c15e29ebd8347dab72252e8
|
2026-01-07T00:00:00-05:00
|
Memorization, Emergence, and Explaining Reversal Failures: A Controlled Study of Relational Semantics in LLMs
|
arXiv:2601.02931v1 Announce Type: new Abstract: Autoregressive LLMs perform well on relational tasks that require linking entities via relational words (e.g., father/son, friend), but it is unclear whether they learn the logical semantics of such relations (e.g., symmetry and inversion logic) and, if so, whether reversal-type failures arise from missing relational semantics or left-to-right order bias. We propose a controlled Knowledge Graph-based synthetic framework that generates text from symmetric/inverse triples, train GPT-style autoregressive models from scratch, and evaluate memorization, logical inference, and in-context generalization to unseen entities to address these questions. We find a sharp phase transition in which relational semantics emerge with sufficient logic-bearing supervision, even in shallow (2-3 layer) models, and that successful generalization aligns with stable intermediate-layer signals. Finally, order-matched forward/reverse tests and a diffusion baseline indicate that reversal failures are primarily driven by autoregressive order bias rather than deficient inversion semantics.
|
https://arxiv.org/abs/2601.02931
|
Academic Papers
|
svg
|
715b8eacc88a17aef86b193b0c4d723150424464014e4d25f89c2485de3cea53
|
2026-01-07T00:00:00-05:00
|
Pearmut: Human Evaluation of Translation Made Trivial
|
arXiv:2601.02933v1 Announce Type: new Abstract: Human evaluation is the gold standard for multilingual NLP, but it is often skipped in practice and substituted with automatic metrics, because it is notoriously complex and slow to set up with existing tools, requiring substantial engineering and operational overhead. We introduce Pearmut, a lightweight yet feature-rich platform that makes end-to-end human evaluation as easy to run as automatic evaluation. Pearmut removes common entry barriers and provides support for evaluating multilingual tasks, with a particular focus on machine translation. The platform implements standard evaluation protocols, including DA, ESA, and MQM, but is also extensible to allow prototyping new protocols. It features document-level context, absolute and contrastive evaluation, attention checks, ESAAI pre-annotations, and both static and active learning-based assignment strategies. Pearmut enables reliable human evaluation to become a practical, routine component of model development and diagnosis rather than an occasional effort.
|
https://arxiv.org/abs/2601.02933
|
Academic Papers
|
svg
|
ab2f00c536844f04dfff974b3142c3ff6fa5406cf0583e9fd2f36e49ffca34b2
|
2026-01-07T00:00:00-05:00
|
SastBench: A Benchmark for Testing Agentic SAST Triage
|
arXiv:2601.02941v1 Announce Type: new Abstract: SAST (Static Application Security Testing) tools are among the most widely used techniques in defensive cybersecurity, employed by commercial and non-commercial organizations to identify potential vulnerabilities in software. Despite their great utility, they generate numerous false positives, requiring costly manual filtering (aka triage). While LLM-powered agents show promise for automating cybersecurity tasks, existing benchmarks fail to emulate real-world SAST finding distributions. We introduce SastBench, a benchmark for evaluating SAST triage agents that combines real CVEs as true positives with filtered SAST tool findings as approximate false positives. SastBench features an agent-agnostic design. We evaluate different agents on the benchmark and present a comparative analysis of their performance, provide a detailed analysis of the dataset, and discuss the implications for future development.
|
https://arxiv.org/abs/2601.02941
|
Academic Papers
|
svg
|
32281212d03ec505dcb560d2a0537f5c8d6ab763f0a11e56145ac5efa8d7c4cd
|
2026-01-07T00:00:00-05:00
|
MixTTE: Multi-Level Mixture-of-Experts for Scalable and Adaptive Travel Time Estimation
|
arXiv:2601.02943v1 Announce Type: new Abstract: Accurate Travel Time Estimation (TTE) is critical for ride-hailing platforms, where errors directly impact user experience and operational efficiency. While existing production systems excel at holistic route-level dependency modeling, they struggle to capture city-scale traffic dynamics and long-tail scenarios, leading to unreliable predictions in large urban networks. In this paper, we propose MixTTE, a scalable and adaptive framework that synergistically integrates link-level modeling with industrial route-level TTE systems. Specifically, we propose a spatio-temporal external attention module to capture global traffic dynamic dependencies across million-scale road networks efficiently. Moreover, we construct a stabilized graph mixture-of-experts network to handle heterogeneous traffic patterns while maintaining inference efficiency. Furthermore, an asynchronous incremental learning strategy is tailored to enable real-time and stable adaptation to dynamic traffic distribution shifts. Experiments on real-world datasets validate that MixTTE significantly reduces prediction errors compared to seven baselines. MixTTE has been deployed in DiDi, substantially improving the accuracy and stability of the TTE service.
|
https://arxiv.org/abs/2601.02943
|
Academic Papers
|
svg
|
ebeb37b665c4767235050ede395d0f567665614820ba4b580d1f700209892e8c
|
2026-01-07T00:00:00-05:00
|
VTONQA: A Multi-Dimensional Quality Assessment Dataset for Virtual Try-on
|
arXiv:2601.02945v1 Announce Type: new Abstract: With the rapid development of e-commerce and digital fashion, image-based virtual try-on (VTON) has attracted increasing attention. However, existing VTON models often suffer from artifacts such as garment distortion and body inconsistency, highlighting the need for reliable quality evaluation of VTON-generated images. To this end, we construct VTONQA, the first multi-dimensional quality assessment dataset specifically designed for VTON, which contains 8,132 images generated by 11 representative VTON models, along with 24,396 mean opinion scores (MOSs) across three evaluation dimensions (i.e., clothing fit, body compatibility, and overall quality). Based on VTONQA, we benchmark both VTON models and a diverse set of image quality assessment (IQA) metrics, revealing the limitations of existing methods and highlighting the value of the proposed dataset. We believe that the VTONQA dataset and corresponding benchmarks will provide a solid foundation for perceptually aligned evaluation, benefiting both the development of quality assessment methods and the advancement of VTON models.
|
https://arxiv.org/abs/2601.02945
|
Academic Papers
|
svg
|
7f8d80382e50dc6e7dffa3a8d59331e3ee02be6ec861656bf8de4e86b7d285c6
|
2026-01-07T00:00:00-05:00
|
Quality Degradation Attack in Synthetic Data
|
arXiv:2601.02947v1 Announce Type: new Abstract: Synthetic Data Generation (SDG) can be used to facilitate privacy-preserving data sharing. However, most existing research focuses on privacy attacks where the adversary is the recipient of the released synthetic data and attempts to infer sensitive information from it. This study investigates quality degradation attacks initiated by adversaries who possess access to the real dataset or control over the generation process, such as the data owner, the synthetic data provider, or potential intruders. We formalize a corresponding threat model and empirically evaluate the effectiveness of targeted manipulations of real data (e.g., label flipping and feature-importance-based interventions) on the quality of generated synthetic data. The results show that even small perturbations can substantially reduce downstream predictive performance and increase statistical divergence, exposing vulnerabilities within SDG pipelines. This study highlights the need to integrate integrity verification and robustness mechanisms, alongside privacy protection, to ensure the reliability and trustworthiness of synthetic data sharing frameworks.
|
https://arxiv.org/abs/2601.02947
|
Academic Papers
|
svg
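A minimal sketch of the label-flipping manipulation the abstract lists as an example attack: an adversary with access to the real training data flips a small fraction of labels before synthetic data generation. The binary-label setup and 5% flip rate are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: poison a binary-labeled training set by flipping a small
# fraction of labels before it is fed to a synthetic data generator.
import numpy as np

rng = np.random.default_rng(0)

def flip_labels(y: np.ndarray, rate: float = 0.05) -> np.ndarray:
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # flip selected binary labels
    return y_poisoned

y = rng.integers(0, 2, size=1000)
y_poisoned = flip_labels(y, rate=0.05)
print(f"labels changed: {(y != y_poisoned).mean():.1%}")
```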
|
41d16595b92381afbf1d52fb5ecd503052348b52ee22d7e5b02c2bbc9165f3b5
|
2026-01-07T00:00:00-05:00
|
Parameter-Robust MPPI for Safe Online Learning of Unknown Parameters
|
arXiv:2601.02948v1 Announce Type: new Abstract: Robots deployed in dynamic environments must remain safe even when key physical parameters are uncertain or change over time. We propose Parameter-Robust Model Predictive Path Integral (PRMPPI) control, a framework that integrates online parameter learning with probabilistic safety constraints. PRMPPI maintains a particle-based belief over parameters via Stein Variational Gradient Descent, evaluates safety constraints using Conformal Prediction, and optimizes both a nominal performance-driven and a safety-focused backup trajectory in parallel. This yields a controller that is cautious at first, improves performance as parameters are learned, and ensures safety throughout. Simulation and hardware experiments demonstrate higher success rates, lower tracking error, and more accurate parameter estimates than baselines.
|
https://arxiv.org/abs/2601.02948
|
Academic Papers
|
svg
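Since PRMPPI extends MPPI, a minimal sketch of the vanilla MPPI update it builds on may help: sample perturbed control sequences, roll them out through a dynamics model, and re-weight them exponentially by cost. The toy double-integrator dynamics and all parameters are illustrative; the paper's particle belief, conformal safety constraints, and backup trajectory are omitted.

```python
# Minimal MPPI sketch: exponentially weighted averaging of sampled control
# perturbations, with a toy 1D double-integrator as the dynamics model.
import numpy as np

rng = np.random.default_rng(1)
H, K, LAM = 20, 256, 1.0          # horizon, number of samples, temperature

def dynamics(x, u):               # toy double integrator, dt = 0.1
    pos, vel = x
    return np.array([pos + 0.1 * vel, vel + 0.1 * u])

def rollout_cost(x0, controls, target=1.0):
    x, cost = x0, 0.0
    for u in controls:
        x = dynamics(x, u)
        cost += (x[0] - target) ** 2 + 0.01 * u ** 2
    return cost

u_nom = np.zeros(H)
x0 = np.array([0.0, 0.0])
noise = rng.normal(scale=0.5, size=(K, H))
costs = np.array([rollout_cost(x0, u_nom + eps) for eps in noise])
weights = np.exp(-(costs - costs.min()) / LAM)
weights /= weights.sum()
u_nom = u_nom + weights @ noise   # MPPI information-theoretic update
print(f"first control to execute: {u_nom[0]:.3f}")
```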