| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| f833ce22f03b29aaa33a681f751fd6270f51fbbe5e089a7b9a937133428006b1 | 2026-01-21T00:00:00-05:00 | Reasoning or Pattern Matching? Probing Large Vision-Language Models with Visual Puzzles | arXiv:2601.13705v1 Announce Type: new Abstract: Puzzles have long served as compact and revealing probes of human cognition, isolating abstraction, rule discovery, and systematic reasoning with minimal reliance on prior knowledge. Leveraging these properties, visual puzzles have recently emerged as a powerful diagnostic tool for evaluating the reasoning abilities of Large Vision-Language Models (LVLMs), offering controlled, verifiable alternatives to open-ended multimodal benchmarks. This survey provides a unified perspective on visual puzzle reasoning in LVLMs. We frame visual puzzles through a common abstraction and organize existing benchmarks by the reasoning mechanisms they target (inductive, analogical, algorithmic, deductive, and geometric/spatial), thereby linking puzzle design to the cognitive operations required for solving. Synthesizing empirical evidence across these categories, we identify consistent limitations in current models, including brittle generalization, tight entanglement between perception and reasoning, and a persistent gap between fluent explanations and faithful execution. By framing visual puzzles as diagnostic instruments rather than task formats, this survey elaborates on the state of LVLM reasoning and outlines key directions for future benchmarks and reasoning-aware multimodal systems. | https://arxiv.org/abs/2601.13705 | Academic Papers | svg |
| 42ba428d5dcd267ebd086ff9f7662a248ad15000a9b5814bc33b6ba15328a43c | 2026-01-21T00:00:00-05:00 | ParkingTwin: Training-Free Streaming 3D Reconstruction for Parking-Lot Digital Twins | arXiv:2601.13706v1 Announce Type: new Abstract: High-fidelity parking-lot digital twins provide essential priors for path planning, collision checking, and perception validation in Automated Valet Parking (AVP). Yet robot-oriented reconstruction faces a trilemma: sparse forward-facing views cause weak parallax and ill-posed geometry; dynamic occlusions and extreme lighting hinder stable texture fusion; and neural rendering typically needs expensive offline optimization, violating edge-side streaming constraints. We propose ParkingTwin, a training-free, lightweight system for online streaming 3D reconstruction. First, OSM-prior-driven geometric construction uses OpenStreetMap semantic topology to directly generate a metric-consistent TSDF, replacing blind geometric search with deterministic mapping and avoiding costly optimization. Second, geometry-aware dynamic filtering employs a quad-modal constraint field (normal/height/depth consistency) to reject moving vehicles and transient occlusions in real time. Third, illumination-robust fusion in CIELAB decouples luminance and chromaticity via adaptive L-channel weighting and depth-gradient suppression, reducing seams under abrupt lighting changes. ParkingTwin runs at 30+ FPS on an entry-level GTX 1660. On a 68,000 m^2 real-world dataset, it achieves SSIM 0.87 (+16.0%), delivers about 15x end-to-end speedup, and reduces GPU memory by 83.3% compared with state-of-the-art 3D Gaussian Splatting (3DGS) that typically requires high-end GPUs (RTX 4090D). The system outputs explicit triangle meshes compatible with Unity/Unreal digital-twin pipelines. Project page: https://mihoutao-liu.github.io/ParkingTwin/ | https://arxiv.org/abs/2601.13706 | Academic Papers | svg |
| a293c45ea8111adde72c5b8d82f257b7c3773747b65b9ca0f4b456f5e5254ec1 | 2026-01-21T00:00:00-05:00 | Attention-space Contrastive Guidance for Efficient Hallucination Mitigation in LVLMs | arXiv:2601.13707v1 Announce Type: new Abstract: Hallucinations in large vision-language models (LVLMs) often arise when language priors dominate over visual evidence, causing object misidentification and visually inconsistent descriptions. We address this issue by framing hallucination mitigation as contrastive guidance, steering generation toward visually grounded and semantically faithful text. This approach regulates the model's internal behavior by reducing over-dependence on language priors and contrasting visually grounded with language-only representations. We propose Attention-space Contrastive Guidance (ACG), a single-pass mechanism that operates within self-attention layers to construct both vision-language and language-only attention paths in a single forward computation. This integration enables computationally efficient guidance directly embedded in the model's representation contextualization. To correct approximation bias introduced by the single-pass formulation, we further apply an orthogonalized correction that removes components aligned with the language-only path, selectively amplifying visual contributions. Experiments on the CHAIR and POPE benchmarks show that ACG achieves state-of-the-art faithfulness and caption quality while significantly reducing computational cost. Our method establishes a principled and efficient alternative, reducing latency by up to 2x compared to prior contrastive decoding methods that require multiple forward passes. | https://arxiv.org/abs/2601.13707 | Academic Papers | svg |
| 89eccb2d8e6f0e6c50d917e6e9c82db3f86ce5d79381be5cca53bc175fa8cce5 | 2026-01-21T00:00:00-05:00 | Hidden in Plain Text: Measuring LLM Deception Quality Against Human Baselines Using Social Deduction Games | arXiv:2601.13709v1 Announce Type: new Abstract: Large Language Model (LLM) agents are increasingly used in many applications, raising concerns about their safety. While previous work has shown that LLMs can deceive in controlled tasks, less is known about their ability to deceive using natural language in social contexts. In this paper, we study deception in the Social Deduction Game (SDG) Mafia, where success is dependent on deceiving others through conversation. Unlike previous SDG studies, we use an asynchronous multi-agent framework which better simulates realistic social contexts. We simulate 35 Mafia games with GPT-4o LLM agents. We then create a Mafia Detector using GPT-4-Turbo to analyze game transcripts without player role information to predict the mafia players. We use prediction accuracy as a surrogate marker for deception quality. We compare this prediction accuracy to that of 28 human games and a random baseline. Results show that the Mafia Detector's mafia prediction accuracy is lower on LLM games than on human games. The result is consistent regardless of the game days and the number of mafias detected. This indicates that LLMs blend in better and thus deceive more effectively. We also release a dataset of LLM Mafia transcripts to support future research. Our findings underscore both the sophistication and risks of LLM deception in social contexts. | https://arxiv.org/abs/2601.13709 | Academic Papers | svg |
| 3007f20520427cb21e4c6a7aba89e18a26e5c97f68e3d6cdd34d1af1537e3197 | 2026-01-21T00:00:00-05:00 | Who Should Have Surgery? A Comparative Study of GenAI vs Supervised ML for CRS Surgical Outcome Prediction | arXiv:2601.13710v1 Announce Type: new Abstract: Artificial intelligence has reshaped medical imaging, yet the use of AI on clinical data for prospective decision support remains limited. We study pre-operative prediction of clinically meaningful improvement in chronic rhinosinusitis (CRS), defining success as a more than 8.9-point reduction in SNOT-22 at 6 months (MCID). In a prospectively collected cohort where all patients underwent surgery, we ask whether models using only pre-operative clinical data could have identified those who would have poor outcomes, i.e. those who should have avoided surgery. We benchmark supervised ML (logistic regression, tree ensembles, and an in-house MLP) against generative AI (ChatGPT, Claude, Gemini, Perplexity), giving each the same structured inputs and constraining outputs to binary recommendations with confidence. Our best ML model (MLP) achieves 85% accuracy with superior calibration and decision-curve net benefit. GenAI models underperform on discrimination and calibration in the zero-shot setting. Notably, GenAI justifications align with clinician heuristics and the MLP's feature importance, repeatedly highlighting baseline SNOT-22, CT/endoscopy severity, polyp phenotype, and psychology/pain comorbidities. We provide a reproducible tabular-to-GenAI evaluation protocol and subgroup analyses. Findings support an ML-first, GenAI-augmented workflow: deploy calibrated ML for primary triage of surgical candidacy, with GenAI as an explainer to enhance transparency and shared decision-making. | https://arxiv.org/abs/2601.13710 | Academic Papers | svg |
| bfe0b698ae04b1e2aeb0674b2e27fe56b08fdf6c3c1361f519ce762b5da29c8e | 2026-01-21T00:00:00-05:00 | GerAV: Towards New Heights in German Authorship Verification using Fine-Tuned LLMs on a New Benchmark | arXiv:2601.13711v1 Announce Type: new Abstract: Authorship verification (AV) is the task of determining whether two texts were written by the same author and has been studied extensively, predominantly for English data. In contrast, large-scale benchmarks and systematic evaluations for other languages remain scarce. We address this gap by introducing GerAV, a comprehensive benchmark for German AV comprising over 600k labeled text pairs. GerAV is built from Twitter and Reddit data, with the Reddit part further divided into in-domain and cross-domain message-based subsets, as well as a profile-based subset. This design enables controlled analysis of the effects of data source, topical domain, and text length. Using the provided training splits, we conduct a systematic evaluation of strong baselines and state-of-the-art models and find that our best approach, a fine-tuned large language model, outperforms recent baselines by up to 0.09 absolute F1 score and surpasses GPT-5 in a zero-shot setting by 0.08. We further observe a trade-off between specialization and generalization: models trained on specific data types perform best under matching conditions but generalize less well across data regimes, a limitation that can be mitigated by combining training sources. Overall, GerAV provides a challenging and versatile benchmark for advancing research on German and cross-domain AV. | https://arxiv.org/abs/2601.13711 | Academic Papers | svg |
| 35ec8e4d53f5cdacb108d431ac2efa97c8271127b03322addba777d1ca896fff | 2026-01-21T00:00:00-05:00 | Nonlinear compressive reduced basis approximation: when Taylor meets Kolmogorov | arXiv:2601.13712v1 Announce Type: new Abstract: This paper investigates model reduction methods for efficiently approximating the solution of parameter-dependent PDEs with a multi-parameter vector $\vec{\mu} \in \mathbb{R}^p$. In cases where the Kolmogorov $N$-width decays fast enough, it is effective to approximate the solution as a sum of $N$ separable terms, each being the product of a parameter-dependent coefficient and a space-dependent function. This leads to reduced-order models with $N$ degrees of freedom and complexity of order ${\mathcal O}(N^3)$. However, when the $N$-width decays slowly, $N$ must be large to achieve acceptable accuracy, making cubic complexity prohibitive. The linear complexity measure in terms of Kolmogorov width must be replaced by the Gelfand width, with its associated sensing number. Recent nonlinear approaches based on this notion decompose the $N$ coordinates into two groups: $n$ free variables and $\overline{n}$ dependent variables, where the latter are nonlinear functions of the former ($N = n + \overline{n}$). Several works have focused on cases where these $\overline{n}$ functions are homogeneous quadratic forms of the $n$ variables, with optimization strategies for choosing $n$ given a target accuracy. A rigorous analysis of the local sensing number is carried out, showing that $n = p$ is optimal and appropriate, at least locally, around a reference point. In practical scenarios involving wide parameter ranges, the condition $p \le n \le p + k$ (with $k$ small) is valid and more robust from continuity arguments. Additionally, the assumption of a quadratic mapping, while justified in a local sense, becomes insufficient. More expressive nonlinear mappings, including those using machine learning, become necessary. This work contributes a theoretical foundation for such strategies and highlights the need for further investigations to push back the Kolmogorov Barrier. | https://arxiv.org/abs/2601.13712 | Academic Papers | svg |
| 13d536c2b11af70f20fe0648d2977be8a852be6b0e22b36fab03b3e39ee4e215 | 2026-01-21T00:00:00-05:00 | SWE-Tester: Training Open-Source LLMs for Issue Reproduction in Real-World Repositories | arXiv:2601.13713v1 Announce Type: new Abstract: Software testing is crucial for ensuring the correctness and reliability of software systems. Automated generation of issue reproduction tests from natural language issue descriptions enhances developer productivity by simplifying root cause analysis, promotes test-driven development ("test first, write code later"), and can be used for improving the effectiveness of automated issue resolution systems like coding agents. Existing methods proposed for this task predominantly rely on closed-source LLMs, with limited exploration of open models. To address this, we propose SWE-Tester, a novel pipeline for training open-source LLMs to generate issue reproduction tests. First, we curate a high-quality training dataset of 41K instances from 2.6K open-source GitHub repositories and use it to train LLMs of varying sizes and families. The fine-tuned models achieve absolute improvements of up to 10% in success rate and 21% in change coverage on SWT-Bench Verified. Further analysis shows consistent improvements with increased inference-time compute, more data, and larger models. These results highlight the effectiveness of our framework for advancing open-source LLMs in this domain. | https://arxiv.org/abs/2601.13713 | Academic Papers | svg |
| 85a6bfce76235d96f7589ef887a551500f6a80c33f74679ff10afcb0e4413d5a | 2026-01-21T00:00:00-05:00 | MVGD-Net: A Novel Motion-aware Video Glass Surface Detection Network | arXiv:2601.13715v1 Announce Type: new Abstract: Glass surfaces, ubiquitous in both daily life and professional environments, present a potential threat to vision-based systems, such as robot and drone navigation. To address this challenge, most recent studies have shown significant interest in Video Glass Surface Detection (VGSD). We observe that objects in the reflection (or transmission) layer appear farther from the glass surfaces. Consequently, in video motion scenarios, the notable reflected (or transmitted) objects on the glass surface move slower than objects in non-glass regions within the same spatial plane, and this motion inconsistency can effectively reveal the presence of glass surfaces. Based on this observation, we propose a novel network, named MVGD-Net, for detecting glass surfaces in videos by leveraging motion inconsistency cues. Our MVGD-Net features three novel modules: the Cross-scale Multimodal Fusion Module (CMFM), which integrates extracted spatial features and estimated optical flow maps, and the History Guided Attention Module (HGAM) and Temporal Cross Attention Module (TCAM), both of which further enhance temporal features. A Temporal-Spatial Decoder (TSD) is also introduced to fuse the spatial and temporal features for generating the glass region mask. Furthermore, for learning our network, we also propose a large-scale dataset, which comprises 312 diverse glass scenarios with a total of 19,268 frames. Extensive experiments demonstrate that our MVGD-Net outperforms relevant state-of-the-art methods. | https://arxiv.org/abs/2601.13715 | Academic Papers | svg |
| 1fdf70202b36b71deedeb437dbe354918e2819ab3928a901991893651b8d89ee | 2026-01-21T00:00:00-05:00 | Simulated Ignorance Fails: A Systematic Study of LLM Behaviors on Forecasting Problems Before Model Knowledge Cutoff | arXiv:2601.13717v1 Announce Type: new Abstract: Evaluating LLM forecasting capabilities is constrained by a fundamental tension: prospective evaluation offers methodological rigor but prohibitive latency, while retrospective forecasting (RF), evaluating on already-resolved events, faces rapidly shrinking clean evaluation data as SOTA models possess increasingly recent knowledge cutoffs. Simulated Ignorance (SI), prompting models to suppress pre-cutoff knowledge, has emerged as a potential solution. We provide the first systematic test of whether SI can approximate True Ignorance (TI). Across 477 competition-level questions and 9 models, we find that SI fails systematically: (1) cutoff instructions leave a 52% performance gap between SI and TI; (2) chain-of-thought reasoning fails to suppress prior knowledge, even when reasoning traces contain no explicit post-cutoff references; (3) reasoning-optimized models exhibit worse SI fidelity despite superior reasoning trace quality. These findings demonstrate that prompts cannot reliably "rewind" model knowledge. We conclude that RF on pre-cutoff events is methodologically flawed; we recommend against using SI-based retrospective setups to benchmark forecasting capabilities. | https://arxiv.org/abs/2601.13717 | Academic Papers | svg |
| 073d43adc374153f20473308ba5cc52b63c904e638d31ff8eddd90a3d2727bf2 | 2026-01-21T00:00:00-05:00 | Hierarchical Long Video Understanding with Audiovisual Entity Cohesion and Agentic Search | arXiv:2601.13719v1 Announce Type: new Abstract: Long video understanding presents significant challenges for vision-language models due to extremely long context windows. Existing solutions relying on naive chunking strategies with retrieval-augmented generation typically suffer from information fragmentation and a loss of global coherence. We present HAVEN, a unified framework for long-video understanding that enables coherent and comprehensive reasoning by integrating audiovisual entity cohesion and hierarchical video indexing with agentic search. First, we preserve semantic consistency by integrating entity-level representations across visual and auditory streams, while organizing content into a structured hierarchy spanning global summary, scene, segment, and entity levels. Then we employ an agentic search mechanism to enable dynamic retrieval and reasoning across these layers, facilitating coherent narrative reconstruction and fine-grained entity tracking. Extensive experiments demonstrate that our method achieves good temporal coherence, entity consistency, and retrieval efficiency, establishing a new state-of-the-art with an overall accuracy of 84.1% on LVBench. Notably, it achieves outstanding performance in the challenging reasoning category, reaching 80.1%. These results highlight the effectiveness of structured, multimodal reasoning for comprehensive and context-consistent understanding of long-form videos. | https://arxiv.org/abs/2601.13719 | Academic Papers | svg |
| 6575740b3469a4fe671498b390ebdf8ef8a924c8f107af3a182bb789309a99b0 | 2026-01-21T00:00:00-05:00 | OP-Bench: Benchmarking Over-Personalization for Memory-Augmented Personalized Conversational Agents | arXiv:2601.13722v1 Announce Type: new Abstract: Memory-augmented conversational agents enable personalized interactions using long-term user memory and have gained substantial traction. However, existing benchmarks primarily focus on whether agents can recall and apply user information, while overlooking whether such personalization is used appropriately. In fact, agents may overuse personal information, producing responses that feel forced, intrusive, or socially inappropriate to users. We refer to this issue as *over-personalization*. In this work, we formalize over-personalization into three types: Irrelevance, Repetition, and Sycophancy, and introduce **OP-Bench**, a benchmark of 1,700 verified instances constructed from long-horizon dialogue histories. Using **OP-Bench**, we evaluate multiple large language models and memory-augmentation methods, and find that over-personalization is widespread when memory is introduced. Further analysis reveals that agents tend to retrieve and over-attend to user memories even when unnecessary. To address this issue, we propose **Self-ReCheck**, a lightweight, model-agnostic memory filtering mechanism that mitigates over-personalization while preserving personalization performance. Our work takes an initial step toward more controllable and appropriate personalization in memory-augmented dialogue systems. | https://arxiv.org/abs/2601.13722 | Academic Papers | svg |
| 7274dc626bb88e05f7b1a19d89dd5987db92585a4d583db46f300a441f3fc285 | 2026-01-21T00:00:00-05:00 | Facial Spatiotemporal Graphs: Leveraging the 3D Facial Surface for Remote Physiological Measurement | arXiv:2601.13724v1 Announce Type: new Abstract: Facial remote photoplethysmography (rPPG) methods estimate physiological signals by modeling subtle color changes on the 3D facial surface over time. However, existing methods fail to explicitly align their receptive fields with the 3D facial surface, the spatial support of the rPPG signal. To address this, we propose the Facial Spatiotemporal Graph (STGraph), a novel representation that encodes facial color and structure using 3D facial mesh sequences, enabling surface-aligned spatiotemporal processing. We introduce MeshPhys, a lightweight spatiotemporal graph convolutional network that operates on the STGraph to estimate physiological signals. Across four benchmark datasets, MeshPhys achieves state-of-the-art or competitive performance in both intra- and cross-dataset settings. Ablation studies show that constraining the model's receptive field to the facial surface acts as a strong structural prior, and that surface-aligned, 3D-aware node features are critical for robustly encoding facial surface color. Together, the STGraph and MeshPhys constitute a novel, principled modeling paradigm for facial rPPG, enabling robust, interpretable, and generalizable estimation. Code is available at https://samcantrill.github.io/facial-stgraph-rppg/ . | https://arxiv.org/abs/2601.13724 | Academic Papers | svg |
| b9de790a37261c027d979503c6bec89ac08d19414f4466c9769bcf825e8215a0 | 2026-01-21T00:00:00-05:00 | Foundational VeriFast: Pragmatic Certification of Verification Tool Results through Hinted Mirroring | arXiv:2601.13727v1 Announce Type: new Abstract: VeriFast is a leading tool for the modular formal verification of correctness properties of single-threaded and multi-threaded C and Rust programs. It verifies a program by symbolically executing each function in isolation, exploiting user-annotated preconditions, postconditions, and loop invariants written in a form of separation logic, and using a separation logic-based symbolic representation of memory. However, the tool itself, written in roughly 30K lines of OCaml code, has not been formally verified. Therefore, bugs in the tool could cause it to falsely report the correctness of the input program. We here report on an early result extending VeriFast to emit, upon successful verification of a Rust program, a Rocq proof script that proves correctness of the program with respect to a Rocq-encoded axiomatic semantics of Rust. This significantly enhances VeriFast's applicability in safety-critical domains. We apply hinted mirroring: we record key information from VeriFast's symbolic execution run, and use it to direct a replay of the run in Rocq. | https://arxiv.org/abs/2601.13727 | Academic Papers | svg |
| 29e43466f179148ba12b228f9c8cf13e22037e0e61a178106517ac6eb2a5a95e | 2026-01-21T00:00:00-05:00 | On Temperature-Constrained Non-Deterministic Machine Translation: Potential and Evaluation | arXiv:2601.13729v1 Announce Type: new Abstract: In recent years, the non-deterministic properties of language models have garnered considerable attention and have shown a significant influence on real-world applications. However, such properties remain under-explored in machine translation (MT), a complex, non-deterministic NLP task. In this study, we systematically evaluate modern MT systems and identify temperature-constrained Non-Deterministic MT (ND-MT) as a distinct phenomenon. Additionally, we demonstrate that ND-MT exhibits significant potential in addressing the multi-modality issue that has long challenged MT research and provides higher-quality candidates than Deterministic MT (D-MT) under temperature constraints. However, ND-MT introduces new challenges in evaluating system performance. Specifically, the evaluation framework designed for D-MT fails to yield consistent evaluation results when applied to ND-MT. We further investigate this emerging challenge by evaluating five state-of-the-art ND-MT systems across three open datasets using both lexical-based and semantic-based metrics at varying sampling sizes. The results reveal a Buckets effect across these systems: the lowest-quality candidate generated by ND-MT consistently determines the overall system ranking across different sampling sizes for all reasonable metrics. Furthermore, we propose the ExpectoSample strategy to automatically assess the reliability of evaluation metrics for selecting robust ND-MT. | https://arxiv.org/abs/2601.13729 | Academic Papers | svg |
| 527634d516b692f02973b04651b431bb5c3aaa793136e5a58b63636ffed7e8bd | 2026-01-21T00:00:00-05:00 | Breaking the Data Barrier in Learning Symbolic Computation: A Case Study on Variable Ordering Suggestion for Cylindrical Algebraic Decomposition | arXiv:2601.13731v1 Announce Type: new Abstract: Symbolic computation, powered by modern computer algebra systems, has important applications in mathematical reasoning through exact deep computations. The efficiency of symbolic computation is largely constrained by such deep computations in high dimension. This creates a fundamental barrier on labelled data acquisition if leveraging supervised deep learning to accelerate symbolic computation. Cylindrical algebraic decomposition (CAD) is a pillar symbolic computation method for reasoning with first-order logic formulas over reals with many applications in formal verification and automatic theorem proving. Variable orderings have a huge impact on its efficiency. Impeded by the difficulty to acquire abundant labelled data, existing learning-based approaches are only competitive with the best expert-based heuristics. In this work, we address this problem by designing a series of intimately connected tasks for which a large amount of annotated data can be easily obtained. We pre-train a Transformer model with these data and then fine-tune it on the datasets for CAD ordering. Experiments on publicly available CAD ordering datasets show that on average the orderings predicted by the new model are significantly better than those suggested by the best heuristic methods. | https://arxiv.org/abs/2601.13731 | Academic Papers | svg |
| ea6cffc80b537b06bd3bdd3f167c389774d283039d33d581da7222010584dd52 | 2026-01-21T00:00:00-05:00 | SUNSET -- A Sensor-fUsioN based semantic SegmEnTation exemplar for ROS-based self-adaptation | arXiv:2601.13732v1 Announce Type: new Abstract: The fact that robots are getting deployed more often in dynamic environments, together with the increasing complexity of their software systems, raises the need for self-adaptive approaches. In these environments robotic software systems increasingly operate amid (1) uncertainties, where symptoms are easy to observe but root causes are ambiguous, or (2) multiple uncertainties appearing concurrently. We present SUNSET, a ROS2-based exemplar that enables rigorous, repeatable evaluation of architecture-based self-adaptation in such conditions. It implements a sensor fusion semantic-segmentation pipeline driven by a trained Machine Learning (ML) model whose input preprocessing can be perturbed to induce realistic performance degradations. The exemplar exposes five observable symptoms, each of which can be caused by different root causes, and supports concurrent uncertainties spanning self-healing and self-optimisation. SUNSET includes the segmentation pipeline, a trained ML model, uncertainty-injection scripts, a baseline controller, and step-by-step integration and evaluation documentation to facilitate reproducible studies and fair comparison. | https://arxiv.org/abs/2601.13732 | Academic Papers | svg |
| 9b4bda954c6f7053fffc5c21acb66f19bb8ba154267c9a575190d13976990ffa | 2026-01-21T00:00:00-05:00 | Towards robust long-context understanding of large language model via active recap learning | arXiv:2601.13734v1 Announce Type: new Abstract: In this paper, we propose active recap learning (ARL), a framework for enhancing large language models (LLMs) in understanding long contexts. ARL enables models to revisit and summarize earlier content through targeted sequence construction during continued pretraining and retrospective summarization at inference. First, we identify key tokens in a prepared long context based on loss gaps between long and short forward contexts and find the most relevant preceding paragraphs, then summarize them using an LLM. Second, ARL equips models with the ability to autonomously generate and utilize these retrospective summaries during inference, thereby establishing a recursive memory mechanism across paragraphs. Experimental results show substantial gains, with ARL achieving a 26.8% improvement on RULER and a 9.44% improvement on LongBench. Overall, ARL offers a simple yet effective continued pretraining-based approach to strengthen long-context understanding, advancing scalable memory augmentation in LLMs. | https://arxiv.org/abs/2601.13734 | Academic Papers | svg |
| 52d622c9d1aab7b89c3d8108d9b566086b005cf6e1d5f6288ed66a08aff39fe0 | 2026-01-21T00:00:00-05:00 | Reasoning or Fluency? Dissecting Probabilistic Confidence in Best-of-N Selection | arXiv:2601.13735v1 Announce Type: new Abstract: Probabilistic confidence metrics are increasingly adopted as proxies for reasoning quality in Best-of-N selection, under the assumption that higher confidence reflects higher reasoning fidelity. In this work, we challenge this assumption by investigating whether these metrics truly capture inter-step causal dependencies necessary for valid reasoning. We introduce three classes of inter-step causality perturbations that systematically disrupt dependencies between reasoning steps while preserving local fluency. Surprisingly, across diverse model families and reasoning benchmarks, we find that selection accuracy degrades only marginally under these disruptions. Even severe interventions, such as applying hard attention masks that directly prevent the model from attending to prior reasoning steps, do not substantially reduce selection performance. These findings provide strong evidence that current probabilistic metrics are largely insensitive to logical structure, and primarily capture surface-level fluency or in-distribution priors instead. Motivated by this gap, we propose a contrastive causality metric that explicitly isolates inter-step causal dependencies, and demonstrate that it yields more faithful output selection than existing probability-based approaches. | https://arxiv.org/abs/2601.13735 | Academic Papers | svg |
| 0c9302e93f8fab67669a5affd90004ea4ec83acc6e37a57ee30c8ea3a8eb0fa1 | 2026-01-21T00:00:00-05:00 | RIM Hand: A Robotic Hand with an Accurate Carpometacarpal Joint and Nitinol-Supported Skeletal Structure | arXiv:2601.13737v1 Announce Type: new Abstract: This paper presents the flexible RIM Hand, a biomimetic robotic hand that precisely replicates the carpometacarpal (CMC) joints and employs superelastic Nitinol wires throughout its skeletal framework. By modeling the full carpal-to-metacarpal anatomy, the design enables realistic palm deformation through tendon-driven fingers, enhances joint restoration, and supports the skeletal structure with Nitinol-based dorsal extensors. A flexible silicone skin further increases contact friction and contact area, enabling stable grasps for diverse objects. Experiments show that the palm can deform up to 28%, matching human hand flexibility, while achieving more than twice the payload capacity and three times the contact area compared to a rigid palm design. The RIM Hand thus offers improved dexterity, compliance, and anthropomorphism, making it promising for prosthetic and service-robot applications. | https://arxiv.org/abs/2601.13737 | Academic Papers | svg |
b1f2d2499cf0076df18b4353731fc936aaa3765b84d45239323893d521bba514
|
2026-01-21T00:00:00-05:00
|
Dimension-First Evaluation of Speech-to-Speech Models with Structured Acoustic Cues
|
arXiv:2601.13742v1 Announce Type: new Abstract: Large Language Model (LLM) judges exhibit strong reasoning capabilities but are limited to textual content. This leaves current automatic Speech-to-Speech (S2S) evaluation methods reliant on opaque and expensive Audio Language Models (ALMs). In this work, we propose TRACE (Textual Reasoning over Audio Cues for Evaluation), a novel framework that enables LLM judges to reason over audio cues to achieve cost-efficient and human-aligned S2S evaluation. To demonstrate the strength of the framework, we first introduce a Human Chain-of-Thought (HCoT) annotation protocol to improve the diagnostic capability of existing judge benchmarks by separating evaluation into explicit dimensions: content (C), voice quality (VQ), and paralinguistics (P). Using this data, TRACE constructs a textual blueprint of inexpensive audio signals and prompts an LLM to render dimension-wise judgments, fusing them into an overall rating via a deterministic policy. TRACE achieves higher agreement with human raters than ALMs and transcript-only LLM judges while being significantly more cost-effective. We will release the HCoT annotations and the TRACE framework to enable scalable and human-aligned S2S evaluation.
|
https://arxiv.org/abs/2601.13742
|
Academic Papers
|
svg
|
76204aaf79c87af12fc28d15402e3c9c29497879e3a2c638b4cfe3727c87e06f
|
2026-01-21T00:00:00-05:00
|
Counterexample Classification against Signal Temporal Logic Specifications
|
arXiv:2601.13743v1 Announce Type: new Abstract: Signal Temporal Logic (STL) has been widely adopted as a specification language for specifying desirable behaviors of hybrid systems. By monitoring a given STL specification, we can detect the executions that violate it, which are often referred to as counterexamples. In practice, these counterexamples may arise from different causes and thus are relevant to different system defects. To effectively address this, we need a proper criterion for classifying these counterexamples, by which we can comprehend the possible violation patterns and the distributions of these counterexamples with respect to the patterns. In this paper, we propose a classification criterion by using parametric signal temporal logic (PSTL) to represent each class. Due to this formalism, identifying the classes of a counterexample requires finding proper parameter values of PSTL that enable a class to include the counterexample. To improve the efficiency of class identification, we further derive an inclusion relation between different classes, and then propose a binary search-like approach over it that significantly prunes the classes needed to query. We implement a prototype tool and experimentally evaluate its effectiveness on two widely-studied systems.
|
https://arxiv.org/abs/2601.13743
|
Academic Papers
|
svg
|
0deaeed6da9d769792e622b1f953c274bfa230e508c9d2f5ec86bd57e2bc06c2
|
2026-01-21T00:00:00-05:00
|
Variational Dual-path Attention Network for CSI-Based Gesture Recognition
|
arXiv:2601.13745v1 Announce Type: new Abstract: Wi-Fi gesture recognition based on Channel State Information (CSI) is challenged by high-dimensional noise and resource constraints on edge devices. Prevailing end-to-end models tightly couple feature extraction with classification, overlooking the inherent time-frequency sparsity of CSI and leading to redundancy and poor generalization. To address this, this paper proposes a lightweight feature preprocessing module--the Variational Dual-path Attention Network (VDAN). It performs structured feature refinement through frequency-domain filtering and temporal detection. Variational inference is introduced to model the uncertainty in attention weights, thereby enhancing robustness to noise. The design principles of the module are explained from the perspectives of the information bottleneck and regularization. Experiments on a public dataset demonstrate that the learned attention weights align with the physical sparse characteristics of CSI, verifying its interpretability. This work provides an efficient and explainable front-end processing solution for resource-constrained wireless sensing systems.
|
https://arxiv.org/abs/2601.13745
|
Academic Papers
|
svg
|
775a99e8a664c737008b084c6094b82a00635124f7f7428b7a632587dd36f1d9
|
2026-01-21T00:00:00-05:00
|
EEG-Titans: Long-Horizon Seizure Forecasting via Dual-Branch Attention and Neural Memory
|
arXiv:2601.13748v1 Announce Type: new Abstract: Accurate epileptic seizure prediction from electroencephalography (EEG) remains challenging because pre-ictal dynamics may span long time horizons while clinically relevant signatures can be subtle and transient. Many deep learning models face a persistent trade-off between capturing local spatiotemporal patterns and maintaining informative long-range context when operating on ultra-long sequences. We propose EEG-Titans, a dual-branch architecture that incorporates a modern neural memory mechanism for long-context modeling. The model combines sliding-window attention to capture short-term anomalies with a recurrent memory pathway that summarizes slower, progressive trends over time. On the CHB-MIT scalp EEG dataset, evaluated under a chronological holdout protocol, EEG-Titans achieves 99.46% average segment-level sensitivity across 18 subjects. We further analyze safety-first operating points on artifact-prone recordings and show that a hierarchical context strategy extending the receptive field for high-noise subjects can markedly reduce false alarms (down to 0.00 FPR/h in an extreme outlier) without sacrificing sensitivity. These results indicate that memory-augmented long-context modeling can provide robust seizure forecasting under clinically constrained evaluation.
|
https://arxiv.org/abs/2601.13748
|
Academic Papers
|
svg
|
f26fe739e765acdbd060284bae529a08a0ae3a5788accc1783075b1c6c1f6632
|
2026-01-21T00:00:00-05:00
|
Pro-AI Bias in Large Language Models
|
arXiv:2601.13749v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly employed for decision-support across multiple domains. We investigate whether these models display a systematic preferential bias in favor of artificial intelligence (AI) itself. Across three complementary experiments, we find consistent evidence of pro-AI bias. First, we show that LLMs disproportionately recommend AI-related options in response to diverse advice-seeking queries, with proprietary models doing so almost deterministically. Second, we demonstrate that models systematically overestimate salaries for AI-related jobs relative to closely matched non-AI jobs, with proprietary models overestimating AI salaries by 10 percentage points more. Finally, probing internal representations of open-weight models reveals that ``Artificial Intelligence'' exhibits the highest similarity to generic prompts for academic fields under positive, negative, and neutral framings alike, indicating valence-invariant representational centrality. These patterns suggest that LLM-generated advice and valuation can systematically skew choices and perceptions in high-stakes decisions.
|
https://arxiv.org/abs/2601.13749
|
Academic Papers
|
svg
|
b8dbf66c56a1c74fbdffcea1c185d94f04cf60d7a6116ff53f1fcb1631f71044
|
2026-01-21T00:00:00-05:00
|
HiT: History-Injection Transformers for Onboard Continuous Flood Change Detection
|
arXiv:2601.13751v1 Announce Type: new Abstract: Natural disaster monitoring through continuous satellite observation requires processing multi-temporal data under strict operational constraints. This paper addresses flood detection, a critical application for hazard management, by developing an onboard change detection system that operates within the memory and computational limits of small satellites. We propose a History Injection mechanism for Transformer models (HiT) that maintains historical context from previous observations while reducing data storage by over 99\% of the original image size. Moreover, testing on the STTORM-CD flood dataset confirms that the HiT mechanism within the Prithvi-tiny foundation model maintains detection accuracy compared to the bitemporal baseline. The proposed HiT-Prithvi model achieved 43 FPS on a Jetson Orin Nano, a representative onboard hardware platform used in nanosats. This work establishes a practical framework for satellite-based continuous monitoring of natural disasters, supporting real-time hazard assessment without dependency on ground-based processing infrastructure. The architecture as well as model checkpoints are available at https://github.com/zaitra/HiT-change-detection
|
https://arxiv.org/abs/2601.13751
|
Academic Papers
|
svg
|
7f1a91859740e7028ad09adc5c0ab0b5d523d5e24fd8d271eb262696cbca0084
|
2026-01-21T00:00:00-05:00
|
Finding RELIEF: Shaping Reasoning Behavior without Reasoning Supervision via Belief Engineering
|
arXiv:2601.13752v1 Announce Type: new Abstract: Large reasoning models (LRMs) have achieved remarkable success in complex problem-solving, yet they often suffer from computational redundancy or reasoning unfaithfulness. Current methods for shaping LRM behavior typically rely on reinforcement learning or fine-tuning with gold-standard reasoning traces, a paradigm that is both computationally expensive and difficult to scale. In this paper, we reveal that LRMs possess latent \textit{reasoning beliefs} that internally track their own reasoning traits, which can be captured through simple logit probing. Building upon this insight, we propose Reasoning Belief Engineering (RELIEF), a simple yet effective framework that shapes LRM behavior by aligning the model's self-concept with a target belief blueprint. Crucially, RELIEF completely bypasses the need for reasoning-trace supervision. It internalizes desired traits by fine-tuning on synthesized, self-reflective question-answering pairs that affirm the target belief. Extensive experiments on efficiency and faithfulness tasks demonstrate that RELIEF matches or outperforms behavior-supervised and preference-based baselines while requiring lower training costs. Further analysis validates that shifting a model's reasoning belief effectively shapes its actual behavior.
|
https://arxiv.org/abs/2601.13752
|
Academic Papers
|
svg
|
4a11b758e8edb10a42d2892e89c9c384d60aaecf3d5897612803541f67459189
|
2026-01-21T00:00:00-05:00
|
Research on Adaptive Inertial Control in Synchronization Systems: Based on Variational Optimization Methods and Their Applications in the Stability of Complex Networks
|
arXiv:2601.13753v1 Announce Type: new Abstract: To address the core problem that a fixed inertia coefficient cannot balance transient disturbance suppression and long-term stability in complex network synchronization systems, an adaptive inertia control strategy based on variational optimization is proposed. Taking the Kuramoto model with inertia as the research carrier, the analytical expression of the time-varying inertia coefficient M(t) is rigorously derived by the functional variational method, and a hierarchical control structure of "benchmark inertia + disturbance feedback" is constructed to achieve the organic unity of minimizing the vulnerability performance function H(T) and satisfying stability constraints. A multimodal decoupling control strategy based on Laplacian eigenvector projection is designed to enhance the feedback strength of the dominant mode by eigenvalue weighting, improving the control accuracy and dynamic response speed. Simulation verification is carried out on complex network systems, and the control performance of regular networks (RG), random networks (ER), small-world networks (SW), scale-free networks (SF) and spider webs (SP) under three typical disturbances of pulses, monotonic decays and oscillatory decays is systematically analyzed. The results show that the proposed strategy reduces H(T) of the five networks by 19%-25%, shortens the relaxation time by 15%-24%, and the real parts of all system eigenvalues are less than -0.25s^-1, meeting the asymptotic stability criterion. This study provides a new theoretical framework and engineering implementation scheme for the stability control of complex network synchronization systems, which can be widely applied to fields such as power grids, communication networks, and neural networks.
|
https://arxiv.org/abs/2601.13753
|
Academic Papers
|
svg
|
b0b4723c2b35799546a0e617ffa5e42ca8034e53216dc35ccc1a184e0efbaefb
|
2026-01-21T00:00:00-05:00
|
On Autopilot? An Empirical Study of Human-AI Teaming and Review Practices in Open Source
|
arXiv:2601.13754v1 Announce Type: new Abstract: Large Language Models (LLMs) increasingly automate software engineering tasks. While recent studies highlight the accelerated adoption of ``AI as a teammate'' in Open Source Software (OSS), developer interaction patterns remain under-explored. In this work, we investigated project-level guidelines and developers' interactions with AI-assisted pull requests (PRs) by expanding the AIDev dataset to include finer-grained contributor code ownership and a comparative baseline of human-created PRs. We found that over 67.5\% of AI-co-authored PRs originate from contributors without prior code ownership. Despite this, the majority of repositories lack guidelines for AI-coding agent usage. Notably, we observed a distinct interaction pattern: AI-co-authored PRs are merged significantly faster with minimal feedback. In contrast to human-created PRs where non-owner developers receive the most feedback, AI-co-authored PRs from non-owners receive the least, with approximately 80\% merged without any explicit review. Finally, we discuss implications for developers and researchers.
|
https://arxiv.org/abs/2601.13754
|
Academic Papers
|
svg
|
1caa1ab7ccb0690a2a7b5fd007dd0b11b72a01bc3d82720dff2ca50d7b207f3e
|
2026-01-21T00:00:00-05:00
|
The Limits of Conditional Volatility: Assessing Cryptocurrency VaR under EWMA and IGARCH Models
|
arXiv:2601.13757v1 Announce Type: new Abstract: The application of the standard static Geometric Brownian Motion (GBM) model for cryptocurrency risk management resulted in a systemic failure, evidenced by an 80.67% chance of loss in the 5% value-at-risk benchmark. This study addresses a critical literature gap by comparatively testing three conditional volatility models: the EWMA/IGARCH baseline, an IGARCH model augmented with explicit mean reversion (IGARCH + MR), and a modified EGARCH-style asymmetric shock model within a correlated Monte Carlo VaR framework. Crucially, the analysis is applied specifically to high-beta altcoins (XRP, SOL, ADA), an asset class largely neglected by mainstream GARCH literature. Our results demonstrate that imposing stationarity (IGARCH + MR) drastically underestimates downside risk (5 percent value-at-risk reduced by 50%), while the asymmetric model (Model 3) leads to severe over-penalization. The EWMA/IGARCH baseline, characterized by infinite volatility persistence (alpha + beta = 1), provided the only robust conditional volatility estimate. This finding constitutes a formal rejection of the conventional financial hypotheses of volatility mean reversion and the asymmetric leverage effect in the altcoin asset class, establishing that non-stationary frameworks are a prerequisite for regulatory-grade risk modeling in this domain.
|
https://arxiv.org/abs/2601.13757
|
Academic Papers
|
svg
|
cd3e75e948cd1b6f2b5e550b952ddc1cce4fae69f7e3d6fd65073dfec65c4a13
|
2026-01-21T00:00:00-05:00
|
GOMPSNR: Reflourish the Signal-to-Noise Ratio Metric for Audio Generation Tasks
|
arXiv:2601.13758v1 Announce Type: new Abstract: In the field of audio generation, signal-to-noise ratio (SNR) has long served as an objective metric for evaluating audio quality. Nevertheless, recent studies have shown that SNR and its variants are not always highly correlated with human perception, prompting us to raise the questions: Why does SNR fail in measuring audio quality? And how to improve its reliability as an objective metric? In this paper, we identify the inadequate measurement of phase distance as a pivotal factor and propose to reformulate SNR with specially designed phase-distance terms, yielding an improved metric named GOMPSNR. We further extend the newly proposed formulation to derive two novel categories of loss function, corresponding to magnitude-guided phase refinement and joint magnitude-phase optimization, respectively. Besides, extensive experiments are conducted for an optimal combination of different loss functions. Experimental results on advanced neural vocoders demonstrate that our proposed GOMPSNR exhibits more reliable error measurement than SNR. Meanwhile, our proposed loss functions yield substantial improvements in model performance, and our well-chosen combination of different loss functions further optimizes the overall model capability.
|
https://arxiv.org/abs/2601.13758
|
Academic Papers
|
svg
|
5cd4ad4586a73a43e036a55f63d1377ca8aa248e0a0cb28dca8376dcaf3e90fe
|
2026-01-21T00:00:00-05:00
|
DARC: Decoupled Asymmetric Reasoning Curriculum for LLM Evolution
|
arXiv:2601.13761v1 Announce Type: new Abstract: Self-play with large language models has emerged as a promising paradigm for achieving self-improving artificial intelligence. However, existing self-play frameworks often suffer from optimization instability, due to (i) non-stationary objectives induced by solver-dependent reward feedback for the Questioner, and (ii) bootstrapping errors from self-generated pseudo-labels used to supervise the Solver. To mitigate these challenges, we introduce DARC (Decoupled Asymmetric Reasoning Curriculum), a two-stage framework that stabilizes the self-evolution process. First, we train the Questioner to synthesize difficulty-calibrated questions, conditioned on explicit difficulty levels and external corpora. Second, we train the Solver with an asymmetric self-distillation mechanism, where a document-augmented teacher generates high-quality pseudo-labels to supervise the student Solver that lacks document access. Empirical results demonstrate that DARC is model-agnostic, yielding an average improvement of 10.9 points across nine reasoning benchmarks and three backbone models. Moreover, DARC consistently outperforms all baselines and approaches the performance of fully supervised models without relying on human annotations. The code is available at https://github.com/RUCBM/DARC.
|
https://arxiv.org/abs/2601.13761
|
Academic Papers
|
svg
|
09d3adf34461666f945b328e9a0088fc7be47788d36b1c8046cf7c37f36bc728
|
2026-01-21T00:00:00-05:00
|
TransMode-LLM: Feature-Informed Natural Language Modeling with Domain-Enhanced Prompting for Travel Behavior Modeling
|
arXiv:2601.13763v1 Announce Type: new Abstract: Understanding traveler behavior and accurately predicting travel mode choice are at the heart of transportation planning and policy-making. This study proposes TransMode-LLM, an innovative framework that integrates statistical methods with LLM-based techniques to predict travel modes from travel survey data. The framework operates through three phases: (1) statistical analysis identifies key behavioral features, (2) natural language encoding transforms structured data into contextual descriptions, and (3) LLM adaptation predicts travel mode through multiple learning paradigms including zero-shot and one/few-shot learning and domain-enhanced prompting. We evaluate TransMode-LLM using both general-purpose models (GPT-4o, GPT-4o-mini) and reasoning-focused models (o3-mini, o4-mini) with varying sample sizes on real-world travel survey data. Extensive experimental results demonstrate that the LLM-based approach achieves competitive accuracy compared to state-of-the-art baseline classifiers. Moreover, few-shot learning significantly improves prediction accuracy, with models like o3-mini showing consistent improvements of up to 42.9\% with 5 provided examples. However, domain-enhanced prompting shows divergent effects across LLM architectures. Specifically, it improves performance for general-purpose models, with GPT-4o achieving improvements of 2.27% to 12.50%. However, for reasoning-oriented models (o3-mini, o4-mini), domain knowledge enhancement does not universally improve performance. This study advances the application of LLMs in travel behavior modeling, providing promising and valuable insights for both academic research and transportation policy-making in the future.
|
https://arxiv.org/abs/2601.13763
|
Academic Papers
|
svg
|
618be85f5f48b4b11822faf5a9d8971617d1d0373ac052443f37d1014db59d34
|
2026-01-21T00:00:00-05:00
|
vLinear: A Powerful Linear Model for Multivariate Time Series Forecasting
|
arXiv:2601.13768v1 Announce Type: new Abstract: In this paper, we present \textbf{vLinear}, an effective yet efficient \textbf{linear}-based multivariate time series forecaster featuring two components: the \textbf{v}ecTrans module and the WFMLoss objective. Many state-of-the-art forecasters rely on self-attention or its variants to capture multivariate correlations, typically incurring $\mathcal{O}(N^2)$ computational complexity with respect to the number of variates $N$. To address this, we propose vecTrans, a lightweight module that utilizes a learnable vector to model multivariate correlations, reducing the complexity to $\mathcal{O}(N)$. Notably, vecTrans can be seamlessly integrated into Transformer-based forecasters, delivering up to 5$\times$ inference speedups and consistent performance gains. Furthermore, we introduce WFMLoss (Weighted Flow Matching Loss) as the objective. In contrast to typical \textbf{velocity-oriented} flow matching objectives, we demonstrate that a \textbf{final-series-oriented} formulation yields significantly superior forecasting accuracy. WFMLoss also incorporates path- and horizon-weighted strategies to focus learning on more reliable paths and horizons. Empirically, vLinear achieves state-of-the-art performance across 22 benchmarks and 124 forecasting settings. Moreover, WFMLoss serves as an effective plug-and-play objective, consistently improving existing forecasters. The code is available at https://anonymous.4open.science/r/vLinear.
|
https://arxiv.org/abs/2601.13768
|
Academic Papers
|
svg
|
13b65503045040311af6ed4e2256135ada6ace241ebd38579c1ac5e5a85e6a9a
|
2026-01-21T00:00:00-05:00
|
Interoperable rApp/xApp Control over O-RAN for Mobility-aware Dynamic Spectrum Allocation
|
arXiv:2601.13769v1 Announce Type: new Abstract: Open Radio Access Networks (O-RAN) enable the disaggregation of radio access functions and the deployment of control applications across different timescales. However, designing interoperable control schemes that jointly exploit long-term traffic awareness and near-real-time radio resource optimization remains a challenging problem, particularly under dense multi-cell interference and heterogeneous service demands. This paper proposes an interoperable rApp/xApp-driven dynamic spectrum allocation (DSA) framework for O-RAN, based on a graph-theoretic formulation of physical resource block (PRB) assignment. The proposed architecture leverages a non-real-time radio intelligent controller (Non-RT RIC) rApp to predict aggregated traffic evolution and generate high-level spectrum policies at the minutes timescale, while a near-real-time RIC (Near-RT RIC) xApp constructs a user-centric conflict graph and performs fairness-aware PRB allocation at sub-second timescales. To mitigate persistent user starvation, a conflict-aware modified proportional fair (MPF) scheduling mechanism is applied, enabling controlled interference-free PRB time-sharing. Extensive simulation results demonstrate that the proposed framework significantly improves the PRB assignment success rate (above 90%) and service-share fairness (above 85%) across different channel configurations and user demands, while maintaining architectural separation and rApp/xApp interoperability in accordance with O-RAN principles.
|
https://arxiv.org/abs/2601.13769
|
Academic Papers
|
svg
|
6a98fefb0e8676a4e149dabe095cf7086a6790ac1103c3da462f082c21010502
|
2026-01-21T00:00:00-05:00
|
Look-Ahead-Bench: a Standardized Benchmark of Look-ahead Bias in Point-in-Time LLMs for Finance
|
arXiv:2601.13770v1 Announce Type: new Abstract: We introduce Look-Ahead-Bench, a standardized benchmark measuring look-ahead bias in Point-in-Time (PiT) Large Language Models (LLMs) within realistic and practical financial workflows. Unlike most existing approaches that primarily test inner look-ahead knowledge via Q\&A, our benchmark evaluates model behavior in practical scenarios. To distinguish genuine predictive capability from memorization-based performance, we analyze performance decay across temporally distinct market regimes, incorporating several quantitative baselines to establish performance thresholds. We evaluate prominent open-source LLMs -- Llama 3.1 (8B and 70B) and DeepSeek 3.2 -- against a family of Point-in-Time LLMs (Pitinf-Small, Pitinf-Medium, and frontier-level model Pitinf-Large) from PiT-Inference. Results reveal significant look-ahead bias in standard LLMs, as measured with alpha decay, unlike Pitinf models, which demonstrate improved generalization and reasoning abilities as they scale in size. This work establishes a foundation for the standardized evaluation of temporal bias in financial LLMs and provides a practical framework for identifying models suitable for real-world deployment. Code is available on GitHub: https://github.com/benstaf/lookaheadbench
|
https://arxiv.org/abs/2601.13770
|
Academic Papers
|
svg
|
07bb0488d8ca57c30cbeb28714a71feba45a011179dd20e6e8f94f6d8c645b0c
|
2026-01-21T00:00:00-05:00
|
A Blockchain-Oriented Software Engineering Architecture for Carbon Credit Certification Systems
|
arXiv:2601.13772v1 Announce Type: new Abstract: Carbon credit systems have emerged as a policy tool to incentivize emission reductions and support the transition to clean energy. Reliable carbon-credit certification depends on mechanisms that connect actual, measured renewable-energy production to verifiable emission-reduction records. Although blockchain and IoT technologies have been applied to emission monitoring and trading, existing work offers limited support for certification processes, particularly for small and medium-scale renewable installations. This paper introduces a blockchain-based carbon-credit certification architecture, demonstrated through a 100 kWp photovoltaic case study, that integrates real-time IoT data collection, edge-level aggregation, and secure on-chain storage on a permissioned blockchain with smart contracts. Unlike approaches focused on trading mechanisms, the proposed system aligns with European legislation and voluntary carbon-market standards, clarifying the practical requirements and constraints that apply to photovoltaic operators. The resulting architecture provides a structured pathway for generating verifiable carbon-credit records and supporting third-party verification.
|
https://arxiv.org/abs/2601.13772
|
Academic Papers
|
svg
|
f7317f6fb9bfc4e1ef00f14d4089595d939c8d83aedda36b25e94b93e33e6796
|
2026-01-21T00:00:00-05:00
|
Orthogonium: A Unified, Efficient Library of Orthogonal and 1-Lipschitz Building Blocks
|
arXiv:2601.13776v1 Announce Type: new Abstract: Orthogonal and 1-Lipschitz neural network layers are essential building blocks in robust deep learning architectures, crucial for certified adversarial robustness, stable generative models, and reliable recurrent networks. Despite significant advancements, existing implementations remain fragmented, limited, and computationally demanding. To address these issues, we introduce Orthogonium, a unified, efficient, and comprehensive PyTorch library providing orthogonal and 1-Lipschitz layers. Orthogonium provides access to standard convolution features, including support for strides, dilation, grouping, and transposed convolutions, while maintaining strict mathematical guarantees. Its optimized implementations reduce overhead on large-scale benchmarks such as ImageNet. Moreover, rigorous testing within the library has uncovered critical errors in existing implementations, emphasizing the importance of standardized and reliable tools. Orthogonium thus significantly lowers adoption barriers, enabling scalable experimentation and integration across diverse applications requiring orthogonality and robust Lipschitz constraints. Orthogonium is available at https://github.com/deel-ai/orthogonium.
|
https://arxiv.org/abs/2601.13776
|
Academic Papers
|
svg
|
cb66e41e2607a35b06da382df97d736b7409d1ad8a75cdc3f8d4452463365f63
|
2026-01-21T00:00:00-05:00
|
Sample Efficient Learning of Body-Environment Interaction of an Under-Actuated System
|
arXiv:2601.13777v1 Announce Type: new Abstract: Geometric mechanics provides valuable insights into how biological and robotic systems use changes in shape to move by mechanically interacting with their environment. In high-friction environments, it shows that the entire interaction is captured by the ``motility map''. Here we compare methods for learning the motility map from motion tracking data of a physical robot created specifically to test these methods by having under-actuated degrees of freedom and a hard-to-model interaction with its substrate. We compared four modeling approaches in terms of their ability to predict body velocity from shape change within the same gait, across gaits, and across speeds. Our results show a trade-off between simpler methods, which are superior on small training datasets, and more sophisticated methods, which are superior when more training data is available.
|
https://arxiv.org/abs/2601.13777
|
Academic Papers
|
svg
|
2a88dafa0f59e479fc86d5938db01ea89fbf444e0981bcd5fcd9243591e86f6b
|
2026-01-21T00:00:00-05:00
|
Fit Matters: Format-Distance Alignment Improves Conversational Search
|
arXiv:2601.13778v1 Announce Type: new Abstract: Existing conversational search systems can synthesize information into responses, but they lack principled ways to adapt response formats to users' cognitive states. This paper investigates whether aligning format and distance, which involves matching information granularity and media to users' psychological distance, improves user experience. In a between-subjects experiment (N=464) on travel planning, we crossed two distance dimensions (temporal/spatial x near/far) with four formats varying in granularity (abstract/concrete) and media (text/image-and-text). The experiment established that format--distance alignment reduced users' risk perceptions while increasing decision confidence, perceptions of information usefulness, ease of use, enjoyment, and credibility, and adoption intentions. Concrete formats imposed higher cognitive load, but yielded productive effort when matched to near-distance tasks. Images enhanced concrete but not abstract text, suggesting multimedia benefits depend on complementarity. These findings establish format--distance alignment as a distinctive and important design dimension, enabling systems to tailor response formats to users' psychological distance.
|
https://arxiv.org/abs/2601.13778
|
Academic Papers
|
svg
|
24b8ab97f4d81ebda2916a476edb0623ed38fbf4dda7146fff37dc3d6fb1561f
|
2026-01-21T00:00:00-05:00
|
Principled Latent Diffusion for Graphs via Laplacian Autoencoders
|
arXiv:2601.13780v1 Announce Type: new Abstract: Graph diffusion models achieve state-of-the-art performance in graph generation but suffer from quadratic complexity in the number of nodes -- and much of their capacity is wasted modeling the absence of edges in sparse graphs. Inspired by latent diffusion in other modalities, a natural idea is to compress graphs into a low-dimensional latent space and perform diffusion there. However, unlike images or text, graph generation requires nearly lossless reconstruction, as even a single error in decoding an adjacency matrix can render the entire sample invalid. This challenge has remained largely unaddressed. We propose LG-Flow, a latent graph diffusion framework that directly overcomes these obstacles. A permutation-equivariant autoencoder maps each node into a fixed-dimensional embedding from which the full adjacency is provably recoverable, enabling near-lossless reconstruction for both undirected graphs and DAGs. The dimensionality of this latent representation scales linearly with the number of nodes, eliminating the quadratic bottleneck and making it feasible to train larger and more expressive models. In this latent space, we train a Diffusion Transformer with flow matching, enabling efficient and expressive graph generation. Our approach achieves competitive results against state-of-the-art graph diffusion models, while achieving up to $1000\times$ speed-up.
|
https://arxiv.org/abs/2601.13780
|
Academic Papers
|
svg
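The key requirement in the abstract above, node embeddings from which the full adjacency is recoverable, can be illustrated with a minimal numpy sketch. The spectral construction below only demonstrates exact recoverability; it is not LG-Flow's permutation-equivariant autoencoder, and unlike the paper its per-node dimension is not a fixed constant.

```python
import numpy as np

def encode_nodes(adj):
    """Map each node to a latent vector from which the adjacency is exactly
    recoverable. Uses the eigendecomposition A = U diag(lam) U^T, stored as
    Z = U * sqrt(|lam|) plus the eigenvalue signs. Illustrative only: not
    the paper's autoencoder, and the embedding width is not constant."""
    lam, U = np.linalg.eigh(adj.astype(float))
    Z = U * np.sqrt(np.abs(lam))       # one embedding row per node
    return Z, np.sign(lam)

def decode_adjacency(Z, signs):
    # A_hat = Z diag(signs) Z^T, rounded back to a 0/1 adjacency matrix
    return np.rint((Z * signs) @ Z.T).astype(int)
```

Because decoding is a dense matrix product over the node embeddings, the latent representation scales with the number of nodes rather than with the full adjacency, which is the bottleneck the paper removes.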
|
86edc7ac346465ebbac7d511c834400a56a98751a0821fa7f7c0228b359ed21f
|
2026-01-21T00:00:00-05:00
|
Area-universality in Outerplanar Graphs
|
arXiv:2601.13781v1 Announce Type: new Abstract: A rectangular floorplan is a partition of a rectangle into smaller rectangles such that no four rectangles meet at a single point. Rectangular floorplans arise naturally in a variety of applications, including VLSI design, architectural layout, and cartography, where efficient and flexible spatial subdivisions are required. A central concept in this domain is that of area-universality: a floorplan (or more generally, a rectangular layout) is area-universal if, for any assignment of target areas to its constituent rectangles, there exists a combinatorially equivalent layout that realizes these areas. In this paper, we investigate the structural conditions under which an outerplanar graph admits an area-universal rectangular layout. We establish a necessary and sufficient condition for area-universality in this setting, thereby providing a complete characterization of admissible outerplanar graphs. Furthermore, we present an algorithmic construction that guarantees that the resulting layout is always area-universal.
|
https://arxiv.org/abs/2601.13781
|
Academic Papers
|
svg
|
c6866dec662e7c46918df0fc7d17be785d0968abd0968e0e5c8cf210fcbe369e
|
2026-01-21T00:00:00-05:00
|
Demystifying Starlink Network Performance under Vehicular Mobility with Dynamic Beam Switching
|
arXiv:2601.13790v1 Announce Type: new Abstract: In the last few years, considerable research efforts have focused on measuring and improving Starlink network performance, especially for user terminals (UTs) in stationary scenarios. However, the performance of Starlink networks in mobility settings, particularly with frequent changes in the UT's orientation, and the impact of environmental factors, such as transient obstructions, has not been thoroughly studied, leaving gaps in understanding the causes of performance degradation. Recently, researchers have started identifying the communicating satellites to evaluate satellite selection strategies and the impact on network performance. However, existing Starlink satellite identification methods only work in stationary, obstruction-free scenarios, as they do not account for UT mobility or obstructions, nor do they detect dynamic beam switching events. In this paper, we reveal that the UT can perform multiple dynamic beam switching attempts to connect to different satellites when the UT-satellite link is degraded. This degradation can occur either due to the loss of line-of-sight (LoS) from changes in the FOV or obstructions, or due to poor signal quality, extending UT-satellite handovers beyond the well-known 15-second regular handover interval. We propose a mobility-aware Starlink satellite identification method that detects dynamic beam switching events and plausibly explains network performance using the UT's diagnostic data and connected satellite information. Our findings demystify mobile Starlink network performance degradation, which is crucial for enhancing the end-to-end performance of transport-layer protocols in diverse application scenarios.
|
https://arxiv.org/abs/2601.13790
|
Academic Papers
|
svg
|
38987336d0b75291a06f738d181df69da60800b798f39cb099539a6dd7c39067
|
2026-01-21T00:00:00-05:00
|
PAtt: A Pattern Attention Network for ETA Prediction Using Historical Speed Profiles
|
arXiv:2601.13793v1 Announce Type: new Abstract: In this paper, we propose an ETA (Estimated Time of Arrival) prediction model that leverages an attention mechanism over historical road speed patterns. As autonomous driving and intelligent transportation systems become increasingly prevalent, the need for accurate and reliable ETA estimation has grown, playing a vital role in navigation, mobility planning, and traffic management. However, predicting ETA remains a challenging task due to the dynamic and complex nature of traffic flow. Traditional methods often combine real-time and historical traffic data in simplistic ways, or rely on complex rule-based computations. While recent deep learning models have shown potential, they often require high computational costs and do not effectively capture the spatio-temporal patterns crucial for ETA prediction. ETA prediction inherently involves spatio-temporal causality, and our proposed model addresses this by leveraging attention mechanisms to extract and utilize temporal features accumulated at each spatio-temporal point along a route. This architecture enables efficient and accurate ETA estimation while keeping the model lightweight and scalable. We validate our approach using real-world driving datasets and demonstrate that it outperforms existing baselines by effectively integrating road characteristics, real-time traffic conditions, and historical speed patterns in a task-aware manner.
|
https://arxiv.org/abs/2601.13793
|
Academic Papers
|
svg
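The core mechanism the abstract describes, attending over historical speed records at each point along a route and summing per-segment travel times, can be sketched as follows. All feature vectors and the scale parameter are hypothetical stand-ins for the paper's learned representations; this shows the attention idea, not PAtt itself.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def eta_for_route(seg_lengths_km, queries, hist_feats, hist_speeds, scale=1.0):
    """Per segment: attend over historical speed records whose context
    features resemble current conditions (query), take the attention-weighted
    expected speed, and accumulate travel time along the route."""
    total_h = 0.0
    for length, q, K, v in zip(seg_lengths_km, queries, hist_feats, hist_speeds):
        w = softmax(K @ q / scale)   # attention weights over historical records
        speed = float(w @ v)         # attended expected speed (km/h)
        total_h += length / speed
    return total_h * 60.0            # ETA in minutes
```

With uninformative queries the weights degenerate to a plain historical average, which is roughly what the "simplistic" baselines in the abstract compute.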
|
4bb8243dab8515099981ffc5a9aad374df1fa934c37754470a7d1e71bb7c0914
|
2026-01-21T00:00:00-05:00
|
A Distributed Spatial Data Warehouse for AIS Data (DIPAAL)
|
arXiv:2601.13795v1 Announce Type: new Abstract: AIS data from ships is excellent for analyzing single-ship movements and monitoring all ships within a specific area. However, the AIS data needs to be cleaned, processed, and stored before being usable. This paper presents a system consisting of an efficient and modular ETL process for loading AIS data, as well as a distributed spatial data warehouse storing the trajectories of ships. To efficiently analyze a large set of ships, a raster approach to querying the AIS data is proposed. A spatially partitioned data warehouse with a granularized cell representation and heatmap presentation is designed, developed, and evaluated. Currently, the data warehouse stores ~312 million kilometers of ship trajectories, with more than 8 billion rows in the largest table. It is found that searching the cell representation is faster than searching the trajectory representation. Further, we show that the spatially divided shards enable a consistently good scale-up for both cell and heatmap analytics in large areas, ranging from 354% to 1164% with a 5x increase in workers.
|
https://arxiv.org/abs/2601.13795
|
Academic Papers
|
svg
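The raster approach in the abstract above amounts to snapping positions to grid cells and aggregating per cell, so large-area queries touch cell counts instead of raw trajectory points. A minimal sketch, where the 0.1-degree cell size is an illustrative choice rather than the paper's configuration:

```python
from collections import Counter

def cell_id(lon, lat, cell_deg=0.1):
    # Snap a position to its grid cell (the granularized cell representation)
    return (int(lon // cell_deg), int(lat // cell_deg))

def build_heatmap(positions, cell_deg=0.1):
    """Aggregate raw AIS positions into per-cell counts. Querying such a
    raster is what makes large-area heatmap analytics cheap compared with
    scanning full trajectory tables."""
    return Counter(cell_id(lon, lat, cell_deg) for lon, lat in positions)
```

A bounding-box query then reduces to selecting the cell keys inside the box, which is what makes the cell representation faster to search than the trajectory representation.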
|
b605c9faeb344fa944303d81a3a7ed8c502e022f6b8c2407a18ecbf733501426
|
2026-01-21T00:00:00-05:00
|
Zero-free regions and concentration inequalities for hypergraph colorings in the local lemma regime
|
arXiv:2601.13796v1 Announce Type: new Abstract: We show that for $q$-colorings in $k$-uniform hypergraphs with maximum degree $\Delta$, if $k\ge 50$ and $q\ge 700\Delta^{\frac{5}{k-10}}$, there is a "Lee-Yang" zero-free strip around the interval $[0,1]$ of the partition function, which includes the special case of uniform enumeration of hypergraph colorings. As an immediate consequence, we obtain Berry-Esseen type inequalities for hypergraph $q$-colorings under such conditions, demonstrating the asymptotic normality for the size of any color class in a uniformly random coloring. Our framework also extends to the study of "Fisher zeros", leading to deterministic algorithms for approximating the partition function in the zero-free region. Our approach is based on extending the recent work of [Liu, Wang, Yin, Yu, STOC 2025] to general constraint satisfaction problems (CSP). We focus on partition functions defined for CSPs by introducing external fields to the variables. A key component in our approach is a projection-lifting scheme, which enables us to essentially lift information percolation type analysis for Markov chains from the real line to the complex plane. Last but not least, we also show a Chebyshev-type inequality under the sampling LLL condition for atomic CSPs.
|
https://arxiv.org/abs/2601.13796
|
Academic Papers
|
svg
|
eb2ac31f383275cf580c6f81cd1c482e40ee5c05c90f43299293034fe45ac319
|
2026-01-21T00:00:00-05:00
|
PREGEN: Uncovering Latent Thoughts in Composed Video Retrieval
|
arXiv:2601.13797v1 Announce Type: new Abstract: Composed Video Retrieval (CoVR) aims to retrieve a video based on a query video and a modifying text. Current CoVR methods fail to fully exploit modern Vision-Language Models (VLMs), either using outdated architectures or requiring computationally expensive fine-tuning and slow caption generation. We introduce PREGEN (PRE GENeration extraction), an efficient and powerful CoVR framework that overcomes these limitations. Our approach uniquely pairs a frozen, pre-trained VLM with a lightweight encoding model, eliminating the need for any VLM fine-tuning. We feed the query video and modifying text into the VLM and extract the hidden state of the final token from each layer. A simple encoder is then trained on these pooled representations, creating a semantically rich and compact embedding for retrieval. PREGEN significantly advances the state of the art, surpassing all prior methods on standard CoVR benchmarks with substantial gains in Recall@1 of +27.23 and +69.59. Our method demonstrates robustness across different VLM backbones and exhibits strong zero-shot generalization to more complex textual modifications, highlighting its effectiveness and semantic capabilities.
|
https://arxiv.org/abs/2601.13797
|
Academic Papers
|
svg
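The extraction step the abstract describes, taking the final token's hidden state from each layer of a frozen VLM and training a light encoder on the pooled result, can be sketched as below. Shapes and the single linear map are illustrative assumptions, not PREGEN's actual encoder.

```python
import numpy as np

def extract_layerwise_features(hidden_states):
    """hidden_states: (num_layers, seq_len, dim) array, e.g. a frozen VLM's
    per-layer states. Keep each layer's final-token state, as PREGEN does."""
    return hidden_states[:, -1, :]                    # (num_layers, dim)

def pooled_embedding(hidden_states, proj):
    """Pool the layerwise last-token states and pass them through a
    lightweight (here: single linear map) encoder into a compact,
    unit-norm embedding suitable for cosine-similarity retrieval."""
    feats = extract_layerwise_features(hidden_states).mean(axis=0)
    emb = proj @ feats
    return emb / np.linalg.norm(emb)
```

Because only `proj` is trained, the expensive VLM stays frozen, which is the efficiency argument the abstract makes against fine-tuning-based CoVR methods.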
|
2390c4b39bcb0d80c33e9d70258c6fe687272ed417f0afec4d33140ac0d3ac6c
|
2026-01-21T00:00:00-05:00
|
Insight: Interpretable Semantic Hierarchies in Vision-Language Encoders
|
arXiv:2601.13798v1 Announce Type: new Abstract: Language-aligned vision foundation models perform strongly across diverse downstream tasks. Yet, their learned representations remain opaque, making interpreting their decision-making hard. Recent works decompose these representations into human-interpretable concepts, but provide poor spatial grounding and are limited to image classification tasks. In this work, we propose Insight, a language-aligned concept foundation model that provides fine-grained concepts, which are human-interpretable and spatially grounded in the input image. We leverage a hierarchical sparse autoencoder and a foundation model with strong semantic representations to automatically extract concepts at various granularities. Examining local co-occurrence dependencies of concepts allows us to define concept relationships. Through these relations we further improve concept naming and obtain richer explanations. On benchmark data, we show that Insight provides performance on classification and segmentation that is competitive with opaque foundation models while providing fine-grained, high quality concept-based explanations. Code is available at https://github.com/kawi19/Insight.
|
https://arxiv.org/abs/2601.13798
|
Academic Papers
|
svg
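The concept-extraction building block named in the abstract, a sparse autoencoder over foundation-model features, can be sketched with a top-k encoder step. The concept dictionary and the top-k sparsifier are illustrative assumptions; Insight's hierarchical SAE stacks such encoders at several granularities.

```python
import numpy as np

def sae_encode(x, W_enc, b, k):
    """One top-k sparse-autoencoder encoding step: ReLU concept activations
    with all but the k strongest zeroed out, so each input is explained by
    a small set of human-interpretable concepts."""
    a = np.maximum(W_enc @ x + b, 0.0)
    if k < a.size:
        a[np.argsort(a)[:-k]] = 0.0    # keep only the k largest activations
    return a
```

Applying this per spatial patch rather than per image is what gives the spatial grounding the abstract emphasizes.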
|
b6ee7623912a9e972ffeb5ff540d62892d3f4cd5e58d4ca398a762851f7f2bf7
|
2026-01-21T00:00:00-05:00
|
Linear viscoelastic rheological FrBD models
|
arXiv:2601.13799v1 Announce Type: new Abstract: In [1], a new modeling paradigm for developing rate-and-state-dependent, control-oriented friction models was introduced. The framework, termed Friction with Bristle Dynamics (FrBD), combines nonlinear analytical expressions for the friction coefficient with constitutive equations for bristle-like elements. Within the FrBD framework, this letter introduces two novel formulations based on the two most general linear viscoelastic models for solids: the Generalized Maxwell (GM) and Generalized Kelvin-Voigt (GKV) elements. Both are analyzed in terms of boundedness and passivity, revealing that these properties are satisfied for any physically meaningful parametrization. An application of passivity for control design is also illustrated, considering an example from robotics. The findings of this letter systematically integrate rate-and-state dynamic friction models with linear viscoelasticity.
|
https://arxiv.org/abs/2601.13799
|
Academic Papers
|
svg
|
c24dc96e62548bbed80196c6fe6f72eb9ce4c868a032fcf2d31f8853e3023c52
|
2026-01-21T00:00:00-05:00
|
A Hybridizable Discontinuous Galerkin Method for the non--local Camassa--Holm--Kadomtsev--Petviashvili equation
|
arXiv:2601.13800v1 Announce Type: new Abstract: This paper develops a hybridizable discontinuous Galerkin method for the two-dimensional Camassa--Holm--Kadomtsev--Petviashvili equation. The method employs Cartesian meshes with tensor-product polynomial spaces, enabling separate treatment of \(x\) and \(y\) derivatives. The non-local operator \(\partial_{x}^{-1}u_{y}\) is localized through an auxiliary variable \(v\) satisfying \(v_x = u_y\), allowing efficient element-by-element computations. We prove energy stability of the semi-discrete scheme and derive \(\mathcal{O}(h^{k+1/2})\) convergence in space. Numerical experiments validate the theoretical results and demonstrate the method's capability to accurately resolve smooth solutions and peaked solitary waves (peakons).
|
https://arxiv.org/abs/2601.13800
|
Academic Papers
|
svg
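The localization of the non-local operator described in the abstract can be written out explicitly: the anti-derivative term is traded for one extra unknown and one local constraint, so every operator in the resulting system acts element-locally.

```latex
% Non-local term in the Camassa--Holm--Kadomtsev--Petviashvili equation,
% localized via the auxiliary variable v as in the abstract:
\[
  v \;=\; \partial_{x}^{-1} u_{y}
  \quad\Longleftrightarrow\quad
  v_{x} \;=\; u_{y},
\]
% so the HDG scheme can assemble and solve element by element.
```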
|
16b5640029b095878aa8ae26811c6f6882498b519d9d828f5f573eda22a123b8
|
2026-01-21T00:00:00-05:00
|
HoverAI: An Embodied Aerial Agent for Natural Human-Drone Interaction
|
arXiv:2601.13801v1 Announce Type: new Abstract: Drones operating in human-occupied spaces suffer from insufficient communication mechanisms that create uncertainty about their intentions. We present HoverAI, an embodied aerial agent that integrates drone mobility, infrastructure-independent visual projection, and real-time conversational AI into a unified platform. Equipped with a MEMS laser projector, onboard semi-rigid screen, and RGB camera, HoverAI perceives users through vision and voice, responding via lip-synced avatars that adapt appearance to user demographics. The system employs a multimodal pipeline combining VAD, ASR (Whisper), LLM-based intent classification, RAG for dialogue, face analysis for personalization, and voice synthesis (XTTS v2). Evaluation demonstrates high accuracy in command recognition (F1: 0.90), demographic estimation (gender F1: 0.89, age MAE: 5.14 years), and speech transcription (WER: 0.181). By uniting aerial robotics with adaptive conversational AI and self-contained visual output, HoverAI introduces a new class of spatially-aware, socially responsive embodied agents for applications in guidance, assistance, and human-centered interaction.
|
https://arxiv.org/abs/2601.13801
|
Academic Papers
|
svg
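The multimodal pipeline listed in the abstract (VAD, ASR, intent classification, dialogue, synthesis) is naturally a chain of stages where any stage may abort. A minimal sketch of that orchestration, with toy stubs standing in for Whisper, the LLM classifier, and XTTS:

```python
def run_pipeline(frame, stages):
    """Chain HoverAI-style stages; each stage is a callable, and returning
    None aborts the chain (e.g. the VAD stage detecting silence)."""
    data = frame
    for stage in stages:
        data = stage(data)
        if data is None:
            return None
    return data

# Toy stages standing in for the real VAD / ASR / intent components
stages = [
    lambda audio: audio if audio else None,   # "VAD": drop empty input
    lambda text: text.strip().lower(),        # "ASR" normalization stub
    lambda text: ("navigate", text) if "where" in text else ("chat", text),
]
```

Keeping each stage behind a uniform callable interface is one plausible way such a system swaps models (e.g. a different ASR backend) without touching the rest of the chain.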
|
a69ad870e818df5eeae81c11b86c9bbd75940abe4050cb463c5c5bbcbda0078f
|
2026-01-21T00:00:00-05:00
|
Habibi: Laying the Open-Source Foundation of Unified-Dialectal Arabic Speech Synthesis
|
arXiv:2601.13802v1 Announce Type: new Abstract: A notable gap persists in speech synthesis research and development for Arabic dialects, particularly from a unified modeling perspective. Despite its high practical value, the inherent linguistic complexity of Arabic dialects, further compounded by a lack of standardized data, benchmarks, and evaluation guidelines, steers researchers toward safer ground. To bridge this divide, we present Habibi, a suite of specialized and unified text-to-speech models that harnesses existing open-source ASR corpora to support a wide range of high- to low-resource Arabic dialects through linguistically-informed curriculum learning. Our approach outperforms the leading commercial service in generation quality, while maintaining extensibility through effective in-context learning, without requiring text diacritization. We are committed to open-sourcing the model, along with creating the first systematic benchmark for multi-dialect Arabic speech synthesis. Furthermore, by identifying the key challenges in and establishing evaluation standards for the process, we aim to provide a solid groundwork for subsequent research. Resources at https://SWivid.github.io/Habibi/ .
|
https://arxiv.org/abs/2601.13802
|
Academic Papers
|
svg
|
a3019cace0464ed34a02ca52d333473e57e9690bcf6e2d022df2bd07ba014ec9
|
2026-01-21T00:00:00-05:00
|
The Non-Predictability of Mispredicted Branches using Timing Information
|
arXiv:2601.13804v1 Announce Type: new Abstract: Branch misprediction latency is one of the most important contributors to performance degradation and wasted energy consumption in a modern core. State-of-the-art predictors generally perform very well but occasionally suffer from high Misprediction Per Kilo Instruction due to hard-to-predict branches. In this work, we investigate if predicting branches using microarchitectural information, in addition to traditional branch history, can improve prediction accuracy. Our approach considers branch timing information (resolution cycle) both for older branches in the Reorder Buffer (ROB) and recently committed, and for younger branches relative to the branch we re-predict. We propose Speculative Branch Resolution (SBR) in which, N cycles after a branch allocates in the ROB, various timing information is collected and used to re-predict. Using the gem5 simulator we implement and perform a limit-study of SBR using a TAGE-like predictor. Our experiments show that the post-alloc timing information we used was not able to yield performance gains over an unbounded TAGE-SC. However, we find two hard-to-predict branches where timing information did provide an advantage and thoroughly analysed one of them to understand why. This finding suggests that predictors may benefit from specific microarchitectural information to increase accuracy on specific hard-to-predict branches and that overriding predictions in the backend may yet yield performance benefits, but that further research is needed to determine such information vectors.
|
https://arxiv.org/abs/2601.13804
|
Academic Papers
|
svg
|
cc8e9941776161b1e944ff29d7690c25b83de30ed8d389b71aa9ed27e42ab0ec
|
2026-01-21T00:00:00-05:00
|
Knowledge Graph-Assisted LLM Post-Training for Enhanced Legal Reasoning
|
arXiv:2601.13806v1 Announce Type: new Abstract: LLM post-training has primarily relied on large text corpora and human feedback, without capturing the structure of domain knowledge. This has caused models to struggle with complex reasoning tasks, especially in high-stakes professional domains. In law, reasoning requires a deep understanding of the relations between various legal concepts, a key component missing in current LLM post-training. In this paper, we propose a knowledge graph (KG)-assisted approach for enhancing LLMs' reasoning capability in the legal domain that is generalizable to other high-stakes domains. We model key legal concepts by following the \textbf{IRAC} (Issue, Rule, Analysis and Conclusion) framework, and construct a KG with 12K legal cases. We then produce training data using our IRAC KG, and conduct both Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) with three state-of-the-art (SOTA) LLMs (30B, 49B and 70B), varying architecture and base model family. Our post-trained models obtained better average performance on 4/5 diverse legal benchmarks (14 tasks) than baselines. In particular, our 70B DPO model achieved the best score on 4/6 reasoning tasks, among baselines and a 141B SOTA legal LLM, demonstrating the effectiveness of our KG for enhancing LLMs' legal reasoning capability.
|
https://arxiv.org/abs/2601.13806
|
Academic Papers
|
svg
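The IRAC-structured KG construction described in the abstract can be sketched as flattening each case's annotation into graph edges. The relation names below (HAS_ISSUE, GOVERNED_BY, APPLIED_IN) are hypothetical illustrations, not the paper's actual schema.

```python
def irac_triples(case_id, issue, rules, analysis_id, conclusion):
    """Flatten one case's IRAC annotation (Issue, Rule, Analysis,
    Conclusion) into KG edges. Relation names are illustrative only."""
    triples = [
        (case_id, "HAS_ISSUE", issue),
        (case_id, "HAS_CONCLUSION", conclusion),
    ]
    triples += [(issue, "GOVERNED_BY", r) for r in rules]
    triples += [(r, "APPLIED_IN", analysis_id) for r in rules]
    return triples
```

Sampling paths through such a graph (issue to rule to analysis to conclusion) is one plausible way to turn the KG into the relation-aware SFT/DPO training data the abstract mentions.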
|
1a8cc0431020d95fc16c045f3c11767b14e58e63f732f8f7af2ad39c2c904a8a
|
2026-01-21T00:00:00-05:00
|
DroneVLA: VLA based Aerial Manipulation
|
arXiv:2601.13809v1 Announce Type: new Abstract: As aerial platforms evolve from passive observers to active manipulators, the challenge shifts toward designing intuitive interfaces that allow non-expert users to command these systems naturally. This work introduces a novel concept of an autonomous aerial manipulation system capable of interpreting high-level natural language commands to retrieve objects and deliver them to a human user. The system integrates MediaPipe, Grounding DINO, and a Vision-Language-Action (VLA) model with a custom-built drone equipped with a 1-DOF gripper and an Intel RealSense RGB-D camera. The VLA performs semantic reasoning to interpret the intent of a user prompt and generates a prioritized task queue for grasping relevant objects in the scene. Grounding DINO and a dynamic A* planning algorithm are used to navigate and safely relocate the object. To ensure safe and natural interaction during the handover phase, the system employs a human-centric controller driven by MediaPipe. This module provides real-time human pose estimation, allowing the drone to employ visual servoing to maintain a stable, distinct position directly in front of the user, facilitating a comfortable handover. We demonstrate the system's efficacy through real-world experiments for localization and navigation, which resulted in maximum, mean Euclidean, and root-mean-square errors of 0.164 m, 0.070 m, and 0.084 m, respectively, highlighting the feasibility of VLA models for aerial manipulation operations.
|
https://arxiv.org/abs/2601.13809
|
Academic Papers
|
svg
|
4a4c56013b3f1532515cab763898b23377087d90594359c968bff87f713d800f
|
2026-01-21T00:00:00-05:00
|
Integrated Sensing and Communication for Low-Altitude Security
|
arXiv:2601.13810v1 Announce Type: new Abstract: The dense concentration of low-altitude, slow-speed, and small-size targets in the complex low-altitude environment poses significant security challenges, including failures in continuous wide-area sensing and ambiguous target intent, which existing regulatory frameworks struggle to address. Integrated sensing and communication (ISAC), a hallmark of next-generation mobile communication, offers a transformative approach to low-altitude security governance. By leveraging existing cellular infrastructure and spectrum resources, ISAC enables the construction of a seamless wide-area sensing network, supports intelligent feature extraction and intent inference, facilitates real-time collaborative decision-making, and establishes a dynamic trust authentication framework. This article systematically reviews the technical system, analyzes the security challenges, forecasts the enabling value of ISAC, and discusses the resulting open problems and challenges, thereby laying a foundation for future research and industrial implementation.
|
https://arxiv.org/abs/2601.13810
|
Academic Papers
|
svg
|
d0e868bf7ca355a49c962d630b09e8e5b0a6a5ed7e63f68b75162988e6739886
|
2026-01-21T00:00:00-05:00
|
GuideTouch: An Obstacle Avoidance Device for Visually Impaired
|
arXiv:2601.13813v1 Announce Type: new Abstract: Safe navigation for visually impaired individuals remains a critical challenge, especially concerning head-level obstacles, which traditional mobility aids often fail to detect. We introduce GuideTouch, a compact, affordable, standalone wearable device designed for autonomous obstacle avoidance. The system integrates two vertically aligned Time-of-Flight (ToF) sensors, enabling three-dimensional environmental perception, and four vibrotactile actuators that provide directional haptic feedback. Proximity and direction information is communicated via an intuitive 4-point vibrotactile feedback system located across the user's shoulders and upper chest. For real-world robustness, the device includes a unique centrifugal self-cleaning optical cover mechanism and a sound alarm for locating the device if it is dropped. We evaluated haptic perception accuracy across 22 participants (17 male and 5 female, aged 21-48, mean 25.7, sd 6.1). Statistical analysis confirmed a significant difference in perception accuracy across patterns. The system demonstrated high recognition accuracy, achieving an average of 92.9% for single- and double-motor (primary directional) patterns. Furthermore, preliminary experiments with 14 visually impaired users validated this interface, showing a recognition accuracy of 93.75% for primary directional cues. The results demonstrate that GuideTouch enables intuitive spatial perception and could significantly improve the safety, confidence, and autonomy of users with visual impairments during independent navigation.
|
https://arxiv.org/abs/2601.13813
|
Academic Papers
|
svg
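The mapping from a detected obstacle to the four-motor haptic cue described in the abstract can be sketched as follows. The motor layout, angular thresholds, and 3 m range are illustrative assumptions, not the device's actual firmware values.

```python
def haptic_pattern(bearing_deg, head_level, distance_m, max_range_m=3.0):
    """Map an obstacle's bearing and height (head-level vs. lower, from the
    two vertically aligned ToF sensors) to the four shoulder/chest motors,
    with vibration intensity scaled by proximity."""
    motors = {"left_shoulder": 0.0, "right_shoulder": 0.0,
              "left_chest": 0.0, "right_chest": 0.0}
    if distance_m >= max_range_m:
        return motors                      # nothing in range: stay silent
    intensity = round(1.0 - distance_m / max_range_m, 3)
    row = "shoulder" if head_level else "chest"
    if bearing_deg < -15:                  # obstacle to the left
        motors[f"left_{row}"] = intensity
    elif bearing_deg > 15:                 # obstacle to the right
        motors[f"right_{row}"] = intensity
    else:                                  # dead ahead: double-motor cue
        motors[f"left_{row}"] = motors[f"right_{row}"] = intensity
    return motors
```

The single- and double-motor cases above correspond to the "primary directional" patterns whose recognition accuracy the abstract reports.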
|
618daef204ea5495509a81124fb2c66ef2c85d92a800a263f8fa607511867309
|
2026-01-21T00:00:00-05:00
|
From RTL to Prompt Coding: Empowering the Next Generation of Chip Designers through LLMs
|
arXiv:2601.13815v1 Announce Type: new Abstract: This paper presents an LLM-based learning platform for chip design education, aiming to make chip design accessible to beginners without overwhelming them with technical complexity. It represents the first educational platform that assists learners holistically across both frontend and backend design. The proposed approach integrates an LLM-based chat agent into a browser-based workflow built upon the Tiny Tapeout ecosystem. The workflow guides users from an initial design idea through RTL code generation to a tapeout-ready chip. To evaluate the concept, a case study was conducted with 18 high-school students. Within a 90-minute session they developed eight functional VGA chip designs in a 130 nm technology. Despite having no prior experience in chip design, all groups successfully implemented tapeout-ready projects. The results demonstrate the feasibility and educational impact of LLM-assisted chip design, highlighting its potential to attract and inspire early learners and significantly broaden the target audience for the field.
|
https://arxiv.org/abs/2601.13815
|
Academic Papers
|
svg
|
fa49ac52d55ec7e647b58a3b7f91c2dc807ef334760afa4dfc63ff9aef0e7b2c
|
2026-01-21T00:00:00-05:00
|
Discriminant Learning-based Colorspace for Blade Segmentation
|
arXiv:2601.13816v1 Announce Type: new Abstract: Suboptimal color representation often hinders accurate image segmentation, yet many modern algorithms neglect this critical preprocessing step. This work presents a novel multidimensional nonlinear discriminant analysis algorithm, Colorspace Discriminant Analysis (CSDA), for improved segmentation. Extending Linear Discriminant Analysis into a deep learning context, CSDA customizes color representation by maximizing multidimensional signed inter-class separability while minimizing intra-class variability through a generalized discriminative loss. To ensure stable training, we introduce three alternative losses that enable end-to-end optimization of both the discriminative colorspace and segmentation process. Experiments on wind turbine blade data demonstrate significant accuracy gains, emphasizing the importance of tailored preprocessing in domain-specific segmentation.
|
https://arxiv.org/abs/2601.13816
|
Academic Papers
|
svg
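The classical starting point that the abstract generalizes, maximizing inter-class separability while minimizing intra-class variability of a color projection, is the Fisher criterion. A minimal 1-D linear sketch over RGB; CSDA's multidimensional, signed, nonlinear loss builds on this but is not reproduced here.

```python
import numpy as np

def fisher_loss(w, X, y):
    """Negative Fisher ratio for a linear 1-D color projection w over RGB
    pixels X with class labels y: between-class scatter of projected means
    divided by pooled within-class scatter. Minimizing this loss maximizes
    class separability in the projected colorspace."""
    z = X @ w
    classes = np.unique(y)
    means = np.array([z[y == c].mean() for c in classes])
    within = sum(((z[y == c] - m) ** 2).sum() for c, m in zip(classes, means))
    between = sum((y == c).sum() * (m - z.mean()) ** 2
                  for c, m in zip(classes, means))
    return -between / (within + 1e-9)   # eps guards degenerate projections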
|
d53e6bececec002cad6fcfd4672463015f81c4722585a0e4f4d216c3a80ae363
|
2026-01-21T00:00:00-05:00
|
Device Association and Resource Allocation for Hierarchical Split Federated Learning in Space-Air-Ground Integrated Network
|
arXiv:2601.13817v1 Announce Type: new Abstract: 6G facilitates the deployment of Federated Learning (FL) in the Space-Air-Ground Integrated Network (SAGIN), yet FL confronts challenges such as resource constraints and unbalanced data distributions. To address these issues, this paper proposes a Hierarchical Split Federated Learning (HSFL) framework and derives an upper bound on its loss function. To minimize the weighted sum of training loss and latency, we formulate a joint optimization problem that integrates device association, model split layer selection, and resource allocation. We decompose the original problem into several subproblems, and propose an iterative optimization algorithm for device association and resource allocation based on brute-force split point search. Simulation results demonstrate that the proposed algorithm can effectively balance training efficiency and model accuracy for FL in SAGIN.
|
https://arxiv.org/abs/2601.13817
|
Academic Papers
|
svg
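The inner brute-force split-point search from the abstract's decomposition can be sketched in a few lines. The cost callables are stand-ins for the paper's derived loss bound and latency model, and the weighting is illustrative.

```python
def best_split(split_points, loss_of, latency_of, alpha=0.5):
    """Brute-force split-layer selection: evaluate every candidate split
    point and pick the one minimizing the weighted sum of estimated
    training loss and latency."""
    return min(split_points,
               key=lambda s: alpha * loss_of(s) + (1 - alpha) * latency_of(s))
```

Since the number of candidate split layers is small (bounded by model depth), exhaustive search here is cheap, which is presumably why the paper can afford it inside an iterative outer loop.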
|
fb2a46aed0160ecdc4d20a8d68808a4b4e06d2b712712c98ea592ca60bfa6ed1
|
2026-01-21T00:00:00-05:00
|
Efficient Parallel $(\Delta+1)$-Edge-Coloring
|
arXiv:2601.13822v1 Announce Type: new Abstract: We study the $(\Delta+1)$-edge-coloring problem in the parallel $\left(\mathrm{PRAM}\right)$ model of computation. The celebrated Vizing's theorem [Viz64] states that every simple graph $G = (V,E)$ can be properly $(\Delta+1)$-edge-colored. In a seminal paper, Karloff and Shmoys [KS87] devised a parallel algorithm with time $O\left(\Delta^5\cdot\log n\cdot\left(\log^3 n+\Delta^2\right)\right)$ and $O(m\cdot\Delta)$ processors. This result was improved by Liang et al. [LSH96] to time $O\left(\Delta^{4.5}\cdot \log^3\Delta\cdot \log n + \Delta^4 \cdot\log^4 n\right)$ and $O\left(n\cdot\Delta^{3} +n^2\right)$ processors. [LSH96] claimed $O\left(\Delta^{3.5} \cdot\log^3\Delta\cdot \log n + \Delta^3\cdot \log^4 n\right)$ time, but we point out a flaw in their analysis, which once corrected, results in the above bound. We devise a faster parallel algorithm for this fundamental problem. Specifically, our algorithm uses $O\left(\Delta^4\cdot \log^4 n\right)$ time and $O(m\cdot \Delta)$ processors. Another variant of our algorithm requires $O\left(\Delta^{4+o(1)}\cdot\log^2 n\right)$ time, and $O\left(m\cdot\Delta\cdot\log n\cdot\log^{\delta}\Delta\right)$ processors, for an arbitrarily small $\delta>0$. We also devise a few other tradeoffs between the time and the number of processors, and devise an improved algorithm for graphs with small arboricity. On the way to these results, we also provide a very fast parallel algorithm for updating $(\Delta+1)$-edge-coloring. Our algorithm for this problem is dramatically faster and simpler than the previous state-of-the-art algorithm (due to [LSH96]) for this problem.
|
https://arxiv.org/abs/2601.13822
|
Academic Papers
|
svg
|
59bc71415b11e151ee08acfadcbc849203eb346894f275a33b59eada95294550
|
2026-01-21T00:00:00-05:00
|
Multi-Trace M\"uller Boundary Integral Equation for Electromagnetic Scattering by Composite Objects
|
arXiv:2601.13823v1 Announce Type: new Abstract: This paper introduces a boundary integral equation for time-harmonic electromagnetic scattering by composite dielectric objects. The formulation extends the classical M\"uller equation to composite structures through the global multi-trace method. The key ingredient enabling this extension is the use of the Stratton-Chu representation in the complementary region, also known as the extinction property, which augments the off-diagonal blocks of the interior representation operator. The resulting block system is composed entirely of second-kind operators. A Petrov-Galerkin (mixed) discretization using Rao-Wilton-Glisson trial functions and Buffa-Christiansen test functions is employed, yielding linear systems that remain well conditioned on dense meshes and at low frequencies without the need for additional stabilization. This reduces computational costs associated with matrix-vector multiplications and iterative solving. Numerical experiments demonstrate the accuracy of the method in computing field traces and derived quantities.
|
https://arxiv.org/abs/2601.13823
|
Academic Papers
|
svg
|
4b63cee18b5c66f798f31321808f5d4072ed3e38fea0bb70c57c9e507c8e6458
|
2026-01-21T00:00:00-05:00
|
ELSA: Efficient LLM-Centric Split Aggregation for Privacy-Aware Hierarchical Federated Learning over Resource-Constrained Edge Networks
|
arXiv:2601.13824v1 Announce Type: new Abstract: Training large language models (LLMs) at the network edge faces fundamental challenges arising from device resource constraints, severe data heterogeneity, and heightened privacy risks. To address these, we propose ELSA (Efficient LLM-centric Split Aggregation), a novel framework that systematically integrates split learning (SL) and hierarchical federated learning (HFL) for distributed LLM fine-tuning over resource-constrained edge networks. ELSA introduces three key innovations. First, it employs a task-agnostic, behavior-aware client clustering mechanism that constructs semantic fingerprints using public probe inputs and symmetric KL divergence, further enhanced by prediction-consistency-based trust scoring and latency-aware edge assignment to jointly address data heterogeneity, client unreliability, and communication constraints. Second, it splits the LLM into three parts across clients and edge servers, with the cloud used only for adapter aggregation, enabling an effective balance between on-device computation cost and global convergence stability. Third, it incorporates a lightweight communication scheme based on computational sketches combined with semantic subspace orthogonal perturbation (SS-OP) to reduce communication overhead while mitigating privacy leakage during model exchanges. Experiments across diverse NLP tasks demonstrate that ELSA consistently outperforms state-of-the-art methods in terms of adaptability, convergence behavior, and robustness, establishing a scalable and privacy-aware solution for edge-side LLM fine-tuning under resource constraints.
|
https://arxiv.org/abs/2601.13824
|
Academic Papers
|
svg
|
615a0f532c23a83bf8963816375e1fc186abf79ea5a30848fe634d6f7743c70c
|
2026-01-21T00:00:00-05:00
|
MirageNet:A Secure, Efficient, and Scalable On-Device Model Protection in Heterogeneous TEE and GPU System
|
arXiv:2601.13826v1 Announce Type: new Abstract: As edge devices gain stronger computing power, deploying high-performance DNN models on untrusted hardware has become a practical approach to cut inference latency and protect user data privacy. Given high model training costs and user experience requirements, balancing model privacy and low runtime overhead is critical. TEEs offer a viable defense, and prior work has proposed heterogeneous GPU-TEE inference frameworks via parameter obfuscation to balance efficiency and confidentiality. However, recent studies find partial obfuscation defenses ineffective, while robust schemes cause unacceptable latency. To resolve these issues, we propose ConvShatter, a novel obfuscation scheme that achieves low latency and high accuracy while preserving model confidentiality and integrity. It leverages convolution linearity to decompose kernels into critical and common ones, inject confounding decoys, and permute channel/kernel orders. Pre-deployment, it performs kernel decomposition, decoy injection and order obfuscation, storing minimal recovery parameters securely in the TEE. During inference, the TEE reconstructs outputs of obfuscated convolutional layers. Extensive experiments show ConvShatter substantially reduces latency overhead with strong security guarantees; versus comparable schemes, it cuts overhead by 16% relative to GroupCover while maintaining accuracy on par with the original model.
|
https://arxiv.org/abs/2601.13826
|
Academic Papers
|
svg
|
919cb56c58de54b46c4856284432956a6d4700397bd7e533a9a37adbfc4a3ec2
|
2026-01-21T00:00:00-05:00
|
Base Station Sleeping Strategy Based on Load Sharing in Ultra-Dense Networks
|
arXiv:2601.13832v1 Announce Type: new Abstract: To address the issues of high operational costs and low energy efficiency (EE) caused by the dense deployment of small base stations (s-BSs) in 5G ultra-dense networks (UDNs), this paper first constructs a multi-objective mathematical optimization model that maximizes EE while minimizing the number of active BSs. The model incorporates key constraints including BS operational state, user equipment (UE)-BS connection relationship, and load threshold, laying a theoretical foundation for the coordinated optimization of energy conservation and quality of service. Based on this model, an integrated solution combining UE-BS initial connection optimization and load-sharing based BS sleeping is proposed. In the initial connection phase, with communication quality and BS load as dual constraints, efficient matching between UEs and optimal BSs is achieved through three sequential steps: communication feasibility screening, redundant connection removal, and overload load redistribution. This resolves the problems of load imbalance and difficult identification of redundant BSs in UDNs arising from unordered initial connections. In the BS sleeping phase, a BS sleeping index, comprehensively considering UE transferability and backup BS resources, is innovatively introduced to quantify BS dormancy priority. Through a closed-loop process involving low-load BS screening, adjacent BS load evaluation, and load sharing by two takeover BSs based on their capacity, accurate dormancy of redundant BSs and collaborative load migration are realized. Simulation results in a typical UDNs scenario demonstrate that, compared with the traditional baseline scheme, the proposed solution exhibits significant advantages in convergence speed, optimization of the number of active BSs, and EE improvement.
|
https://arxiv.org/abs/2601.13832
|
Academic Papers
|
svg
|
ecdc6978a04b3109cd97ba579999371915dd16d2a8bba77fe369bb65ec6be784
|
2026-01-21T00:00:00-05:00
|
The Role of Prosodic and Lexical Cues in Turn-Taking with Self-Supervised Speech Representations
|
arXiv:2601.13835v1 Announce Type: new Abstract: Fluid turn-taking remains a key challenge in human-robot interaction. Self-supervised speech representations (S3Rs) have driven many advances, but it remains unclear whether S3R-based turn-taking models rely on prosodic cues, lexical cues or both. We introduce a vocoder-based approach to control prosody and lexical cues in speech more cleanly than prior work. This allows us to probe the voice-activity projection model, an S3R-based turn-taking model. We find that prediction accuracy on prosody-matched, unintelligible noise is similar to that on clean speech. This reveals that both prosodic and lexical cues support turn-taking, but either can be used in isolation. Hence, future models may only require prosody, providing privacy and potential performance benefits. When either prosodic or lexical information is disrupted, the model exploits the other without further training, indicating they are encoded in S3Rs with limited interdependence. Results are consistent in CPC-based and wav2vec2.0 S3Rs. We discuss our findings and highlight a number of directions for future work. All code is available to support future research.
|
https://arxiv.org/abs/2601.13835
|
Academic Papers
|
svg
|
15e8b74bb680bb455626344c00027ba5de43a49115ee77d1b533f526b91f7a07
|
2026-01-21T00:00:00-05:00
|
FutureOmni: Evaluating Future Forecasting from Omni-Modal Context for Multimodal LLMs
|
arXiv:2601.13836v1 Announce Type: new Abstract: Although Multimodal Large Language Models (MLLMs) demonstrate strong omni-modal perception, their ability to forecast future events from audio-visual cues remains largely unexplored, as existing benchmarks focus mainly on retrospective understanding. To bridge this gap, we introduce FutureOmni, the first benchmark designed to evaluate omni-modal future forecasting from audio-visual environments. The evaluated models are required to perform cross-modal causal and temporal reasoning, as well as effectively leverage internal knowledge to predict future events. FutureOmni is constructed via a scalable LLM-assisted, human-in-the-loop pipeline and contains 919 videos and 1,034 multiple-choice QA pairs across 8 primary domains. Evaluations on 13 omni-modal and 7 video-only models show that current systems struggle with audio-visual future prediction, particularly in speech-heavy scenarios, with the best accuracy of 64.8% achieved by Gemini 3 Flash. To mitigate this limitation, we curate a 7K-sample instruction-tuning dataset and propose an Omni-Modal Future Forecasting (OFF) training strategy. Evaluations on FutureOmni and popular audio-visual and video-only benchmarks demonstrate that OFF enhances future forecasting and generalization. We publicly release all code (https://github.com/OpenMOSS/FutureOmni) and datasets (https://huggingface.co/datasets/OpenMOSS-Team/FutureOmni).
|
https://arxiv.org/abs/2601.13836
|
Academic Papers
|
svg
|
55e6189de39597bda85e774f04c214df5787bed29fb21482443634380c3eefc6
|
2026-01-21T00:00:00-05:00
|
FastGHA: Generalized Few-Shot 3D Gaussian Head Avatars with Real-Time Animation
|
arXiv:2601.13837v1 Announce Type: new Abstract: Despite recent progress in 3D Gaussian-based head avatar modeling, efficiently generating high fidelity avatars remains a challenge. Current methods typically rely on extensive multi-view capture setups or monocular videos with per-identity optimization during inference, limiting their scalability and ease of use on unseen subjects. To overcome these efficiency drawbacks, we propose FastGHA, a feed-forward method to generate high-quality Gaussian head avatars from only a few input images while supporting real-time animation. Our approach directly learns a per-pixel Gaussian representation from the input images, and aggregates multi-view information using a transformer-based encoder that fuses image features from both DINOv3 and Stable Diffusion VAE. For real-time animation, we extend the explicit Gaussian representations with per-Gaussian features and introduce a lightweight MLP-based dynamic network to predict 3D Gaussian deformations from expression codes. Furthermore, to enhance geometric smoothness of the 3D head, we employ point maps from a pre-trained large reconstruction model as geometry supervision. Experiments show that our approach significantly outperforms existing methods in both rendering quality and inference efficiency, while supporting real-time dynamic avatar animation.
|
https://arxiv.org/abs/2601.13837
|
Academic Papers
|
svg
|
98d0d342c103f634854a3c93eb00e95f684d896b5de423c618a23d8bc913f6e0
|
2026-01-21T00:00:00-05:00
|
A Predictive and Preventive Digital Twin Framework for Indoor Wireless Networks
|
arXiv:2601.13838v1 Announce Type: new Abstract: Wi-Fi networks increasingly suffer from performance degradation caused by contention-based channel access, dense deployments, and largely self-managed operation among mutually interfering access points (APs). In this paper, we propose a Digital Twin (DT) framework that captures the essential spatial and temporal characteristics of wireless channels and traffic patterns, enabling the prediction of likely future network scenarios while respecting physical constraints. Leveraging this predictive capability, we introduce two analytically derived performance upper bounds, one based on Shannon capacity and the other on latency behavior under CSMA-CA (Carrier Sense Multiple Access with Collision Avoidance), that can be evaluated efficiently without time-consuming network simulations. By applying importance sampling to DT-generated scenarios, potentially risky network conditions can be identified within large stochastic scenario spaces. These same performance bounds are then used to proactively guide a gradient-based search for improved network configurations, with the objective of avoiding imminent performance degradation rather than pursuing globally optimal but fragile solutions. Simulation results demonstrate that the proposed approach can successfully predict time-dependent network congestion and mitigate it in advance, highlighting its potential for predictive and preventive Wi-Fi network management.
|
https://arxiv.org/abs/2601.13838
|
Academic Papers
|
svg
|
b92e2d2c55815b849d81b828d1275ca2ddc8683f97a91fa09db68595cbc9914e
|
2026-01-21T00:00:00-05:00
|
DisasterVQA: A Visual Question Answering Benchmark Dataset for Disaster Scenes
|
arXiv:2601.13839v1 Announce Type: new Abstract: Social media imagery provides a low-latency source of situational information during natural and human-induced disasters, enabling rapid damage assessment and response. While Visual Question Answering (VQA) has shown strong performance in general-purpose domains, its suitability for the complex and safety-critical reasoning required in disaster response remains unclear. We introduce DisasterVQA, a benchmark dataset designed for perception and reasoning in crisis contexts. DisasterVQA consists of 1,395 real-world images and 4,405 expert-curated question-answer pairs spanning diverse events such as floods, wildfires, and earthquakes. Grounded in humanitarian frameworks including FEMA ESF and OCHA MIRA, the dataset includes binary, multiple-choice, and open-ended questions covering situational awareness and operational decision-making tasks. We benchmark seven state-of-the-art vision-language models and find performance variability across question types, disaster categories, regions, and humanitarian tasks. Although models achieve high accuracy on binary questions, they struggle with fine-grained quantitative reasoning, object counting, and context-sensitive interpretation, particularly for underrepresented disaster scenarios. DisasterVQA provides a challenging and practical benchmark to guide the development of more robust and operationally meaningful vision-language models for disaster response. The dataset is publicly available at https://zenodo.org/records/18267770.
|
https://arxiv.org/abs/2601.13839
|
Academic Papers
|
svg
|
411fc5c6143d2ff64e7f935e3a701d4834abd6704754ef638a694c1ddea1d4e1
|
2026-01-21T00:00:00-05:00
|
Robust Reversible Watermarking in Encrypted Images Based on Dual-MSBs Spiral Embedding
|
arXiv:2601.13840v1 Announce Type: new Abstract: Robust reversible watermarking in encrypted images (RRWEI) faces an inherent challenge in simultaneously achieving robustness, reversibility, and content privacy under severely constrained embedding capacity. Existing RRWEI schemes often exhibit limited robustness against noise, lossy compression, and cropping attacks due to insufficient redundancy in the encrypted domain. To address this challenge, this paper proposes a novel RRWEI framework that couples dual most significant bit-plane (dual-MSBs) embedding with spatial redundancy and error-correcting coding. By compressing prediction-error bit-planes, sufficient embedding space and auxiliary information for lossless reconstruction are reserved. The dual-MSBs are further reorganized using a spiral embedding strategy to distribute multiple redundant watermark copies across spatially dispersed regions, enhancing robustness against both noise and spatial loss. Experimental results on standard test images demonstrate that, under the evaluated settings, the proposed method consistently delivers superior robustness against Gaussian noise, JPEG compression, and diverse cropping attacks, while maintaining perfect reversibility and high embedding capacity. Compared with state-of-the-art RRWEI schemes, the proposed framework achieves substantially lower bit-error rates and more stable performance under a wide range of attack scenarios.
|
https://arxiv.org/abs/2601.13840
|
Academic Papers
|
svg
|
42c52fa6dc5a4b063a76f3e1947527822c5ea38fc4ed5a34a3c7b8d8f36df938
|
2026-01-21T00:00:00-05:00
|
Nemesis, an Escape Game in Graphs
|
arXiv:2601.13841v1 Announce Type: new Abstract: We define a new escape game in graphs that we call Nemesis. The game is played on a graph having a subset of vertices labeled as exits, and the goal of one of the two players, called the fugitive, is to reach one of these exit vertices. The second player, i.e. the fugitive's adversary, is called the Nemesis. Her goal is to trap the fugitive in a connected component which does not contain any exit. At each round of the game, the fugitive moves from one vertex to an adjacent vertex. Then the Nemesis deletes one edge anywhere in the graph. The game ends either when the fugitive has reached an exit or when he is in a connected component that does not contain any exit. In trees and graphs of maximum degree bounded by 3, Nemesis can be solved in linear time. We also show that a variant of the game called Blizzard, where only edges adjacent to the position of the fugitive can be deleted, also admits a linear time solution. For arbitrary graphs, we show that Nemesis is PSPACE-complete, and that it is NP-hard on planar multigraphs. We extend our results to the related Cat Herding problem, proving its PSPACE-completeness. We also prove that finding a strategy based on a full binary escape tree whose leaves are exits is NP-complete.
|
https://arxiv.org/abs/2601.13841
|
Academic Papers
|
svg
|
410e68b25fbd846626b01d1b5b7ffc41e339b6eb75f191b85a07c0067a322710
|
2026-01-21T00:00:00-05:00
|
Small Models, Big Impact: Tool-Augmented AI Agents for Wireless Network Planning
|
arXiv:2601.13843v1 Announce Type: new Abstract: Large Language Models (LLMs) such as ChatGPT promise revolutionary capabilities for Sixth-Generation (6G) wireless networks, but their massive computational requirements and tendency to generate technically incorrect information create deployment barriers. In this work, we introduce MAINTAINED: an autonomous artificial intelligence agent for wireless network deployment. Instead of encoding domain knowledge within model parameters, our approach orchestrates specialized computational tools for geographic analysis, signal propagation modeling, and network optimization. In a real-world case study, MAINTAINED outperforms state-of-the-art LLMs including ChatGPT-4o, Claude Sonnet 4, and DeepSeek-R1 by up to 100-fold in verified performance metrics while requiring fewer computational resources. This paradigm shift, moving from relying on parametric knowledge towards externalizing domain knowledge into verifiable computational tools, eliminates hallucination in technical specifications and enables edge-deployable Artificial Intelligence (AI) for wireless communications.
|
https://arxiv.org/abs/2601.13843
|
Academic Papers
|
svg
|
a7406be875b81795c594fd79a5aaf0d9727e23e9981d9a76f024f2bdf8df7c7c
|
2026-01-21T00:00:00-05:00
|
Optimal L2 Regularization in High-dimensional Continual Linear Regression
|
arXiv:2601.13844v1 Announce Type: new Abstract: We study generalization in an overparameterized continual linear regression setting, where a model is trained with L2 (isotropic) regularization across a sequence of tasks. We derive a closed-form expression for the expected generalization loss in the high-dimensional regime that holds for arbitrary linear teachers. We demonstrate that isotropic regularization mitigates label noise under both single-teacher and multiple i.i.d. teacher settings, whereas prior work accommodating multiple teachers either did not employ regularization or used memory-demanding methods. Furthermore, we prove that the optimal fixed regularization strength scales nearly linearly with the number of tasks $T$, specifically as $T/\ln T$. To our knowledge, this is the first such result in theoretical continual learning. Finally, we validate our theoretical findings through experiments on linear regression and neural networks, illustrating how this scaling law affects generalization and offering a practical recipe for the design of continual learning systems.
|
https://arxiv.org/abs/2601.13844
|
Academic Papers
|
svg
|
a0ef9e870f3004165d214832874dc3eaba5a668de3358e2876729cdc96c91145
|
2026-01-21T00:00:00-05:00
|
Virtual Urbanism: An AI-Driven Framework for Quantifying Urban Identity. A Tokyo-Based Pilot Study Using Diffusion-Generated Synthetic Environments
|
arXiv:2601.13846v1 Announce Type: new Abstract: This paper introduces Virtual Urbanism (VU), a multimodal AI-driven analytical framework for quantifying urban identity through the medium of synthetic urban replicas. The framework aims to advance computationally tractable urban identity metrics. To demonstrate feasibility, the pilot study Virtual Urbanism and Tokyo Microcosms is presented. A pipeline integrating Stable Diffusion and LoRA models was used to produce synthetic replicas of nine Tokyo areas rendered as dynamic synthetic urban sequences, excluding existing orientation markers to elicit core identity-forming elements. Human-evaluation experiments (I) assessed perceptual legitimacy of replicas; (II) quantified area-level identity; (III) derived core identity-forming elements. Results showed a mean identification accuracy of ~81%, confirming the validity of the replicas. Urban Identity Level (UIL) metric enabled assessment of identity levels across areas, while semantic analysis revealed culturally embedded typologies as core identity-forming elements, positioning VU as a viable framework for AI-augmented urban analysis, outlining a path toward automated, multi-parameter identity metrics.
|
https://arxiv.org/abs/2601.13846
|
Academic Papers
|
svg
|
85f90c74dd6f9e1ae8667310dc88b6fd5920688c9482c1f3f3f3df6b2be908e7
|
2026-01-21T00:00:00-05:00
|
Emotion and Acoustics Should Agree: Cross-Level Inconsistency Analysis for Audio Deepfake Detection
|
arXiv:2601.13847v1 Announce Type: new Abstract: Audio Deepfake Detection (ADD) aims to distinguish spoof speech from bonafide speech. Most prior studies assume that stronger correlations within or across acoustic and emotional features imply authenticity, and thus focus on enhancing or measuring such correlations. However, existing methods often treat acoustic and emotional features in isolation or rely on correlation metrics, which overlook subtle desynchronization between them and smooth out abrupt discontinuities. To address these issues, we propose EAI-ADD, which treats cross-level emotion-acoustic inconsistency as the primary detection signal. We first project emotional and acoustic representations into a comparable space. Then we progressively integrate frame-level and utterance-level emotion features with acoustic features to capture cross-level emotion-acoustic inconsistencies across different temporal granularities. Experimental results on the ASVspoof 2019LA and 2021LA datasets demonstrate that the proposed EAI-ADD outperforms baselines, providing a more effective solution for audio anti-spoofing detection.
|
https://arxiv.org/abs/2601.13847
|
Academic Papers
|
svg
|
d93ef59619cb00c9724517a39952809c840ce99932cb74a69dbc412253d5b0d2
|
2026-01-21T00:00:00-05:00
|
Inverting Self-Organizing Maps: A Unified Activation-Based Framework
|
arXiv:2601.13851v1 Announce Type: new Abstract: Self-Organizing Maps provide topology-preserving projections of high-dimensional data and have been widely used for visualization, clustering, and vector quantization. In this work, we show that the activation pattern of a SOM - the squared distances to its prototypes - can be inverted to recover the exact input under mild geometric conditions. This follows from a classical fact in Euclidean distance geometry: a point in $D$ dimensions is uniquely determined by its distances to $D{+}1$ affinely independent references. We derive the corresponding linear system and characterize the conditions under which the inversion is well-posed. Building upon this mechanism, we introduce the Manifold-Aware Unified SOM Inversion and Control (MUSIC) update rule, which enables controlled, semantically meaningful trajectories in latent space. MUSIC modifies squared distances to selected prototypes while preserving others, resulting in a deterministic geometric flow aligned with the SOM's piecewise-linear structure. Tikhonov regularization stabilizes the update rule and ensures smooth motion on high-dimensional datasets. Unlike variational or probabilistic generative models, MUSIC does not rely on sampling, latent priors, or encoder-decoder architectures. If no perturbation is applied, inversion recovers the exact input; when a target cluster or prototype is specified, MUSIC produces coherent semantic variations while remaining on the data manifold. This leads to a new perspective on data augmentation and controllable latent exploration based solely on prototype geometry. We validate the approach using synthetic Gaussian mixtures, the MNIST and the Faces in the Wild dataset. Across all settings, MUSIC produces smooth, interpretable trajectories that reveal the underlying geometry of the learned manifold, illustrating the advantages of SOM-based inversion over unsupervised clustering.
|
https://arxiv.org/abs/2601.13851
|
Academic Papers
|
svg
|
49fbe7bf112102e3cdb4e2c3d39f9ca3864f39a05a4fb81ffc80cd49fc04f8f8
|
2026-01-21T00:00:00-05:00
|
Probabilistic Deep Discriminant Analysis for Wind Blade Segmentation
|
arXiv:2601.13852v1 Announce Type: new Abstract: Linear discriminant analysis improves class separability but struggles with non-linearly separable data. To overcome this, we introduce Deep Discriminant Analysis (DDA), which directly optimizes the Fisher criterion utilizing deep networks. To ensure stable training and avoid computational instabilities, we incorporate signed between-class variance, bound outputs with a sigmoid function, and convert multiplicative relationships into additive ones. We present two stable DDA loss functions and augment them with a probability loss, resulting in Probabilistic DDA (PDDA). PDDA effectively minimizes class overlap in output distributions, producing highly confident predictions with reduced within-class variance. When applied to wind blade segmentation, PDDA showcases notable advances in performance and consistency, critical for wind energy maintenance. To our knowledge, this is the first application of DDA to image segmentation.
|
https://arxiv.org/abs/2601.13852
|
Academic Papers
|
svg
|
c4609f998d9f94d97204e7cd9d1dc45fa305b8577268442c7636799eb885e79d
|
2026-01-21T00:00:00-05:00
|
Question-Focused Filtering for Knowledge-based VQA
|
arXiv:2601.13856v1 Announce Type: new Abstract: Knowledge-based Visual Question Answering (KB-VQA) aims to answer questions by integrating images with external knowledge. Effective knowledge filtering is crucial for improving accuracy. Typical filtering methods use similarity metrics to locate relevant article sections from one article, leading to information selection errors at the article and intra-article levels. Although recent explorations of Multimodal Large Language Model (MLLM)-based filtering methods demonstrate superior semantic understanding and cross-article filtering capabilities, their high computational cost limits practical application. To address these issues, this paper proposes a question-focused filtering method. This approach can perform question-focused, cross-article filtering, efficiently obtaining high-quality filtered knowledge while keeping computational costs comparable to typical methods. Specifically, we design a trainable Question-Focused Filter (QFF) and a Chunk-based Dynamic Multi-Article Selection (CDA) module, which collectively alleviate information selection errors at both the article and intra-article levels. Experiments show that our method outperforms current state-of-the-art models by 4.9% on E-VQA and 3.8% on InfoSeek, validating its effectiveness. The code is publicly available at: https://github.com/leaffeall/QKVQA.
|
https://arxiv.org/abs/2601.13856
|
Academic Papers
|
svg
|
c80faa5abcd240ec809f4a127633cfadbf336c6056e3c73ad0ad477829304641
|
2026-01-21T00:00:00-05:00
|
Designing Drone Interfaces to Assist Pedestrians Crossing Non-Signalised Roads
|
arXiv:2601.13858v1 Announce Type: new Abstract: Recent research highlights the potential of drones to enhance pedestrian experiences, such as aiding navigation and supporting street-level activities. This paper explores the design of drone interfaces to assist pedestrians crossing dangerous roads without designated crosswalks or traffic lights, leveraging drones' ability to monitor and analyse real-time traffic data. Inspired by existing traffic signal systems, the interface communicates safety information through permissive alerts, prohibitive warnings, directional warnings, and collision emergency warnings. These safety cues were integrated into drone interfaces using in-situ projections and drone-equipped screens through an iterative design process. A mixed-methods, within-subjects VR evaluation (n=18) revealed that drone-assisted systems significantly improved pedestrian safety experiences and reduced mental workload compared to a baseline without any crossing aid, with projections outperforming screens. The findings suggest the potential for drone interfaces to be integrated into connected traffic systems. We also offer design recommendations for developing drone interfaces that support safe pedestrian crossings.
|
https://arxiv.org/abs/2601.13858
|
Academic Papers
|
svg
|
abf08b187c2f5ee6c06c0f0f6afddeebce212719cc67d6d5a6cb7cf494621de8
|
2026-01-21T00:00:00-05:00
|
HardSecBench: Benchmarking the Security Awareness of LLMs for Hardware Code Generation
|
arXiv:2601.13864v1 Announce Type: new Abstract: Large language models (LLMs) are being increasingly integrated into practical hardware and firmware development pipelines for code generation. Existing studies have primarily focused on evaluating the functional correctness of LLM-generated code, yet paid limited attention to its security issues. However, LLM-generated code that appears functionally sound may embed security flaws which could induce catastrophic damages after deployment. This critical research gap motivates us to design a benchmark for assessing security awareness under realistic specifications. In this work, we introduce HardSecBench, a benchmark with 924 tasks spanning Verilog Register Transfer Level (RTL) and firmware-level C, covering 76 hardware-relevant Common Weakness Enumeration (CWE) entries. Each task includes a structured specification, a secure reference implementation, and executable tests. To automate artifact synthesis, we propose a multi-agent pipeline that decouples synthesis from verification and grounds evaluation in execution evidence, enabling reliable evaluation. Using HardSecBench, we evaluate a range of LLMs on hardware and firmware code generation and find that models often satisfy functional requirements while still leaving security risks. We also find that security results vary with prompting. These findings highlight pressing challenges and offer actionable insights for future advancements in LLM-assisted hardware design. Our data and code will be released soon.
|
https://arxiv.org/abs/2601.13864
|
Academic Papers
|
svg
|
05ef0daa5db5c0ae9e10d67ccced0f23a93a668ac60c165c3e99e237544c55e5
|
2026-01-21T00:00:00-05:00
|
Understanding Human-Multi-Agent Team Formation for Creative Work
|
arXiv:2601.13865v1 Announce Type: new Abstract: Team-based collaboration is a cornerstone of modern creative work. Recent advances in generative AI open possibilities for humans to collaborate with multiple AI agents in distinct roles to address complex creative workflows. Yet, how to form Human-Multi-Agent Teams (HMATs) is underexplored, especially given that inter-agent interactions increase complexity and the risk of unexpected behaviors. In this exploratory study, we aim to understand how to form HMATs for creative work using CrafTeam, a technology probe that allows users to form and collaborate with their teams. We conducted a study with 12 design practitioners, in which participants iterated through a three-step cycle: forming HMATs, ideating with their teams, and reflecting on their teams' ideation. Our findings reveal that while participants initially attempted autonomous team operations, they ultimately adopted team formations in which they directly orchestrated agents. We discuss design considerations for HMAT formation that enable humans to effectively orchestrate multiple agents.
|
https://arxiv.org/abs/2601.13865
|
Academic Papers
|
svg
|
9b42d6b0dfb06cdcebfff3d88f5b1c4ef64bbeb401a7e4e39c6f9247cbd7a5aa
|
2026-01-21T00:00:00-05:00
|
OCCAM: Class-Agnostic, Training-Free, Prior-Free and Multi-Class Object Counting
|
arXiv:2601.13871v1 Announce Type: new Abstract: Class-Agnostic object Counting (CAC) involves counting instances of objects from arbitrary classes within an image. Due to its practical importance, CAC has received increasing attention in recent years. Most existing methods assume a single object class per image, rely on extensive training of large deep learning models and address the problem by incorporating additional information, such as visual exemplars or text prompts. In this paper, we present OCCAM, the first training-free approach to CAC that operates without the need of any supplementary information. Moreover, our approach addresses the multi-class variant of the problem, as it is capable of counting the object instances in each and every class among arbitrary object classes within an image. We leverage Segment Anything Model 2 (SAM2), a foundation model, and a custom threshold-based variant of the First Integer Neighbor Clustering Hierarchy (FINCH) algorithm to achieve competitive performance on widely used benchmark datasets, FSC-147 and CARPK. We propose a synthetic multi-class dataset and F1 score as a more suitable evaluation metric. The code for our method and the proposed synthetic dataset will be made publicly available at https://mikespanak.github.io/OCCAM_counter.
|
https://arxiv.org/abs/2601.13871
|
Academic Papers
|
svg
|
4956b617d8e01abfbe145e12eb4728cdd85301eff6d2a746fde762e1d216b358
|
2026-01-21T00:00:00-05:00
|
Enhanced Cyber Threat Intelligence by Network Forensic Analysis for Ransomware as a Service(RaaS) Malwares
|
arXiv:2601.13873v1 Announce Type: new Abstract: In the current era of interconnected cyberspace, ransomware has an adverse effect on individuals, startups, and large companies. Cybercriminals hold digital assets hostage until a payment demand is met. The success of ransomware surged with the introduction of the Ransomware as a Service (RaaS) franchise model in the darknet market. The obfuscation and polymorphic nature of malware make it more difficult for antivirus systems to identify. Signature-based intrusion detection remains in use but suffers from the scarcity of RaaS packet signatures. We have analysed RaaS samples using a network forensic approach, investigating packet captures of benign and malicious network traffic. The behavior of the RaaS-family ransomwares Ryuk and GandCrab has been investigated to classify packets as suspicious, malicious, or non-malicious, which further aids in generating RaaS packet signatures for early detection and mitigation of ransomware belonging to the RaaS family. More than 40\% of the packets were found to be malicious in this experiment. The proposed method is also verified via the VirusTotal API. Further, we recommend integrating the proposed approach into honeypots to combat the scarcity of RaaS malware samples. The resulting data will be helpful in developing AI-based threat intelligence mechanisms, in turn enhancing threat detection and prevention, incident response, and risk assessment.
|
https://arxiv.org/abs/2601.13873
|
Academic Papers
|
svg
|
4bf3395a515f47b65d133f03fff5e5f0b5054037e1e0569f5b992575b22c6910
|
2026-01-21T00:00:00-05:00
|
Pedagogical Alignment for Vision-Language-Action Models: A Comprehensive Framework for Data, Architecture, and Evaluation in Education
|
arXiv:2601.13876v1 Announce Type: new Abstract: Science demonstrations are important for effective STEM education, yet teachers face challenges in conducting them safely and consistently across multiple occasions, where robotics can be helpful. However, current Vision-Language-Action (VLA) models require substantial computational resources and sacrifice language generation capabilities to maximize efficiency, making them unsuitable for resource-constrained educational settings that require interpretable, explanation-generating systems. We present \textit{Pedagogical VLA Framework}, a framework that applies pedagogical alignment to lightweight VLA models through four components: text healing to restore language generation capabilities, large language model (LLM) distillation to transfer pedagogical knowledge, safety training for educational environments, and pedagogical evaluation adjusted to science education contexts. We evaluate Pedagogical VLA Framework across five science demonstrations spanning physics, chemistry, biology, and earth science, using an evaluation framework developed in collaboration with science education experts. Our evaluation assesses both task performance (success rate, protocol compliance, efficiency, safety) and pedagogical quality through teacher surveys and LLM-as-Judge assessment. We additionally provide qualitative analysis of generated texts. Experimental results demonstrate that Pedagogical VLA Framework achieves comparable task performance to baseline models while producing contextually appropriate educational explanations.
|
https://arxiv.org/abs/2601.13876
|
Academic Papers
|
svg
|
8263b8ccf23bbdbef2c99e5b472ebd52c13315b096cac7c46b4241fe836c944b
|
2026-01-21T00:00:00-05:00
|
Chain-of-Thought Compression Should Not Be Blind: V-Skip for Efficient Multimodal Reasoning via Dual-Path Anchoring
|
arXiv:2601.13879v1 Announce Type: new Abstract: While Chain-of-Thought (CoT) reasoning significantly enhances the performance of Multimodal Large Language Models (MLLMs), its autoregressive nature incurs prohibitive latency constraints. Current efforts to mitigate this via token compression often fail by blindly applying text-centric metrics to multimodal contexts. We identify a critical failure mode termed Visual Amnesia, where linguistically redundant tokens are erroneously pruned, leading to hallucinations. To address this, we introduce V-Skip that reformulates token pruning as a Visual-Anchored Information Bottleneck (VA-IB) optimization problem. V-Skip employs a dual-path gating mechanism that weighs token importance through both linguistic surprisal and cross-modal attention flow, effectively rescuing visually salient anchors. Extensive experiments on Qwen2-VL and Llama-3.2 families demonstrate that V-Skip achieves a $2.9\times$ speedup with negligible accuracy loss. Specifically, it preserves fine-grained visual details, outperforming other baselines over 30\% on the DocVQA.
|
https://arxiv.org/abs/2601.13879
|
Academic Papers
|
svg
|
724cb7350501d47c73fe51519b4d0e80177e2f40b53796c8168b869ce16bc65a
|
2026-01-21T00:00:00-05:00
|
LifeAgentBench: A Multi-dimensional Benchmark and Agent for Personal Health Assistants in Digital Health
|
arXiv:2601.13880v1 Announce Type: new Abstract: Personalized digital health support requires long-horizon, cross-dimensional reasoning over heterogeneous lifestyle signals, and recent advances in mobile sensing and large language models (LLMs) make such support increasingly feasible. However, the capabilities of current LLMs in this setting remain unclear due to the lack of systematic benchmarks. In this paper, we introduce LifeAgentBench, a large-scale QA benchmark for long-horizon, cross-dimensional, and multi-user lifestyle health reasoning, containing 22,573 questions spanning from basic retrieval to complex reasoning. We release an extensible benchmark construction pipeline and a standardized evaluation protocol to enable reliable and scalable assessment of LLM-based health assistants. We then systematically evaluate 11 leading LLMs on LifeAgentBench and identify key bottlenecks in long-horizon aggregation and cross-dimensional reasoning. Motivated by these findings, we propose LifeAgent as a strong baseline agent for health assistant that integrates multi-step evidence retrieval with deterministic aggregation, achieving significant improvements compared with two widely used baselines. Case studies further demonstrate its potential in realistic daily-life scenarios. The benchmark is publicly available at https://anonymous.4open.science/r/LifeAgentBench-CE7B.
|
https://arxiv.org/abs/2601.13880
|
Academic Papers
|
svg
|
9b50818b5a808362d0c2b4e7dbcc094f59e62b4dbe8cf55231203e65b94dedf1
|
2026-01-21T00:00:00-05:00
|
OpenLearnLM Benchmark: A Unified Framework for Evaluating Knowledge, Skill, and Attitude in Educational Large Language Models
|
arXiv:2601.13882v1 Announce Type: new Abstract: Large Language Models are increasingly deployed as educational tools, yet existing benchmarks focus on narrow skills and lack grounding in learning sciences. We introduce OpenLearnLM Benchmark, a theory-grounded framework evaluating LLMs across three dimensions derived from educational assessment theory: Knowledge (curriculum-aligned content and pedagogical understanding), Skills (scenario-based competencies organized through a four-level center-role-scenario-subscenario hierarchy), and Attitude (alignment consistency and deception resistance). Our benchmark comprises 124K+ items spanning multiple subjects, educational roles, and difficulty levels based on Bloom's taxonomy. The Knowledge domain prioritizes authentic assessment items from established benchmarks, while the Attitude domain adapts Anthropic's Alignment Faking methodology to detect behavioral inconsistency under varying monitoring conditions. Evaluation of seven frontier models reveals distinct capability profiles: Claude-Opus-4.5 excels in practical skills despite lower content knowledge, while Grok-4.1-fast leads in knowledge but shows alignment concerns. Notably, no single model dominates all dimensions, validating the necessity of multi-axis evaluation. OpenLearnLM provides an open, comprehensive framework for advancing LLM readiness in authentic educational contexts.
|
https://arxiv.org/abs/2601.13882
|
Academic Papers
|
svg
|
6e67ee50ba71d8e49334ae06844c9a7170616675c8ff5e427a319350380bd83e
|
2026-01-21T00:00:00-05:00
|
Constrained MARL for Coexisting TN-NTN Resource Allocation: Scalability and Flexibility
|
arXiv:2601.13883v1 Announce Type: new Abstract: This paper considers the joint TN-NTN constrained resource allocation, where terrestrial base stations and non-terrestrial base stations coexist in the spectrum. We focus on large-scale and practical scenarios characterized by large numbers of transmission channels and users, alongside highly dynamic user behaviors. As common learning solutions fail to address these challenges, we propose a decomposition solution based on the special properties of the cross-segment interference, and then tackle the original problem via solving subproblems in a sequential learning manner. Furthermore, to enhance the flexibility of the learned policies, we design a stochastic training environment that captures the key characteristics of real-world systems. Simulation results tested on the full 20MHz bandwidth with various numerologies show that our solution significantly improves scalability compared to existing solutions and remains robust in highly dynamic scenarios.
|
https://arxiv.org/abs/2601.13883
|
Academic Papers
|
svg
|
e031b68bbc89b26000f18ca171b4ba799e5379e13d52dc7e41bd1d808f4bd885
|
2026-01-21T00:00:00-05:00
|
Confident Rankings with Fewer Items: Adaptive LLM Evaluation with Continuous Scores
|
arXiv:2601.13885v1 Announce Type: new Abstract: Computerized Adaptive Testing (CAT) has proven effective for efficient LLM evaluation on multiple-choice benchmarks, but modern LLM evaluation increasingly relies on generation tasks where outputs are scored continuously rather than marked correct/incorrect. We present a principled extension of IRT-based adaptive testing to continuous bounded scores (ROUGE, BLEU, LLM-as-a-Judge) by replacing the Bernoulli response distribution with a heteroskedastic normal distribution. Building on this, we introduce an uncertainty aware ranker with adaptive stopping criteria that achieves reliable model ranking while testing as few items and as cheaply as possible. We validate our method on five benchmarks spanning n-gram-based, embedding-based, and LLM-as-judge metrics. Our method uses 2% of the items while improving ranking correlation by 0.12 {\tau} over random sampling, with 95% accuracy on confident predictions.
|
https://arxiv.org/abs/2601.13885
|
Academic Papers
|
svg
|
0c86274ccbe0f021ac15dde892e934d912602229d2fe2ba6ed80406cbc72cb3c
|
2026-01-21T00:00:00-05:00
|
Revisiting Multi-Task Visual Representation Learning
|
arXiv:2601.13886v1 Announce Type: new Abstract: Current visual representation learning remains bifurcated: vision-language models (e.g., CLIP) excel at global semantic alignment but lack spatial precision, while self-supervised methods (e.g., MAE, DINO) capture intricate local structures yet struggle with high-level semantic context. We argue that these paradigms are fundamentally complementary and can be integrated into a principled multi-task framework, further enhanced by dense spatial supervision. We introduce MTV, a multi-task visual pretraining framework that jointly optimizes a shared backbone across vision-language contrastive, self-supervised, and dense spatial objectives. To mitigate the need for manual annotations, we leverage high-capacity "expert" models -- such as Depth Anything V2 and OWLv2 -- to synthesize dense, structured pseudo-labels at scale. Beyond the framework, we provide a systematic investigation into the mechanics of multi-task visual learning, analyzing: (i) the marginal gain of each objective, (ii) task synergies versus interference, and (iii) scaling behavior across varying data and model scales. Our results demonstrate that MTV achieves "best-of-both-worlds" performance, significantly enhancing fine-grained spatial reasoning without compromising global semantic understanding. Our findings suggest that multi-task learning, fueled by high-quality pseudo-supervision, is a scalable path toward more general visual encoders.
|
https://arxiv.org/abs/2601.13886
|
Academic Papers
|
svg
|
a5973ca456f2ee4ae4f46795069b3f4127d078ae8594c08eaad4d8d48fd2b93a
|
2026-01-21T00:00:00-05:00
|
Human Simulation Computation: A Human-Inspired Framework for Adaptive AI Systems
|
arXiv:2601.13887v1 Announce Type: new Abstract: Large language models (LLMs) have demonstrated strong capabilities in knowledge representation and reasoning based on textual data. However, their reliance on language material alone limits their ability to adapt, verify reasoning outcomes, and operate effectively in open and dynamic real-world environments. In this paper, we propose Human Simulation Computation (HSC), a human-inspired computational framework that models intelligence as a continuous, closed-loop process involving thinking, action, learning, reflection, and activity scheduling, collectively referred to as the internal reasoning process. HSC emphasizes active participation both within the internal reasoning process and in interactions with the environment, where actions are used not only to achieve goals but also to automatically refine and improve internal reasoning mechanisms without external intervention. Furthermore, HSC incorporates commonly used human thinking strategies across all stages of the internal reasoning process, such as main-feature-oriented reasoning, scope expansion through action, and on-time learning driven by environmental feedback. Through theoretical analysis, we argue that human simulation strategies cannot be fully learned from language material alone, and that human-like reasoning processes and action-grounded reasoning methods are essential for robust adaptation and effective interaction with real-world environments.
|
https://arxiv.org/abs/2601.13887
|
Academic Papers
|
svg
|
ed30ae36330ab1a5056bd7c89493c45379c2d83084f449b0f6c4201c168da2ee
|
2026-01-21T00:00:00-05:00
|
Towards Inclusive External Human-Machine Interface: Exploring the Effects of Visual and Auditory eHMI for Deaf and Hard-of-Hearing People
|
arXiv:2601.13889v1 Announce Type: new Abstract: External Human-Machine Interfaces (eHMIs) have been proposed to facilitate communication between Automated Vehicles (AVs) and pedestrians. However, no attention has been given to Deaf and Hard-of-Hearing (DHH) people. We conducted a formative study through focus groups with 6 DHH people and 6 key stakeholders (including researchers, assistive technologists, and automotive interface designers) to compare proposed eHMIs and extract key design requirements. Subsequently, we investigated the effects of visual and auditory eHMIs in a virtual reality user study with 32 participants (16 DHH). Results from our scenario suggest that (1) DHH participants spent more time looking at the AV; (2) both visual and auditory eHMIs enhanced trust, usefulness, and perceived safety; and (3) only visual eHMIs reduced the time to step into the road, time looking at the AV, gaze time, and percentage looking at active visual eHMI components. Lastly, we provide five practical implications for making eHMIs inclusive of DHH people.
|
https://arxiv.org/abs/2601.13889
|
Academic Papers
|
svg
|
afe6b6e90780d15418cba661c6cf2af174c49b63578518da5239254501e8355f
|
2026-01-21T00:00:00-05:00
|
Multi-Objective Hierarchical Optimization with Large Language Models
|
arXiv:2601.13892v1 Announce Type: new Abstract: Despite their widespread adoption in various domains, especially due to their powerful reasoning capabilities, Large Language Models (LLMs) are not the off-the-shelf choice to drive multi-objective optimization yet. Conventional strategies rank high in benchmarks due to their intrinsic capabilities to handle numerical inputs and careful modelling choices that balance exploration and Pareto-front exploitation, as well as handle multiple (conflicting) objectives. In this paper, we close this gap by leveraging LLMs as surrogate models and candidate samplers inside a structured hierarchical search strategy. By adaptively partitioning the input space into disjoint hyperrectangular regions and ranking them with a composite score function, we restrict the generative process of the LLM to specific, high-potential sub-spaces, hence making the problem easier to solve as the LLM doesn't have to reason about the global structure of the problem, but only locally instead. We show that under standard regularity assumptions, our algorithm generates candidate solutions that converge to the true Pareto set in Hausdorff distance. Empirically, it consistently outperforms the global LLM-based multi-objective optimizer and is on par with standard evolutionary and Bayesian optimization algorithm on synthetic and real-world benchmarks.
|
https://arxiv.org/abs/2601.13892
|
Academic Papers
|
svg
|
e1b02cdbf392c63046ffb262a8c7e420cd5015429f3cf085430d6deed80ccf62
|
2026-01-21T00:00:00-05:00
|
Multi-Location Software Model Completion
|
arXiv:2601.13894v1 Announce Type: new Abstract: In model-driven engineering and beyond, software models are key development artifacts. In practice, they often grow to substantial size and complexity, undergoing thousands of modifications over time due to evolution, refactoring, and maintenance. The rise of AI has sparked interest in how software modeling activities can be automated. Recently, LLM-based approaches for software model completion have been proposed, however, the state of the art supports only single-location model completion by predicting changes at a specific location. Going beyond, we aim to bridge the gap toward handling coordinated changes that span multiple locations across large, complex models. Specifically, we propose a novel global embedding-based next focus predictor, NextFocus, which is capable of multi-location model completion for the first time. The predictor consists of a neural network with an attention mechanism that is trained on historical software model evolution data. Starting from an existing change, it predicts further model elements to change, potentially spanning multiple parts of the model. We evaluate our approach on multi-location model changes that have actually been performed by developers in real-world projects. NextFocus achieves promising results for multi-location model completion, even when changes are heavily spread across the model. It achieves an average Precision@k score of 0.98 for $k \leq 10$, significantly outperforming the three baseline approaches.
|
https://arxiv.org/abs/2601.13894
|
Academic Papers
|
svg
|
0922ca3338715b28202bb748e28c3e143dde6c4e34e5c8f26dee7075f2c82b37
|
2026-01-21T00:00:00-05:00
|
OmniOVCD: Streamlining Open-Vocabulary Change Detection with SAM 3
|
arXiv:2601.13895v1 Announce Type: new Abstract: Change Detection (CD) is a fundamental task in remote sensing. It monitors the evolution of land cover over time. Based on this, Open-Vocabulary Change Detection (OVCD) introduces a new requirement. It aims to reduce the reliance on predefined categories. Existing training-free OVCD methods mostly use CLIP to identify categories. These methods also need extra models like DINO to extract features. However, combining different models often causes problems in matching features and makes the system unstable. Recently, the Segment Anything Model 3 (SAM 3) was introduced. It integrates segmentation and identification capabilities within one promptable model, which offers new possibilities for the OVCD task. In this paper, we propose OmniOVCD, a standalone framework designed for OVCD. By leveraging the decoupled output heads of SAM 3, we propose a Synergistic Fusion to Instance Decoupling (SFID) strategy. SFID first fuses the semantic, instance, and presence outputs of SAM 3 to construct land-cover masks, and then decomposes them into individual instance masks for change comparison. This design preserves high accuracy in category recognition and maintains instance-level consistency across images. As a result, the model can generate accurate change masks. Experiments on four public benchmarks (LEVIR-CD, WHU-CD, S2Looking, and SECOND) demonstrate SOTA performance, achieving IoU scores of 67.2, 66.5, 24.5, and 27.1 (class-average), respectively, surpassing all previous methods.
|
https://arxiv.org/abs/2601.13895
|
Academic Papers
|
svg
|
27f026ed40a74929e89ee716587ab6c50ebb445d4778fa897c0a8532d2bd991b
|
2026-01-21T00:00:00-05:00
|
TractRLFusion: A GPT-Based Multi-Critic Policy Fusion Framework for Fiber Tractography
|
arXiv:2601.13897v1 Announce Type: new Abstract: Tractography plays a pivotal role in the non-invasive reconstruction of white matter fiber pathways, providing vital information on brain connectivity and supporting precise neurosurgical planning. Although traditional methods relied mainly on classical deterministic and probabilistic approaches, recent progress has benefited from supervised deep learning (DL) and deep reinforcement learning (DRL) to improve tract reconstruction. A persistent challenge in tractography is accurately reconstructing white matter tracts while minimizing spurious connections. To address this, we propose TractRLFusion, a novel GPT-based policy fusion framework that integrates multiple RL policies through a data-driven fusion strategy. Our method employs a two-stage training data selection process for effective policy fusion, followed by a multi-critic fine-tuning phase to enhance robustness and generalization. Experiments on HCP, ISMRM, and TractoInferno datasets demonstrate that TractRLFusion outperforms individual RL policies as well as state-of-the-art classical and DRL methods in accuracy and anatomical reliability.
|
https://arxiv.org/abs/2601.13897
|
Academic Papers
|
svg
|
a3c658283c04ac8a4ef624d0a8cae9018d0cacb76b85b628051d323aebe360fc
|
2026-01-21T00:00:00-05:00
|
Towards Visually Explaining Statistical Tests with Applications in Biomedical Imaging
|
arXiv:2601.13899v1 Announce Type: new Abstract: Deep neural two-sample tests have recently shown strong power for detecting distributional differences between groups, yet their black-box nature limits interpretability and practical adoption in biomedical analysis. Moreover, most existing post-hoc explainability methods rely on class labels, making them unsuitable for label-free statistical testing settings. We propose an explainable deep statistical testing framework that augments deep two-sample tests with sample-level and feature-level explanations, revealing which individual samples and which input features drive statistically significant group differences. Our method highlights which image regions and which individual samples contribute most to the detected group difference, providing spatial and instance-wise insight into the test's decision. Applied to biomedical imaging data, the proposed framework identifies influential samples and highlights anatomically meaningful regions associated with disease-related variation. This work bridges statistical inference and explainable AI, enabling interpretable, label-free population analysis in medical imaging.
|
https://arxiv.org/abs/2601.13899
|
Academic Papers
|
svg
|
6da70f9c45f8125e2278d3ee09ff346c598fdfe18bb0f38627d229ae4e0a985b
|
2026-01-21T00:00:00-05:00
|
Mathematical and computational perspectives on the Boolean and binary rank and their relation to the real rank
|
arXiv:2601.13900v1 Announce Type: new Abstract: This survey provides a comprehensive overview of the study of the binary and Boolean rank from both a mathematical and a computational perspective, with particular emphasis on their relationship to the real rank. We review the basic definitions of these rank functions and present the main alternative formulations of the binary and Boolean rank, together with their computational complexity and their deep connection to the field of communication complexity. We summarize key techniques used to establish lower and upper bounds on the binary and Boolean rank, including methods from linear algebra, combinatorics and graph theory, isolation sets, the probabilistic method, kernelization, communication protocols and the query to communication lifting technique. Furthermore, we highlight the main mathematical properties of these ranks in comparison with those of the real rank, and discuss several non-trivial bounds on the rank of specific families of matrices. Finally, we present algorithmic approaches for computing and approximating these rank functions, such as parameterized algorithms, approximation algorithms, property testing and approximate Boolean matrix factorization (BMF). Together, the results presented outline the current theoretical knowledge in this area and suggest directions for further research.
|
https://arxiv.org/abs/2601.13900
|
Academic Papers
|
svg
|
6e2706d95d9e787eac24b66d9293303a7c9dc4f86a78c657ffc4accec763eb2d
|
2026-01-21T00:00:00-05:00
|
Know Your Contract: Extending eIDAS Trust into Public Blockchains
|
arXiv:2601.13903v1 Announce Type: new Abstract: Public blockchains lack native mechanisms to attribute on-chain actions to legally accountable entities, creating a fundamental barrier to institutional adoption and regulatory compliance. This paper presents an architecture that extends the European Union eIDAS trust framework into public blockchain ecosystems by cryptographically binding smart contracts to qualified electronic seals issued by Qualified Trust Service Providers. The mechanism establishes a verifiable chain of trust from the European Commission List of Trusted Lists to individual on-chain addresses, enabling machine-verifiable proofs for automated regulatory validation, such as Know Your Contract, Counterparty, and Business checks, without introducing new trusted intermediaries. Regulatory requirements arising from eIDAS, MiCA, PSD2, PSR, and the proposed European Business Wallet are analyzed, and a cryptographic suite meeting both eIDAS implementing regulations and EVM execution constraints following the Ethereum Fusaka upgrade is identified, namely ECDSA with P-256 and CAdES formatting. Two complementary trust validation models are presented: an off-chain workflow for agent-to-agent payment protocols and a fully on-chain workflow enabling regulatory-compliant DeFi operations between legal entities. The on-chain model converts regulatory compliance from a per-counterparty administrative burden into an automated, standardized process, enabling mutual validation at first interaction without prior business relationships. As eIDAS wallets become mandatory across EU member states, the proposed architecture provides a pathway for integrating European digital trust infrastructure into blockchain-based systems, enabling institutional DeFi participation, real-world asset tokenization, and agentic commerce within a trusted, regulatory-compliant framework.
|
https://arxiv.org/abs/2601.13903
|
Academic Papers
|
svg
|