| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 41625bd694c8fcee185946b276a8822dd1b53a344bb1e5e16735d16011248700 | 2026-01-13T00:00:00-05:00 | Near-Optimal Private Linear Regression via Iterative Hessian Mixing | arXiv:2601.07545v1 Announce Type: new Abstract: We study differentially private ordinary least squares (DP-OLS) with bounded data. The dominant approach, adaptive sufficient-statistics perturbation (AdaSSP), adds an adaptively chosen perturbation to the sufficient statistics, namely, the matrix $X^{\top}X$ and the vector $X^{\top}Y$, and is known to achieve near-optimal accuracy and to have strong empirical performance. In contrast, methods that rely on Gaussian-sketching, which ensure differential privacy by pre-multiplying the data with a random Gaussian matrix, are widely used in federated and distributed regression, yet remain relatively uncommon for DP-OLS. In this work, we introduce the iterative Hessian mixing, a novel DP-OLS algorithm that relies on Gaussian sketches and is inspired by the iterative Hessian sketch algorithm. We provide utility analysis for the iterative Hessian mixing as well as a new analysis for the previous methods that rely on Gaussian sketches. Then, we show that our new approach circumvents the intrinsic limitations of the prior methods and provides non-trivial improvements over AdaSSP. We conclude by running an extensive set of experiments across standard benchmarks to demonstrate further that our approach consistently outperforms these prior baselines. | https://arxiv.org/abs/2601.07545 | Academic Papers | svg |
| 412419ecf8406f3bd2236f9d32c972f2fb4aeb8b080f79735c65f07eae52f25e | 2026-01-13T00:00:00-05:00 | Estimators for Substitution Rates in Genomes from Read Data | arXiv:2601.07546v1 Announce Type: new Abstract: We study the problem of estimating the mutation rate between two sequences from noisy sequencing reads. Existing alignment-free methods typically assume direct access to the full sequences. We extend these methods to the sequencing framework, where only noisy reads from the sequences are observed. We use a simple model in which both mutations and sequencing errors are substitutions. We propose multiple estimators, provide theoretical guarantees for one of them, and evaluate the others through simulations. | https://arxiv.org/abs/2601.07546 | Academic Papers | svg |
| e6b6ce624ad6a39446b3c83e8aac89cf9630dcbc5791b3cb9e38406d48874ac5 | 2026-01-13T00:00:00-05:00 | On the Sequence Reconstruction Problem for the Single-Deletion Two-Substitution Channel | arXiv:2601.07547v1 Announce Type: new Abstract: The Levenshtein sequence reconstruction problem studies the reconstruction of a transmitted sequence from multiple erroneous copies of it. A fundamental question in this field is to determine the minimum number of erroneous copies required to guarantee correct reconstruction of the original sequence. This problem is equivalent to determining the maximum possible intersection size of two error balls associated with the underlying channel. Existing research on the sequence reconstruction problem has largely focused on channels with a single type of error, such as insertions, deletions, or substitutions alone. However, relatively little is known for channels that involve a mixture of error types, for instance, channels allowing both deletions and substitutions. In this work, we study the sequence reconstruction problem for the single-deletion two-substitution channel, which allows one deletion and at most two substitutions applied to the transmitted sequence. Specifically, we prove that if two $q$-ary length-$n$ sequences have Hamming distance $d\geq 2$, where $q\geq 2$ is any fixed integer, then the intersection size of their error balls under the single-deletion two-substitution channel is upper bounded by $(q^2-1)n^2-(3q^2+5q-5)n+O_q(1)$, where $O_q(1)$ is a constant independent of $n$ but dependent on $q$. Moreover, we show that this upper bound is tight up to an additive constant. | https://arxiv.org/abs/2601.07547 | Academic Papers | svg |
| eddecbd81fc7d285bc74e734e9739e0232b668db6471b51001910d42dd96b9a4 | 2026-01-13T00:00:00-05:00 | Contextual Discrepancy-Aware Contrastive Learning for Robust Medical Time Series Diagnosis in Small-Sample Scenarios | arXiv:2601.07548v1 Announce Type: new Abstract: Medical time series data, such as EEG and ECG, are vital for diagnosing neurological and cardiovascular diseases. However, their precise interpretation faces significant challenges due to high annotation costs, leading to data scarcity, and the limitations of traditional contrastive learning in capturing complex temporal patterns. To address these issues, we propose CoDAC (Contextual Discrepancy-Aware Contrastive learning), a novel framework that enhances diagnostic accuracy and generalization, particularly in small-sample settings. CoDAC leverages external healthy data and introduces a Contextual Discrepancy Estimator (CDE), built upon a Transformer-based Autoencoder, to precisely quantify abnormal signals through context-aware anomaly scores. These scores dynamically inform a Dynamic Multi-views Contrastive Framework (DMCF), which adaptively weights different temporal views to focus contrastive learning on diagnostically relevant, discrepant regions. Our encoder combines dilated convolutions with multi-head attention for robust feature extraction. Comprehensive experiments on Alzheimer's Disease EEG, Parkinson's Disease EEG, and Myocardial Infarction ECG datasets demonstrate CoDAC's superior performance across all metrics, consistently outperforming state-of-the-art baselines, especially under low label availability. Ablation studies further validate the critical contributions of CDE and DMCF. CoDAC offers a robust and interpretable solution for medical time series diagnosis, effectively mitigating data scarcity challenges. | https://arxiv.org/abs/2601.07548 | Academic Papers | svg |
| 6f5bda47c79433ea7e96cf16f0dd547b3bfd8f8997a912b3a00ebcf503605e86 | 2026-01-13T00:00:00-05:00 | TFEC: Multivariate Time-Series Clustering via Temporal-Frequency Enhanced Contrastive Learning | arXiv:2601.07550v1 Announce Type: new Abstract: Multivariate Time-Series (MTS) clustering is crucial for signal processing and data analysis. Although deep learning approaches, particularly those leveraging Contrastive Learning (CL), are prominent for MTS representation, existing CL-based models face two key limitations: 1) neglecting clustering information during positive/negative sample pair construction, and 2) introducing unreasonable inductive biases, e.g., destroying time dependence and periodicity through augmentation strategies, compromising representation quality. This paper, therefore, proposes a Temporal-Frequency Enhanced Contrastive (TFEC) learning framework. To preserve temporal structure while generating low-distortion representations, a temporal-frequency Co-EnHancement (CoEH) mechanism is introduced. Accordingly, a synergistic dual-path representation and cluster distribution learning framework is designed to jointly optimize cluster structure and representation fidelity. Experiments on six real-world benchmark datasets demonstrate TFEC's superiority, achieving 4.48% average NMI gains over SOTA methods, with ablation studies validating the design. The code of the paper is available at: https://github.com/yueliangy/TFEC. | https://arxiv.org/abs/2601.07550 | Academic Papers | svg |
| 104cfd47b891e69b45cfb0f9d8c1dd5b42143cfc6541a7f959f3841ed70093b4 | 2026-01-13T00:00:00-05:00 | VirtualEnv: A Platform for Embodied AI Research | arXiv:2601.07553v1 Announce Type: new Abstract: As large language models (LLMs) continue to improve in reasoning and decision-making, there is a growing need for realistic and interactive environments where their abilities can be rigorously evaluated. We present VirtualEnv, a next-generation simulation platform built on Unreal Engine 5 that enables fine-grained benchmarking of LLMs in embodied and interactive scenarios. VirtualEnv supports rich agent-environment interactions, including object manipulation, navigation, and adaptive multi-agent collaboration, as well as game-inspired mechanics like escape rooms and procedurally generated environments. We provide a user-friendly API built on top of Unreal Engine, allowing researchers to deploy and control LLM-driven agents using natural language instructions. We integrate large-scale LLMs and vision-language models (VLMs), such as GPT-based models, to generate novel environments and structured tasks from multimodal inputs. Our experiments benchmark the performance of several popular LLMs across tasks of increasing complexity, analyzing differences in adaptability, planning, and multi-agent coordination. We also describe our methodology for procedural task generation, task validation, and real-time environment control. VirtualEnv is released as an open-source platform; with it, we aim to advance research at the intersection of AI and gaming, enable standardized evaluation of LLMs in embodied AI settings, and pave the way for future developments in immersive simulations and interactive entertainment. | https://arxiv.org/abs/2601.07553 | Academic Papers | svg |
| 4ace7b1b7a091f6d80ef09da2622592c04c74e30a8395221ed599b012034746e | 2026-01-13T00:00:00-05:00 | Backpropagation-Free Test-Time Adaptation for Lightweight EEG-Based Brain-Computer Interfaces | arXiv:2601.07556v1 Announce Type: new Abstract: Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) face significant deployment challenges due to inter-subject variability, signal non-stationarity, and computational constraints. While test-time adaptation (TTA) mitigates distribution shifts under online data streams without per-use calibration sessions, existing TTA approaches heavily rely on explicitly defined loss objectives that require backpropagation for updating model parameters, which incurs computational overhead, privacy risks, and sensitivity to noisy data streams. This paper proposes Backpropagation-Free Transformations (BFT), a TTA approach for EEG decoding that eliminates such issues. BFT applies multiple sample-wise transformations of knowledge-guided augmentations or approximate Bayesian inference to each test trial, generating multiple prediction scores for a single test sample. A learning-to-rank module enhances the weighting of these predictions, enabling robust aggregation for uncertainty suppression during inference, with theoretical justification. Extensive experiments on five EEG datasets of motor imagery classification and driver drowsiness regression tasks demonstrate the effectiveness, versatility, robustness, and efficiency of BFT. This research enables lightweight plug-and-play BCIs on resource-constrained devices, broadening the real-world deployment of decoding algorithms for EEG-based BCI. | https://arxiv.org/abs/2601.07556 | Academic Papers | svg |
| d8c88d49c0176e9254e107cacad6909502b358b37deb5e814c15510bd23ad79e | 2026-01-13T00:00:00-05:00 | FlyCo: Foundation Model-Empowered Drones for Autonomous 3D Structure Scanning in Open-World Environments | arXiv:2601.07558v1 Announce Type: new Abstract: Autonomous 3D scanning of open-world target structures via drones remains challenging despite broad applications. Existing paradigms rely on restrictive assumptions or effortful human priors, limiting practicality, efficiency, and adaptability. Recent foundation models (FMs) offer great potential to bridge this gap. This paper investigates a critical research problem: What system architecture can effectively integrate FM knowledge for this task? We answer it with FlyCo, a principled FM-empowered perception-prediction-planning loop enabling fully autonomous, prompt-driven 3D target scanning in diverse unknown open-world environments. FlyCo directly translates low-effort human prompts (text, visual annotations) into precise adaptive scanning flights via three coordinated stages: (1) perception fuses streaming sensor data with vision-language FMs for robust target grounding and tracking; (2) prediction distills FM knowledge and combines multi-modal cues to infer the partially observed target's complete geometry; (3) planning leverages predictive foresight to generate efficient and safe paths with comprehensive target coverage. Building on this, we further design key components to boost open-world target grounding efficiency and robustness, enhance prediction quality in terms of shape accuracy, zero-shot generalization, and temporal stability, and balance long-horizon flight efficiency with real-time computability and online collision avoidance. Extensive challenging real-world and simulation experiments show FlyCo delivers precise scene understanding, high efficiency, and real-time safety, outperforming existing paradigms with lower human effort and verifying the proposed architecture's practicality. Comprehensive ablations validate each component's contribution. FlyCo also serves as a flexible, extensible blueprint, readily leveraging future FM and robotics advances. Code will be released. | https://arxiv.org/abs/2601.07558 | Academic Papers | svg |
| 28d68db215bb2c8284c2572208d7e617997122711532e82210fdc4deb1af7ac2 | 2026-01-13T00:00:00-05:00 | Stable In-hand Manipulation for a Lightweight Four-motor Prosthetic Hand | arXiv:2601.07559v1 Announce Type: new Abstract: Electric prosthetic hands should be lightweight to decrease the burden on the user, shaped like human hands for cosmetic purposes, and designed with motors enclosed inside to protect them from damage and dirt. Additionally, in-hand manipulation is necessary to perform daily activities such as transitioning between different postures, particularly through rotational movements, such as reorienting a pen into a writing posture after picking it up from a desk. We previously developed PLEXUS hand (Precision-Lateral dEXteroUS manipulation hand), a lightweight (311 g) prosthetic hand driven by four motors. This prosthetic performed reorientation between precision and lateral grasps with various objects. However, its controller required predefined object widths and was limited to handling lightweight objects (of weight up to 34 g). This study addresses these limitations by employing motor current feedback. Combined with the hand's previously optimized single-axis thumb, this approach achieves more stable manipulation by estimating the object's width and adjusting the index finger position to maintain stable object holding during the reorientation. Experimental validation using primitive objects of various widths (5-30 mm) and shapes (cylinders and prisms) resulted in a 100% success rate with lightweight objects and maintained a high success rate (>=80%) even with heavy aluminum prisms (of weight up to 289 g). By contrast, the performance without index finger coordination dropped to just 40% on the heaviest 289 g prism. The hand also successfully executed several daily tasks, including closing bottle caps and orienting a pen for writing. | https://arxiv.org/abs/2601.07559 | Academic Papers | svg |
| 7214172703f8a2b0704d94d8da7ad7d6b6b58d6e942fae82c6a55b03a0fae516 | 2026-01-13T00:00:00-05:00 | The Issue with Special Issues: when Guest Editors Publish in Support of Self | arXiv:2601.07563v1 Announce Type: new Abstract: The recent exceptional growth in the number of special issues has led to the largest delegation of editorial power in the history of scientific publishing. Has this power been used responsibly? In this article we provide the first systematic analysis of a particular form of abuse of power by guest editors: endogeny, the practice of publishing articles in one's own special issue. While moderate levels of endogeny are common in special issues, excessive endogeny is a blatant case of scientific misconduct. We define special issues containing more than 33% endogeny as Published in Support of Self (PISS). We build a dataset of over 100,000 special issues published between 2015 and 2025 by five leading publishers. The large majority of guest editors engage in endogeny responsibly, if at all. Nonetheless, despite endogeny policies by publishers and indexers, PISS is comparable in magnitude to scientific fraud. All journals heavily relying on special issues host PISS, and more than 1,000 PISS special issues are published each year, hosting tens of thousands of endogenous articles. Extreme PISS abuses are rare, as the majority of PISS occurs at moderate levels of endogeny. Since the scientific literature is a common pool resource, this is not good news, as it reflects a widespread normalisation of guest editor misconduct. Fortunately, PISS can be solved by setting easily enforceable commonsense policies. We provide the data and analyses needed for indexers and academic regulators to act. | https://arxiv.org/abs/2601.07563 | Academic Papers | svg |
| ee223cf180d8b02894a369585fec191078484fb80e80566e1f2f2e90e925ac8b | 2026-01-13T00:00:00-05:00 | A Unified Framework for Emotion Recognition and Sentiment Analysis via Expert-Guided Multimodal Fusion with Large Language Models | arXiv:2601.07565v1 Announce Type: new Abstract: Multimodal emotion understanding requires effective integration of text, audio, and visual modalities for both discrete emotion recognition and continuous sentiment analysis. We present EGMF, a unified framework combining expert-guided multimodal fusion with large language models. Our approach features three specialized expert networks--a fine-grained local expert for subtle emotional nuances, a semantic correlation expert for cross-modal relationships, and a global context expert for long-range dependencies--adaptively integrated through hierarchical dynamic gating for context-aware feature selection. Enhanced multimodal representations are integrated with LLMs via pseudo token injection and prompt-based conditioning, enabling a single generative framework to handle both classification and regression through natural language generation. We employ LoRA fine-tuning for computational efficiency. Experiments on bilingual benchmarks (MELD, CHERMA, MOSEI, SIMS-V2) demonstrate consistent improvements over state-of-the-art methods, with superior cross-lingual robustness revealing universal patterns in multimodal emotional expressions across English and Chinese. We will release the source code publicly. | https://arxiv.org/abs/2601.07565 | Academic Papers | svg |
| db2f88b1339e86e616bf44e10d5cbbd00df7a1314efd3b1d17fd3a0d9385e27e | 2026-01-13T00:00:00-05:00 | Dynamic $(\Delta + 1)$ Vertex Coloring | arXiv:2601.07566v1 Announce Type: new Abstract: Several recent results from dynamic and sublinear graph coloring are surveyed. This problem is widely studied and has motivating applications like network topology control, constraint satisfaction, and real-time resource scheduling. Graph coloring algorithms are called colorers. In \S 1 are defined graph coloring, the dynamic model, and the notion of performance of graph algorithms in the dynamic model. In particular $(\Delta + 1)$-coloring, sublinear performance, and oblivious and adaptive adversaries are noted and motivated. In \S 2 the pair of approximately optimal dynamic vertex colorers given in arXiv:1708.09080 is summarized as a warmup for the $(\Delta + 1)$-colorers. In \S 3 the state of the art in dynamic $(\Delta + 1)$-coloring is presented. This section comprises a pair of papers (arXiv:1711.04355 and arXiv:1910.02063) that improve dynamic $(\Delta + 1)$-coloring from the naive algorithm with $O(\Delta)$ expected amortized update time to $O(\log \Delta)$, then to $O(1)$ with high probability. In \S 4 the results in arXiv:2411.04418, which gives a sublinear algorithm for $(\Delta + 1)$-coloring that generalizes oblivious adversaries to adaptive adversaries, are presented. | https://arxiv.org/abs/2601.07566 | Academic Papers | svg |
| 900916cc878994242298fb0148a511b0fedf21cee22f116677ea126f53b5d4de | 2026-01-13T00:00:00-05:00 | A $q$-Polymatroid Framework for Information Leakage in Secure Linear Network Coding | arXiv:2601.07567v1 Announce Type: new Abstract: We study information leakage in secure linear network coding schemes based on nested rank-metric codes. We show that the amount of information leaked to an adversary that observes a subset of network links is characterized by the conditional rank function of a representable $q$-polymatroid associated with the underlying rank-metric code pair. Building on this connection, we introduce the notions of $q$-polymatroid ports and $q$-access structures and describe their structural properties. Moreover, we extend Massey's correspondence between minimal codewords and minimal access sets to the rank-metric setting and prove a $q$-analogue of the Brickell--Davenport theorem. | https://arxiv.org/abs/2601.07567 | Academic Papers | svg |
| 728eabdbb0509743aa15c4e83571740c645835abce3ca50a6f6e404098dd2006 | 2026-01-13T00:00:00-05:00 | d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation | arXiv:2601.07568v1 Announce Type: new Abstract: Diffusion large language models (dLLMs) offer capabilities beyond those of autoregressive (AR) LLMs, such as parallel decoding and random-order generation. However, realizing these benefits in practice is non-trivial, as dLLMs inherently face an accuracy-parallelism trade-off. Despite increasing interest, existing methods typically focus on only one side of the coin, targeting either efficiency or performance. To address this limitation, we propose d3LLM (Pseudo-Distilled Diffusion Large Language Model), striking a balance between accuracy and parallelism: (i) during training, we introduce pseudo-trajectory distillation to teach the model which tokens can be decoded confidently at early steps, thereby improving parallelism; (ii) during inference, we employ entropy-based multi-block decoding with a KV-cache refresh mechanism to achieve high parallelism while maintaining accuracy. To better evaluate dLLMs, we also introduce AUP (Accuracy Under Parallelism), a new metric that jointly measures accuracy and parallelism. Experiments demonstrate that our d3LLM achieves up to 10$\times$ speedup over vanilla LLaDA/Dream and 5$\times$ speedup over AR models without much accuracy drop. Our code is available at https://github.com/hao-ai-lab/d3LLM. | https://arxiv.org/abs/2601.07568 | Academic Papers | svg |
| e0d69f633ef44a209f200593e0de2e5f20dfaa693bbe8a9b9a432531891f10fb | 2026-01-13T00:00:00-05:00 | GPU-accelerated surface-based gaze mapping for XR experiences | arXiv:2601.07571v1 Announce Type: new Abstract: Extended reality is a fast-growing domain for which there is an increasing need to analyze and understand user behavior. In particular, understanding human visual attention during immersive experiences is crucial for many applications. The visualization and analysis of visual attention are commonly done by building fixation density maps from eye-tracking data. Such visual attention mapping is well mastered for 3 degrees of freedom (3DoF) experiences (i.e., involving 360° images or videos) but much less so for 6DoF data, when the user can move freely in the 3D space. In that case, the visual attention information has to be mapped onto the 3D objects themselves. Some solutions exist for constructing such surface-based 6DoF attention maps; however, they have several drawbacks: processing time, strong dependence on mesh resolution and/or texture mapping, and/or unpractical data representation for further processing. In this context, we propose a novel GPU-based algorithm that resolves the issues above while being generated in interactive time and rendered in real-time. Experiments on a challenging scene demonstrate the accuracy and robustness of our approach. To stimulate research in this area, the source code is publicly released and integrated into PLUME for ease of use in XR experiments. | https://arxiv.org/abs/2601.07571 | Academic Papers | svg |
| 2efeaf6547441807d85874596869dce0f6f2c84c39687e7068d74efa72eadf44 | 2026-01-13T00:00:00-05:00 | A Multimodal Dataset of Student Oral Presentations with Sensors and Evaluation Data | arXiv:2601.07576v1 Announce Type: new Abstract: Oral presentation skills are a critical component of higher education, yet comprehensive datasets capturing real-world student performance across multiple modalities remain scarce. To address this gap, we present SOPHIAS (Student Oral Presentation monitoring for Holistic Insights & Analytics using Sensors), a 12-hour multimodal dataset containing recordings of 50 oral presentations (10-15-minute presentation followed by 5-15-minute Q&A) delivered by 65 undergraduate and master's students at the Universidad Autonoma de Madrid. SOPHIAS integrates eight synchronized sensor streams from high-definition webcams, ambient and webcam audio, eye-tracking glasses, smartwatch physiological sensors, and clicker, keyboard, and mouse interactions. In addition, the dataset includes slides and rubric-based evaluations from teachers, peers, and self-assessments, along with timestamped contextual annotations. The dataset captures presentations conducted in real classroom settings, preserving authentic student behaviors, interactions, and physiological responses. SOPHIAS enables the exploration of relationships between multimodal behavioral and physiological signals and presentation performance, supports the study of peer assessment, and provides a benchmark for developing automated feedback and Multimodal Learning Analytics tools. The dataset is publicly available for research through GitHub and Science Data Bank. | https://arxiv.org/abs/2601.07576 | Academic Papers | svg |
| e435018568c80d7a4aa7809d869c4d11ccbe91199b240ba734095b90e34ccef4 | 2026-01-13T00:00:00-05:00 | Beyond Entangled Planning: Task-Decoupled Planning for Long-Horizon Agents | arXiv:2601.07577v1 Announce Type: new Abstract: Recent advances in large language models (LLMs) have enabled agents to autonomously execute complex, long-horizon tasks, yet planning remains a primary bottleneck for reliable task execution. Existing methods typically fall into two paradigms: step-wise planning, which is reactive but often short-sighted; and one-shot planning, which generates a complete plan upfront yet is brittle to execution errors. Crucially, both paradigms suffer from entangled contexts, where the agent must reason over a monolithic history spanning multiple sub-tasks. This entanglement increases cognitive load and lets local errors propagate across otherwise independent decisions, making recovery computationally expensive. To address this, we propose Task-Decoupled Planning (TDP), a training-free framework that replaces entangled reasoning with task decoupling. TDP decomposes tasks into a directed acyclic graph (DAG) of sub-goals via a Supervisor. Using a Planner and Executor with scoped contexts, TDP confines reasoning and replanning to the active sub-task. This isolation prevents error propagation and corrects deviations locally without disrupting the workflow. Results on TravelPlanner, ScienceWorld, and HotpotQA show that TDP outperforms strong baselines while reducing token consumption by up to 82%, demonstrating that sub-task decoupling improves both robustness and efficiency for long-horizon agents. | https://arxiv.org/abs/2601.07577 | Academic Papers | svg |
| a041d308fd900f124cd7818813b312a945efd2d13ffab7e332ea32c8517c1a34 | 2026-01-13T00:00:00-05:00 | An adjoint method for training data-driven reduced-order models | arXiv:2601.07579v1 Announce Type: new Abstract: Reduced-order modeling lies at the interface of numerical analysis and data-driven scientific computing, providing principled ways to compress high-fidelity simulations in science and engineering. We propose a training framework that couples a continuous-time form of operator inference with the adjoint-state method to obtain robust data-driven reduced-order models. This method minimizes a trajectory-based loss between reduced-order solutions and projected snapshot data, which removes the need to estimate time derivatives from noisy measurements and provides intrinsic temporal regularization through time integration. We derive the corresponding continuous adjoint equations to compute gradients efficiently and implement a gradient based optimizer to update the reduced model parameters. Each iteration only requires one forward reduced-order solve and one adjoint solve, followed by inexpensive gradient assembly, making the method attractive for large-scale simulations. We validate the proposed method on three partial differential equations: viscous Burgers' equation, the two-dimensional Fisher-KPP equation, and an advection-diffusion equation. We perform systematic comparisons against standard operator inference under two perturbation regimes, namely reduced temporal snapshot density and additive Gaussian noise. For clean data, both approaches deliver similar accuracy, but in situations with sparse sampling and noise, the proposed adjoint-based training provides better accuracy and enhanced roll-out stability. | https://arxiv.org/abs/2601.07579 | Academic Papers | svg |
| 677b3d507f19e6b9a551f50960c9b3620f9c5d41eaa8deaa719c474e7cbbbca7 | 2026-01-13T00:00:00-05:00 | BenchSeg: A Large-Scale Dataset and Benchmark for Multi-View Food Video Segmentation | arXiv:2601.07581v1 Announce Type: new Abstract: Food image segmentation is a critical task for dietary analysis, enabling accurate estimation of food volume and nutrients. However, current methods suffer from limited multi-view data and poor generalization to new viewpoints. We introduce BenchSeg, a novel multi-view food video segmentation dataset and benchmark. BenchSeg aggregates 55 dish scenes (from Nutrition5k, Vegetables & Fruits, MetaFood3D, and FoodKit) with 25,284 meticulously annotated frames, capturing each dish under free 360° camera motion. We evaluate a diverse set of 20 state-of-the-art segmentation models (e.g., SAM-based, transformer, CNN, and large multimodal) on the existing FoodSeg103 dataset and then benchmark them (alone and combined with video-memory modules) on BenchSeg. Quantitative and qualitative results demonstrate that while standard image segmenters degrade sharply under novel viewpoints, memory-augmented methods maintain temporal consistency across frames. Our best model based on a combination of SeTR-MLA+XMem2 outperforms prior work (e.g., improving over FoodMem by ~2.63% mAP), offering new insights into food segmentation and tracking for dietary analysis. We release BenchSeg to foster future research. The project page including the dataset annotations and the food segmentation models can be found at https://amughrabi.github.io/benchseg. | https://arxiv.org/abs/2601.07581 | Academic Papers | svg |
| 7544b0eb8a5ed77a2784244e5de1b13478ea6d51331e3583904e362e3e8d7b40 | 2026-01-13T00:00:00-05:00 | ES-Mem: Event Segmentation-Based Memory for Long-Term Dialogue Agents | arXiv:2601.07582v1 Announce Type: new Abstract: Memory is critical for dialogue agents to maintain coherence and enable continuous adaptation in long-term interactions. While existing memory mechanisms offer basic storage and retrieval capabilities, they are hindered by two primary limitations: (1) rigid memory granularity often disrupts semantic integrity, resulting in fragmented and incoherent memory units; (2) prevalent flat retrieval paradigms rely solely on surface-level semantic similarity, neglecting the structural cues of discourse required to navigate and locate specific episodic contexts. To mitigate these limitations, drawing inspiration from Event Segmentation Theory, we propose ES-Mem, a framework incorporating two core components: (1) a dynamic event segmentation module that partitions long-term interactions into semantically coherent events with distinct boundaries; (2) a hierarchical memory architecture that constructs multi-layered memories and leverages boundary semantics to anchor specific episodic memory for precise context localization. Evaluations on two memory benchmarks demonstrate that ES-Mem yields consistent performance gains over baseline methods. Furthermore, the proposed event segmentation module exhibits robust applicability on dialogue segmentation datasets. | https://arxiv.org/abs/2601.07582 | Academic Papers | svg |
| 42360340c830ead3ea14b3eb44a86760a6721593a01845749062a0a291291b6d | 2026-01-13T00:00:00-05:00 | Robust Multicentre Detection and Classification of Colorectal Liver Metastases on CT: Application of Foundation Models | arXiv:2601.07585v1 Announce Type: new Abstract: Colorectal liver metastases (CRLM) are a major cause of cancer-related mortality, and reliable detection on CT remains challenging in multi-centre settings. We developed a foundation model-based AI pipeline for patient-level classification and lesion-level detection of CRLM on contrast-enhanced CT, integrating uncertainty quantification and explainability. CT data from the EuCanImage consortium (n=2437) and an external TCIA cohort (n=197) were used. Among several pretrained models, UMedPT achieved the best performance and was fine-tuned with an MLP head for classification and an FCOS-based head for lesion detection. The classification model achieved an AUC of 0.90 and a sensitivity of 0.82 on the combined test set, with a sensitivity of 0.85 on the external cohort. Excluding the most uncertain 20 percent of cases improved AUC to 0.91 and balanced accuracy to 0.86. Decision curve analysis showed clinical benefit for threshold probabilities between 0.30 and 0.40. The detection model identified 69.1 percent of lesions overall, increasing from 30 percent to 98 percent across lesion size quartiles. Grad-CAM highlighted lesion-corresponding regions in high-confidence cases. These results demonstrate that foundation model-based pipelines can support robust and interpretable CRLM detection and classification across heterogeneous CT data. | https://arxiv.org/abs/2601.07585 | Academic Papers | svg |
| 5df03cfa6038270c0cacd671feb178ba4cb688bbe3942b0aeac2ebabf9d97fed | 2026-01-13T00:00:00-05:00 | A higher order polytopal method for contact mechanics with Tresca friction | arXiv:2601.07586v1 Announce Type: new Abstract: In this work, we design and analyze a Discrete de Rham (DDR) scheme for a contact mechanics problem involving fractures along which a model of Tresca friction is considered. Our approach is based on a mixed formulation involving a displacement field and a Lagrange multiplier, representing tractions at fractures, that enforces the contact conditions. The approximation space for the displacement is made of vector values attached to each vertex, edge, face, and element, while the Lagrange multiplier space is approximated by piecewise constant vectors on each fracture face. The displacement degrees of freedom allow reconstructing piecewise quadratic approximations of this field. We prove a discrete Korn inequality that accounts for the fractures, as well as an inf-sup condition (in a non-standard $H^{-1/2}$-norm) between the discrete Lagrange multiplier space and the discrete displacement space. We provide an in-depth error analysis of the scheme and show that, contrary to usual low-order nodal-based schemes, our method is robust in the quasi-incompressible limit for the primal variable (displacement). An extensive set of numerical experiments confirms the theoretical analysis and demonstrates the practical accuracy and robustness of the scheme. | https://arxiv.org/abs/2601.07586 | Academic Papers | svg |
| f141b163dc6d630a25f442bf75c60e0c6426f8cfdf063355d55dc604a42f2ea0 | 2026-01-13T00:00:00-05:00 | GRPO with State Mutations: Improving LLM-Based Hardware Test Plan Generation | arXiv:2601.07593v1 Announce Type: new Abstract: RTL design often relies heavily on ad-hoc testbench creation early in the design cycle. While large language models (LLMs) show promise for RTL code generation, their ability to reason about hardware specifications and generate targeted test plans remains largely unexplored. We present the first systematic study of LLM reasoning capabilities for RTL verification stimuli generation, establishing a two-stage framework that decomposes test plan generation from testbench execution. Our benchmark reveals that state-of-the-art models, including DeepSeek-R1 and Claude-4.0-Sonnet, achieve only 15.7-21.7% success rates on generating stimuli that pass golden RTL designs. To improve LLM-generated stimuli, we develop a comprehensive training methodology combining supervised fine-tuning with a novel reinforcement learning approach, GRPO with State Mutation (GRPO-SMu), which enhances exploration by varying input mutations. Our approach leverages a tree-based branching mutation strategy to construct training data comprising equivalent and mutated trees, moving beyond linear mutation approaches to provide rich learning signals. Training on this curated dataset, our 7B parameter model achieves a 33.3% golden test pass rate and a 13.9% mutation detection rate, representing a 17.6% absolute improvement over baseline and outperforming much larger general-purpose models. These results demonstrate that specialized training methodologies can significantly enhance LLM reasoning capabilities for hardware verification tasks, establishing a foundation for automated sub-unit testing in semiconductor design workflows. | https://arxiv.org/abs/2601.07593 | Academic Papers | svg |
| 1f88789389c2f27761bb022e0828adf3aef92c39b069e4c3f6a6aff84024c7eb | 2026-01-13T00:00:00-05:00 | Pheromone-Focused Ant Colony Optimization algorithm for path planning | arXiv:2601.07597v1 Announce Type: new Abstract: Ant Colony Optimization (ACO) is a prominent swarm intelligence algorithm extensively applied to path planning. However, traditional ACO methods often exhibit shortcomings, such as blind search behavior and slow convergence within complex environments. To address these challenges, this paper proposes the Pheromone-Focused Ant Colony Optimization (PFACO) algorithm, which introduces three key strategies to enhance the problem-solving ability of the ant colony. First, the initial pheromone distribution is concentrated in more promising regions based on the Euclidean distances of nodes to the start and end points, balancing the trade-off between exploration and exploitation. Second, promising solutions are reinforced during colony iterations to intensify pheromone deposition along high-quality paths, accelerating convergence while maintaining solution diversity. Third, a forward-looking mechanism is implemented to penalize redundant path turns, promoting smoother and more efficient solutions. These strategies collectively produce the focused pheromones to guide the ant colony's search, which enhances the global optimization capabilities of the PFACO algorithm, significantly improving convergence speed and solution quality across diverse optimization problems. The experimental results demonstrate that PFACO consistently outperforms comparative ACO algorithms in terms of convergence speed and solution quality. | https://arxiv.org/abs/2601.07597 | Academic Papers | svg |
| 3b876c616a3fb095b1d9d0d546428594f87331d4d095d2715da506f52862f014 | 2026-01-13T00:00:00-05:00 | Diffusion in SPAD Signals | arXiv:2601.07599v1 Announce Type: new Abstract: We derive the likelihood of a raw signal in a single photon avalanche diode (SPAD), given a fixed photon flux. The raw signal comprises timing of detection events, which are nonlinearly related to the flux. Moreover, they are naturally stochastic. We then derive a score function of the signal. This is a key for solving inverse problems based on SPAD signals. We focus on deriving solutions involving a diffusion model, to express image priors. We demonstrate the effect of low or high photon counts, and the consequence of exploiting timing of detection events. | https://arxiv.org/abs/2601.07599 | Academic Papers | svg |
| 2a999690e1324efedb230357d8741bee0c3614d2801a45079cc247ac28c6c44e | 2026-01-13T00:00:00-05:00 | Performance Isolation for Inference Processes in Edge GPU Systems | arXiv:2601.07600v1 Announce Type: new Abstract: This work analyzes the main isolation mechanisms available in modern NVIDIA GPUs: MPS, MIG, and the recent Green Contexts, to ensure predictable inference time in safety-critical applications using deep learning models. The experimental methodology includes performance tests, evaluation of partitioning impact, and analysis of temporal isolation between processes, considering both the NVIDIA A100 and Jetson Orin platforms. It is observed that MIG provides a high level of isolation. At the same time, Green Contexts represent a promising alternative for edge devices by enabling fine-grained SM allocation with low overhead, albeit without memory isolation. The study also identifies current limitations and outlines potential research directions to improve temporal predictability in shared GPUs. | https://arxiv.org/abs/2601.07600 | Academic Papers | svg |
| 34ecd99d1a7017f466322b6d9a1aad7f68069f86d92a986fd81db9f791229ec1 | 2026-01-13T00:00:00-05:00 | OODEval: Evaluating Large Language Models on Object-Oriented Design | arXiv:2601.07602v1 Announce Type: new Abstract: Recent advances in large language models (LLMs) have driven extensive evaluations in software engineering. However, most prior work concentrates on code-level tasks, leaving software design capabilities underexplored. To fill this gap, we conduct a comprehensive empirical study evaluating 29 LLMs on object-oriented design (OOD) tasks. Owing to the lack of standardized benchmarks and metrics, we introduce OODEval, a manually constructed benchmark comprising 50 OOD tasks of varying difficulty, and OODEval-Human, the first human-rated OOD benchmark, which includes 940 undergraduate-submitted class diagrams evaluated by instructors. We further propose CLUE (Class Likeness Unified Evaluation), a unified metric set that assesses both global correctness and fine-grained design quality in class diagram generation. Using these benchmarks and metrics, we investigate five research questions: overall correctness, comparison with humans, model dimension analysis, task feature analysis, and bad case analysis. The results indicate that while LLMs achieve high syntactic accuracy, they exhibit substantial semantic deficiencies, particularly in method and relationship generation. Among the evaluated models, Qwen3-Coder-30B achieves the best overall performance, rivaling DeepSeek-R1 and GPT-4o, while Gemma3-4B-IT outperforms GPT-4o-Mini despite its smaller parameter scale. Although top-performing LLMs nearly match the average performance of undergraduates, they remain significantly below the level of the best human designers. Further analysis shows that parameter scale, code specialization, and instruction tuning strongly influence performance, whereas increased design complexity and lower requirement readability degrade it. Bad case analysis reveals common failure modes, including keyword misuse, missing classes or relationships, and omitted methods. | https://arxiv.org/abs/2601.07602 | Academic Papers | svg |
| de27ee0f5b976d861f6a5d12819af67e0cd25b359abbe9c0dc28ce370be19b52 | 2026-01-13T00:00:00-05:00 | UIKA: Fast Universal Head Avatar from Pose-Free Images | arXiv:2601.07603v1 Announce Type: new Abstract: We present UIKA, a feed-forward animatable Gaussian head model from an arbitrary number of unposed inputs, including a single image, multi-view captures, and smartphone-captured videos. Unlike traditional avatar methods, which require a studio-level multi-view capture system and reconstruct a human-specific model through a lengthy optimization process, we rethink the task through the lenses of model representation, network design, and data preparation. First, we introduce a UV-guided avatar modeling strategy, in which each input image is associated with a pixel-wise facial correspondence estimation. Such correspondence estimation allows us to reproject each valid pixel color from screen space to UV space, which is independent of camera pose and character expression. Furthermore, we design learnable UV tokens on which the attention mechanism can be applied at both the screen and UV levels. The learned UV tokens can be decoded into canonical Gaussian attributes using aggregated UV information from all input views. To train our large avatar model, we additionally prepare a large-scale, identity-rich synthetic training dataset. Our method significantly outperforms existing approaches in both monocular and multi-view settings. Project page: https://zijian-wu.github.io/uika-page/ | https://arxiv.org/abs/2601.07603 | Academic Papers | svg |
| 36163832bc03633e33a02241628bb415b8de52cbf38350dddfc7b4c438825c40 | 2026-01-13T00:00:00-05:00 | Proof of Time: A Benchmark for Evaluating Scientific Idea Judgments | arXiv:2601.07606v1 Announce Type: new Abstract: Large language models are increasingly being used to assess and forecast research ideas, yet we lack scalable ways to evaluate the quality of models' judgments about these scientific ideas. Towards this goal, we introduce PoT, a semi-verifiable benchmarking framework that links scientific idea judgments to downstream signals that become observable later (e.g., citations and shifts in researchers' agendas). PoT freezes a pre-cutoff snapshot of evidence in an offline sandbox and asks models to forecast post-cutoff outcomes, enabling verifiable evaluation when ground truth arrives, scalable benchmarking without exhaustive expert annotation, and analysis of human-model misalignment against signals such as peer-review awards. In addition, PoT provides a controlled testbed for agent-based research judgments that evaluate scientific ideas, comparing tool-using agents to non-agent baselines under prompt ablations and budget scaling. Across 30,000+ instances spanning four benchmark domains, we find that, compared with non-agent baselines, higher interaction budgets generally improve agent performance, while the benefit of tool use is strongly task-dependent. By combining time-partitioned, future-verifiable targets with an offline sandbox for tool use, PoT supports scalable evaluation of agents on future-facing scientific idea judgment tasks. | https://arxiv.org/abs/2601.07606 | Academic Papers | svg |
| 11bea5c9fd35e1e0136d367ca89c6a03f7bf92d7f8521c5248ad20aa15651dd7 | 2026-01-13T00:00:00-05:00 | Recursive Binary Identification with Differential Privacy and Data Tampering Attacks | arXiv:2601.07608v1 Announce Type: new Abstract: In this paper, we consider parameter estimation in a bandwidth-constrained sensor network communicating through an insecure medium. The sensor performs a local quantization, and transmits a 1-bit message to an estimation center through a wireless medium where the transmission of information is vulnerable to attackers. Both eavesdroppers and data tampering attackers are considered in our setting. A differential privacy method is used to protect the sensitive information against eavesdroppers. Then, a recursive projection algorithm is proposed such that the estimation center achieves almost sure convergence and mean-square convergence when quantized measurements, differential privacy, and data tampering attacks are considered in a uniform framework. A privacy analysis, including the convergence rate with or without privacy, is given. Further, we extend the problem to multi-agent systems. For this case, a distributed recursive projection algorithm is proposed with guaranteed almost sure and mean square convergence. A simulation example is provided to illustrate the effectiveness of the proposed algorithms. | https://arxiv.org/abs/2601.07608 | Academic Papers | svg |
| af58fdf286fe682b7334c9aceb6fd48c6d9660113a7a4596bbc3817bbc8fc76a | 2026-01-13T00:00:00-05:00 | DIAGPaper: Diagnosing Valid and Specific Weaknesses in Scientific Papers via Multi-Agent Reasoning | arXiv:2601.07611v1 Announce Type: new Abstract: Paper weakness identification using single-agent or multi-agent LLMs has attracted increasing attention, yet existing approaches exhibit key limitations. Many multi-agent systems simulate human roles at a surface level, missing the underlying criteria that lead experts to assess complementary intellectual aspects of a paper. Moreover, prior methods implicitly assume identified weaknesses are valid, ignoring reviewer bias, misunderstanding, and the critical role of author rebuttals in validating review quality. Finally, most systems output unranked weakness lists, rather than prioritizing the most consequential issues for users. In this work, we propose DIAGPaper, a novel multi-agent framework that addresses these challenges through three tightly integrated modules. The customizer module simulates human-defined review criteria and instantiates multiple reviewer agents with criterion-specific expertise. The rebuttal module introduces author agents that engage in structured debate with reviewer agents to validate and refine proposed weaknesses. The prioritizer module learns from large-scale human review practices to assess the severity of validated weaknesses and surfaces the top-K most severe ones to users. Experiments on two benchmarks, AAAR and ReviewCritique, demonstrate that DIAGPaper substantially outperforms existing methods by producing more valid and more paper-specific weaknesses, while presenting them in a user-oriented, prioritized manner. | https://arxiv.org/abs/2601.07611 | Academic Papers | svg |
| c0011b78cf670465a708882f4d3f2c67a1c8175b31da5ff43167e5193b56bbb1 | 2026-01-13T00:00:00-05:00 | GAP-Net: Calibrating User Intent via Gated Adaptive Progressive Learning for CTR Prediction | arXiv:2601.07613v1 Announce Type: new Abstract: Sequential user behavior modeling is pivotal for Click-Through Rate (CTR) prediction yet is hindered by three intrinsic bottlenecks: (1) the "Attention Sink" phenomenon, where standard Softmax compels the model to allocate probability mass to noisy behaviors; (2) the Static Query Assumption, which overlooks dynamic shifts in user intent driven by real-time contexts; and (3) Rigid View Aggregation, which fails to adaptively weight heterogeneous temporal signals according to the decision context. To bridge these gaps, we propose GAP-Net (Gated Adaptive Progressive Network), a unified framework establishing a "Triple Gating" architecture to progressively refine information from micro-level features to macro-level views. GAP-Net operates through three integrated mechanisms: (1) Adaptive Sparse-Gated Attention (ASGA) employs micro-level gating to enforce sparsity, effectively suppressing massive noise activations; (2) Gated Cascading Query Calibration (GCQC) dynamically aligns user intent by bridging real-time triggers and long-term memories via a meso-level cascading channel; and (3) Context-Gated Denoising Fusion (CGDF) performs macro-level modulation to orchestrate the aggregation of multi-view sequences. Extensive experiments on industrial datasets demonstrate that GAP-Net achieves substantial improvements over state-of-the-art baselines, exhibiting superior robustness against interaction noise and intent drift. | https://arxiv.org/abs/2601.07613 | Academic Papers | svg |
| 244a7d811baa06e7fb3d454e382ce16c7589b9af52b73103e92d96b1481677d1 | 2026-01-13T00:00:00-05:00 | Neural Architecture for Fast and Reliable Coagulation Assessment in Clinical Settings: Leveraging Thromboelastography | arXiv:2601.07618v1 Announce Type: new Abstract: In an ideal medical environment, real-time coagulation monitoring can enable early detection and prompt remediation of risks. However, traditional Thromboelastography (TEG), a widely employed diagnostic modality, can only provide such outputs after nearly 1 hour of measurement. The delay might lead to elevated mortality rates. These issues clearly point out one of the key challenges for medical AI development: making reasonable predictions based on very small data sets and accounting for variation between different patient populations, a task where conventional deep learning methods typically perform poorly. We present Physiological State Reconstruction (PSR), a new algorithm specifically designed to take advantage of dynamic changes between individuals and to maximize useful information produced by small amounts of clinical data through mapping to reliable predictions and diagnosis. We develop MDFE to facilitate integration of varied temporal signals using multi-domain learning, and jointly learn high-level temporal interactions together with attentions via HLA; furthermore, the parameterized DAM we designed maintains the stability of the computed vital signs. PSR is evaluated on 4 TEG-specialized data sets and establishes remarkable performance: predictions of $R^2 > 0.98$ for coagulation traits, roughly half the error of state-of-the-art methods, and half the inference time. Drift-aware learning suggests a new future, with potential uses well beyond thrombophilia discovery towards medical AI applications with data scarcity. | https://arxiv.org/abs/2601.07618 | Academic Papers | svg |
| 9d2cdc7d15cac9fac4aae63a385870a1daa9c8b5665216e929d68354b4dbdcc6 | 2026-01-13T00:00:00-05:00 | PARL: Position-Aware Relation Learning Network for Document Layout Analysis | arXiv:2601.07620v1 Announce Type: new Abstract: Document layout analysis aims to detect and categorize structural elements (e.g., titles, tables, figures) in scanned or digital documents. Popular methods often rely on high-quality Optical Character Recognition (OCR) to merge visual features with extracted text. This dependency introduces two major drawbacks: propagation of text recognition errors and substantial computational overhead, limiting the robustness and practical applicability of multimodal approaches. In contrast to the prevailing multimodal trend, we argue that effective layout analysis depends not on text-visual fusion, but on a deep understanding of documents' intrinsic visual structure. To this end, we propose PARL (Position-Aware Relation Learning Network), a novel OCR-free, vision-only framework that models layout through positional sensitivity and relational structure. Specifically, we first introduce a Bidirectional Spatial Position-Guided Deformable Attention module to embed explicit positional dependencies among layout elements directly into visual features. Second, we design a Graph Refinement Classifier (GRC) to refine predictions by modeling contextual relationships through a dynamically constructed layout graph. Extensive experiments show PARL achieves state-of-the-art results. It establishes a new benchmark for vision-only methods on DocLayNet and, notably, surpasses even strong multimodal models on M6Doc. Crucially, PARL (65M) is highly efficient, using roughly four times fewer parameters than large multimodal models (256M), demonstrating that sophisticated visual structure modeling can be both more efficient and robust than multimodal fusion. | https://arxiv.org/abs/2601.07620 | Academic Papers | svg |
| 5cbbc4b73dd5adfd1a33fa23d8a5e92af4694b97ad352d1e858226543882f3c9 | 2026-01-13T00:00:00-05:00 | Searching point patterns in point clouds describing local topography | arXiv:2601.07621v1 Announce Type: new Abstract: We address the problem of comparing and aligning spatial point configurations in $\mathbb{R}^3$ arising from structured geometric patterns. Each pattern is decomposed into arms along which we define a normalized finite-difference operator measuring local variations of the height component with respect to the planar geometry of the pattern. This quantity provides a parametrization-independent local descriptor that complements global similarity measures. In particular, it integrates naturally with Wasserstein-type distances for comparing point distributions and with Procrustes analysis for rigid alignment of geometric structures. | https://arxiv.org/abs/2601.07621 | Academic Papers | svg |
| a0759e956e3cd547cf83d163ac7ec8b9156fe3b7cf5cceb77b2ae7e0d571b22d | 2026-01-13T00:00:00-05:00 | Clipped Affine Policy: Low-Complexity Near-Optimal Online Power Control for Energy Harvesting Communications over Fading Channels | arXiv:2601.07622v1 Announce Type: new Abstract: This paper investigates online power control for point-to-point energy harvesting communications over wireless fading channels. A linear-policy-based approximation is derived for the relative-value function in the Bellman equation of the power control problem. This approximation leads to two fundamental power control policies: optimistic and robust clipped affine policies, both taking the form of a clipped affine function of the battery level and the reciprocal of the channel signal-to-noise ratio coefficient. They are essentially battery-limited weighted directional waterfilling policies operating between adjacent time slots. By leveraging the relative-value approximation and derived policies, a domain-knowledge-enhanced reinforcement learning (RL) algorithm is proposed for online power control. The proposed approach is further extended to scenarios with energy and/or channel lookahead. Comprehensive simulation results demonstrate that the proposed methods achieve a good balance between computational complexity and optimality. In particular, the robust clipped affine policy (combined with RL, using at most five parameters) outperforms all existing approaches across various scenarios, with less than 2% performance loss relative to the optimal policy. | https://arxiv.org/abs/2601.07622 | Academic Papers | svg |
e66e2f360f4067617da15a58f744ac1b759bc4ddf100a4f0df3683b16ee0de75
|
2026-01-13T00:00:00-05:00
|
Fifteen Years of Learning Analytics Research: Topics, Trends, and Challenges
|
arXiv:2601.07629v1 Announce Type: new Abstract: The learning analytics (LA) community has recently reached two important milestones: celebrating the 15th LAK conference and updating the 2011 definition of LA to reflect the 15 years of changes in the discipline. However, despite LA's growth, little is known about how research topics, funding, and collaboration, as well as the relationships among them, have developed within the community over time. This study addressed this gap by analyzing all 936 full and short papers published at LAK over a 15-year period using unsupervised machine learning, natural language processing, and network analytics. The analysis revealed a stable core of prolific authors alongside high turnover of newcomers, systematic links between funding sources and research directions, and six enduring topical centers that remain globally shared but vary in prominence across countries. These six topical centers, which encompass LA research, are: self-regulated learning, dashboards and theory, social learning, automated feedback, multimodal analytics, and outcome prediction. Our findings highlight key challenges for the future: widening participation, reducing dependency on a narrow set of funders, and ensuring that emerging research trajectories remain responsive to educational practice and societal needs.
|
https://arxiv.org/abs/2601.07629
|
Academic Papers
|
svg
|
e76dcc735ba9c7384077c373729132dbf6fa84f999b6720060b070aedd07631d
|
2026-01-13T00:00:00-05:00
|
Integrating Machine-Generated Short Descriptions into the Wikipedia Android App: A Pilot Deployment of Descartes
|
arXiv:2601.07631v1 Announce Type: new Abstract: Short descriptions are a key part of the Wikipedia user experience, but their coverage remains uneven across languages and topics. In previous work, we introduced Descartes, a multilingual model for generating short descriptions. In this report, we present the results of a pilot deployment of Descartes in the Wikipedia Android app, where editors were offered suggestions based on outputs from Descartes while editing short descriptions. The experiment spanned 12 languages, with over 3,900 articles and 375 editors participating. Overall, 90% of accepted Descartes descriptions were rated at least 3 out of 5 in quality, and their average ratings were comparable to human-written ones. Editors adopted machine suggestions both directly and with modifications, while the rate of reverts and reports remained low. The pilot also revealed practical considerations for deployment, including latency, language-specific gaps, and the need for safeguards around sensitive topics. These results indicate that Descartes's short descriptions can support editors in reducing content gaps, provided that technical, design, and community guardrails are in place.
|
https://arxiv.org/abs/2601.07631
|
Academic Papers
|
svg
|
241c4d1230422eda4a4831aac69812b301e3c7b9ecb13c04cbf2bc1867f3a4f9
|
2026-01-13T00:00:00-05:00
|
GeoMotionGPT: Geometry-Aligned Motion Understanding with Large Language Models
|
arXiv:2601.07632v1 Announce Type: new Abstract: Discrete motion tokenization has recently enabled Large Language Models (LLMs) to serve as versatile backbones for motion understanding and motion-language reasoning. However, existing pipelines typically decouple motion quantization from semantic embedding learning, linking them solely via token IDs. This approach fails to effectively align the intrinsic geometry of the motion space with the embedding space, thereby hindering the LLM's capacity for nuanced motion reasoning. We argue that alignment is most effective when both modalities share a unified geometric basis. Therefore, instead of forcing the LLM to reconstruct the complex geometry among motion tokens from scratch, we present a novel framework that explicitly enforces orthogonality on both the motion codebook and the LLM embedding space, ensuring that their relational structures naturally mirror each other. Specifically, we employ a decoder-only quantizer with Gumbel-Softmax for differentiable training and balanced codebook usage. To bridge the modalities, we use a sparse projection that maps motion codes into the LLM embedding space while preserving orthogonality. Finally, a two-stage orthonormal regularization schedule enforces soft constraints during tokenizer training and LLM fine-tuning to maintain geometric alignment without hindering semantic adaptation. Extensive experiments on HumanML3D demonstrate that our framework achieves a 20% performance improvement over current state-of-the-art methods, validating that a unified geometric basis effectively empowers the LLM for nuanced motion reasoning.
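The orthonormal regularization can be illustrated with a standard soft penalty; the paper's exact loss is not reproduced in the abstract, so the Frobenius-norm form below is an assumption:

```python
import torch

def orthonormality_penalty(W: torch.Tensor) -> torch.Tensor:
    """Soft orthonormality regularizer ||W W^T - I||_F^2 over the rows of
    a codebook or embedding matrix; added, with a weight, to the task loss."""
    gram = W @ W.t()
    eye = torch.eye(W.shape[0], device=W.device, dtype=W.dtype)
    return ((gram - eye) ** 2).sum()

codebook = torch.nn.Parameter(torch.randn(256, 512) / 512 ** 0.5)
loss = orthonormality_penalty(codebook)
loss.backward()
print(loss.item(), codebook.grad.norm().item())
```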
|
https://arxiv.org/abs/2601.07632
|
Academic Papers
|
svg
|
796693d9d32a4e0dcc46b74f6a8fb122efdc611460ef844c4d3d57e2de45a002
|
2026-01-13T00:00:00-05:00
|
Simple Power Analysis of Polynomial Multiplication in HQC
|
arXiv:2601.07634v1 Announce Type: new Abstract: The Hamming Quasi-Cyclic (HQC) cryptosystem was selected for standardization in the fourth round of the NIST Post-Quantum Cryptography (PQC) standardization project. The goal of the PQC project is to standardize one or more quantum-resistant public-key cryptographic algorithms. In this paper, we present a single-trace Simple Power Analysis (SPA) attack against HQC that exploits power consumption leakage that occurs during polynomial multiplication performed at the beginning of HQC decryption. Using the ChipWhisperer-Lite board, we perform and evaluate the attack, achieving a 99.69% success rate over 10,000 attack attempts. We also propose various countermeasures against the attack and evaluate their time complexity.
|
https://arxiv.org/abs/2601.07634
|
Academic Papers
|
svg
|
a5d0e28bf398c6e57128e29995a9e670b4c749890811949189607719bb48a36b
|
2026-01-13T00:00:00-05:00
|
Beyond Sharpness: A Flatness Decomposition Framework for Efficient Continual Learning
|
arXiv:2601.07636v1 Announce Type: new Abstract: Continual Learning (CL) aims to enable models to sequentially learn multiple tasks without forgetting previous knowledge. Recent studies have shown that optimizing towards flatter loss minima can improve model generalization. However, existing sharpness-aware methods for CL suffer from two key limitations: (1) they treat sharpness regularization as a unified signal without distinguishing the contributions of its components, and (2) they introduce substantial computational overhead that impedes practical deployment. To address these challenges, we propose FLAD, a novel optimization framework that decomposes sharpness-aware perturbations into gradient-aligned and stochastic-noise components, and show that retaining only the noise component promotes generalization. We further introduce a lightweight scheduling scheme that enables FLAD to maintain significant performance gains even under constrained training time. FLAD can be seamlessly integrated into various CL paradigms and consistently outperforms standard and sharpness-aware optimizers in diverse experimental settings, demonstrating its effectiveness and practicality in CL.
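One natural reading of the decomposition is a projection onto the gradient direction. A minimal sketch under that assumption (the SAM-style perturbation construction here is illustrative):

```python
import numpy as np

def noise_component(eps, g):
    """Split a perturbation eps into its gradient-aligned part and the
    residual part; the idea described above is to retain only the residual."""
    g_unit = g / (np.linalg.norm(g) + 1e-12)
    aligned = np.dot(eps, g_unit) * g_unit  # projection onto the gradient
    return eps - aligned                    # stochastic-noise component

rng = np.random.default_rng(0)
g = rng.normal(size=1000)                        # flattened loss gradient
eps = 0.05 * g / np.linalg.norm(g) + 0.01 * rng.normal(size=1000)
noise = noise_component(eps, g)
print(abs(np.dot(noise, g)))  # ~0: orthogonal to the gradient by construction
```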
|
https://arxiv.org/abs/2601.07636
|
Academic Papers
|
svg
|
34ebc9d9c966f11e0470e10be698dd4f2e1ffd46cb50529c4aaa2ca39a63b893
|
2026-01-13T00:00:00-05:00
|
SALT-KG: A Benchmark for Semantics-Aware Learning on Enterprise Tables
|
arXiv:2601.07638v1 Announce Type: new Abstract: Building upon the SALT benchmark for relational prediction (Klein et al., 2024), we introduce SALT-KG, a benchmark for semantics-aware learning on enterprise tables. SALT-KG extends SALT by linking its multi-table transactional data with structured Operational Business Knowledge represented in a Metadata Knowledge Graph (OBKG) that captures field-level descriptions, relational dependencies, and business object types. This extension enables evaluation of models that jointly reason over tabular evidence and contextual semantics, an increasingly critical capability for foundation models on structured data. Empirical analysis reveals that while metadata-derived features yield modest improvements in classical prediction metrics, they consistently highlight gaps in the ability of models to leverage semantics in relational context. By reframing tabular prediction as semantics-conditioned reasoning, SALT-KG establishes a benchmark to advance tabular foundation models grounded in declarative knowledge, providing the first empirical step toward semantically linked tables in structured data at enterprise scale.
|
https://arxiv.org/abs/2601.07638
|
Academic Papers
|
svg
|
1b1b45cbac5d0ffc7eadf017719d931a8326c2b5effbcb6400aa74068a1e8587
|
2026-01-13T00:00:00-05:00
|
Beyond Static Tools: Test-Time Tool Evolution for Scientific Reasoning
|
arXiv:2601.07641v1 Announce Type: new Abstract: The central challenge of AI for Science is not reasoning alone, but the ability to create computational methods in an open-ended scientific world. Existing LLM-based agents rely on static, pre-defined tool libraries, a paradigm that fundamentally fails in scientific domains where tools are sparse, heterogeneous, and intrinsically incomplete. In this paper, we propose Test-Time Tool Evolution (TTE), a new paradigm that enables agents to synthesize, verify, and evolve executable tools during inference. By transforming tools from fixed resources into problem-driven artifacts, TTE overcomes the rigidity and long-tail limitations of static tool libraries. To facilitate rigorous evaluation, we introduce SciEvo, a benchmark comprising 1,590 scientific reasoning tasks supported by 925 automatically evolved tools. Extensive experiments show that TTE achieves state-of-the-art performance in both accuracy and tool efficiency, while enabling effective cross-domain adaptation of computational tools. The code and benchmark have been released at https://github.com/lujiaxuan0520/Test-Time-Tool-Evol.
|
https://arxiv.org/abs/2601.07641
|
Academic Papers
|
svg
|
71c6c8814a711340d43cb42421db3778b50ed240f443e3b7a7be299df0dbd338
|
2026-01-13T00:00:00-05:00
|
Hagenberg Risk Management Process (Part 1): Multidimensional Polar Heatmaps for Context-Sensitive Risk Analysis
|
arXiv:2601.07644v1 Announce Type: new Abstract: Traditional two-dimensional risk matrices (heatmaps) are widely used to model and visualize likelihood and impact relationships, but they face fundamental methodological limitations when applied to complex infrastructures. In particular, regulatory frameworks such as NIS2 and DORA call for more context-sensitive and system-oriented risk analysis. We argue that incorporating contextual dimensions into heatmaps enhances their analytical value. As a first step towards our Hagenberg Risk Management Process for complex infrastructures and systems, this paper introduces a multidimensional (ND) polar heatmap as a formal model that explicitly integrates additional context dimensions and subsumes classical two-dimensional models as a special case.
|
https://arxiv.org/abs/2601.07644
|
Academic Papers
|
svg
|
bb40ac635d1329f467991b4a34a215e0c52eff69bfa027a6df34d4d4dbc5d53a
|
2026-01-13T00:00:00-05:00
|
PlaM: Training-Free Plateau-Guided Model Merging for Better Visual Grounding in MLLMs
|
arXiv:2601.07645v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) rely on strong linguistic reasoning inherited from their base language models. However, multimodal instruction fine-tuning paradoxically degrades this textual reasoning capability, undermining multimodal performance. To address this issue, we propose a training-free framework to mitigate this degradation. Through layer-wise vision token masking, we reveal a common three-stage pattern in multimodal large language models: early-modal separation, mid-modal alignment, and late-modal degradation. By analyzing the behavior of MLLMs at different stages, we propose a plateau-guided model merging method that selectively injects base language model parameters into MLLMs. Experimental results based on five MLLMs on nine benchmarks demonstrate the effectiveness of our method. Attention-based analysis further reveals that merging shifts attention from diffuse, scattered patterns to focused localization on task-relevant visual regions. Our repository is at https://github.com/wzj1718/PlaM.
|
https://arxiv.org/abs/2601.07645
|
Academic Papers
|
svg
|
980ac3d974f892f67fe7230eeb70955b47c54963ca8066569e87f134e909471c
|
2026-01-13T00:00:00-05:00
|
Studying the Role of Synthetic Data for Machine Learning-based Wireless Networks Traffic Forecasting
|
arXiv:2601.07646v1 Announce Type: new Abstract: Synthetic data generation is an appealing tool for augmenting and enriching datasets, playing a crucial role in advancing artificial intelligence (AI) and machine learning (ML). Not only does synthetic data help build robust AI/ML datasets cost-effectively, but it also offers privacy-friendly solutions and bypasses the complexities of storing large data volumes. This paper proposes a novel method to generate synthetic data, based on first-order auto-regressive noise statistics, for large-scale Wi-Fi deployments. The approach operates with minimal real data requirements while producing statistically rich traffic patterns that effectively mimic real Access Point (AP) behavior. Experimental results show that ML models trained on synthetic data achieve Mean Absolute Error (MAE) values within 10 to 15 percent of those obtained using real data when trained on the same APs, while requiring significantly less training data. Moreover, when generalization is required, synthetic-data-trained models improve prediction accuracy by up to 50 percent compared to real-data-trained baselines, thanks to the enhanced variability and diversity of the generated traces. Overall, the proposed method bridges the gap between synthetic data generation and practical Wi-Fi traffic forecasting, providing a scalable, efficient, and real-time solution for modern wireless networks.
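The generator's core idea, first-order auto-regressive (AR(1)) noise, fits in a few lines. A sketch under assumed parameters (the mean load, persistence phi, and noise scale are illustrative, not fitted to any deployment):

```python
import numpy as np

def synth_ap_traffic(n_steps, mean, phi, sigma, rng):
    """Synthetic AP traffic trace with AR(1) noise around a mean load:
    x_t = mean + phi * (x_{t-1} - mean) + eps_t, eps_t ~ N(0, sigma^2)."""
    x = np.empty(n_steps)
    x[0] = mean
    for t in range(1, n_steps):
        x[t] = mean + phi * (x[t - 1] - mean) + rng.normal(0.0, sigma)
    return np.clip(x, 0.0, None)  # traffic volumes are non-negative

rng = np.random.default_rng(7)
trace = synth_ap_traffic(24 * 7, mean=50.0, phi=0.9, sigma=8.0, rng=rng)
print(trace[:5].round(1))  # first five hours of a one-week hourly trace
```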
|
https://arxiv.org/abs/2601.07646
|
Academic Papers
|
svg
|
f5275ebeb99860ecead4d4a138ab6ac167f89bd7dd2652c018ab76ae854a8935
|
2026-01-13T00:00:00-05:00
|
Order in the Evaluation Court: A Critical Analysis of NLG Evaluation Trends
|
arXiv:2601.07648v1 Announce Type: new Abstract: Despite advances in Natural Language Generation (NLG), evaluation remains challenging. Although various new metrics and LLM-as-a-judge (LaaJ) methods are proposed, human judgment persists as the gold standard. To systematically review how NLG evaluation has evolved, we employ an automatic information extraction scheme to gather key information from NLG papers, focusing on different evaluation methods (metrics, LaaJ and human evaluation). With extracted metadata from 14,171 papers across four major conferences (ACL, EMNLP, NAACL, and INLG) over the past six years, we reveal several critical findings: (1) Task Divergence: While Dialogue Generation demonstrates a rapid shift toward LaaJ (>40% in 2025), Machine Translation remains locked into n-gram metrics, and Question Answering exhibits a substantial decline in the proportion of studies conducting human evaluation. (2) Metric Inertia: Despite the development of semantic metrics, general-purpose metrics (e.g., BLEU, ROUGE) continue to be widely used across tasks without empirical justification, often lacking the discriminative power to distinguish between specific quality criteria. (3) Human-LaaJ Divergence: Our association analysis challenges the assumption that LLMs act as mere proxies for humans; LaaJ and human evaluations prioritize very different signals, and explicit validation is scarce (<8% of papers comparing the two), with only moderate to low correlation. Based on these observations, we derive practical recommendations to improve the rigor of future NLG evaluation.
|
https://arxiv.org/abs/2601.07648
|
Academic Papers
|
svg
|
de34b5fb4eb59620c49151cdaca6cef64805a93793fde6d9573522c8e4171f4a
|
2026-01-13T00:00:00-05:00
|
Active Evaluation of General Agents: Problem Definition and Comparison of Baseline Algorithms
|
arXiv:2601.07651v1 Announce Type: new Abstract: As intelligent agents become more generally-capable, i.e. able to master a wide variety of tasks, the complexity and cost of properly evaluating them rises significantly. Tasks that assess specific capabilities of the agents can be correlated and stochastic, requiring many samples for accurate comparisons, leading to added costs. In this paper, we propose a formal definition and a conceptual framework for active evaluation of agents across multiple tasks, which assesses the performance of ranking algorithms as a function of the number of evaluation data samples. Rather than curating, filtering, or compressing existing data sets as a preprocessing step, we propose an online framing: on every iteration, the ranking algorithm chooses the task and agents to sample scores from. Then, evaluation algorithms report a ranking of agents on each iteration and their performance is assessed with respect to the ground truth ranking over time. Several baselines are compared under different experimental contexts, with synthetically generated data and simulated online access to real evaluation data from Atari game-playing agents. We find that the classical Elo rating system -- while it suffers from well-known failure modes, in theory -- is a consistently reliable choice for efficient reduction of ranking error in practice. A recently-proposed method, Soft Condorcet Optimization, shows comparable performance to Elo on synthetic data and significantly outperforms Elo on real Atari agent evaluation. When task variation from the ground truth is high, selecting tasks based on proportional representation leads to a higher rate of ranking error reduction.
|
https://arxiv.org/abs/2601.07651
|
Academic Papers
|
svg
|
671e52f37045cf09f2cdf13f3aa221a1242a1a4a6e5078e1b5992a0281c751f1
|
2026-01-13T00:00:00-05:00
|
Towards Automating Blockchain Consensus Verification with IsabeLLM
|
arXiv:2601.07654v1 Announce Type: new Abstract: Consensus protocols are crucial for a blockchain system as they are what allow agreement between the system's nodes in a potentially adversarial environment. For this reason, it is paramount to ensure their correct design and implementation to prevent such adversaries from carrying out malicious behaviour. Formal verification allows us to ensure the correctness of such protocols, but requires high levels of effort and expertise to carry out and thus is often omitted in the development process. In this paper, we present IsabeLLM, a tool that integrates the proof assistant Isabelle with a Large Language Model to assist and automate proofs. We demonstrate the effectiveness of IsabeLLM by using it to develop a novel model of Bitcoin's Proof of Work consensus protocol and verify its correctness. We use the DeepSeek R1 API for this demonstration and found that we were able to generate correct proofs for each of the non-trivial lemmas present in the verification.
|
https://arxiv.org/abs/2601.07654
|
Academic Papers
|
svg
|
64cea6796f59547cfc4cfdd71b1ac8833ce391f6cebc79a695617a7e33762f12
|
2026-01-13T00:00:00-05:00
|
StdGEN++: A Comprehensive System for Semantic-Decomposed 3D Character Generation
|
arXiv:2601.07660v1 Announce Type: new Abstract: We present StdGEN++, a novel and comprehensive system for generating high-fidelity, semantically decomposed 3D characters from diverse inputs. Existing 3D generative methods often produce monolithic meshes that lack the structural flexibility required by industrial pipelines in gaming and animation. Addressing this gap, StdGEN++ is built upon a Dual-branch Semantic-aware Large Reconstruction Model (Dual-Branch S-LRM), which jointly reconstructs geometry, color, and per-component semantics in a feed-forward manner. To achieve production-level fidelity, we introduce a novel semantic surface extraction formalism compatible with hybrid implicit fields. This mechanism is accelerated by a coarse-to-fine proposal scheme, which significantly reduces memory footprint and enables high-resolution mesh generation. Furthermore, we propose a video-diffusion-based texture decomposition module that disentangles appearance into editable layers (e.g., separated iris and skin), resolving semantic confusion in facial regions. Experiments demonstrate that StdGEN++ achieves state-of-the-art performance, significantly outperforming existing methods in geometric accuracy and semantic disentanglement. Crucially, the resulting structural independence unlocks advanced downstream capabilities, including non-destructive editing, physics-compliant animation, and gaze tracking, making it a robust solution for automated character asset production.
|
https://arxiv.org/abs/2601.07660
|
Academic Papers
|
svg
|
3931f495e76908e0ea036e76066424967193076d622fc0534c5da8439f1cff54
|
2026-01-13T00:00:00-05:00
|
Reasoning Models Will Blatantly Lie About Their Reasoning
|
arXiv:2601.07663v1 Announce Type: new Abstract: It has been shown that Large Reasoning Models (LRMs) may not *say what they think*: they do not always volunteer information about how certain parts of the input influence their reasoning. But it is one thing for a model to *omit* such information and another, worse thing to *lie* about it. Here, we extend the work of Chen et al. (2025) to show that LRMs will do just this: they will flatly deny relying on hints provided in the prompt in answering multiple choice questions -- even when directly asked to reflect on unusual (i.e. hinted) prompt content, even when allowed to use hints, and even though experiments *show* them to be using the hints. Our results thus have discouraging implications for CoT monitoring and interpretability.
|
https://arxiv.org/abs/2601.07663
|
Academic Papers
|
svg
|
71bebee7c58bf28e809a3f7e405ef71ef1e2270d96e4c5052be18da7bf9c0a02
|
2026-01-13T00:00:00-05:00
|
Learning to accelerate Krasnosel'skii-Mann fixed-point iterations with guarantees
|
arXiv:2601.07665v1 Announce Type: new Abstract: We introduce a principled learning to optimize (L2O) framework for solving fixed-point problems involving general nonexpansive mappings. Our idea is to deliberately inject summable perturbations into a standard Krasnosel'skii-Mann iteration to improve its average-case performance over a specific distribution of problems while retaining its convergence guarantees. Under a metric sub-regularity assumption, we prove that the proposed parametrization includes only iterations that locally achieve linear convergence (up to a vanishing bias term) and that it encompasses all iterations that do so at a sufficiently fast rate. We then demonstrate how our framework can be used to augment several widely-used operator splitting methods to accelerate the solution of structured monotone inclusion problems, and validate our approach on a best approximation problem using an L2O-augmented Douglas-Rachford splitting algorithm.
|
https://arxiv.org/abs/2601.07665
|
Academic Papers
|
svg
|
eb14693d8ea352ff7044061ad71881ab7d61124c31146cd29080069e18469be5
|
2026-01-13T00:00:00-05:00
|
Variational Contrastive Learning for Skeleton-based Action Recognition
|
arXiv:2601.07666v1 Announce Type: new Abstract: In recent years, self-supervised representation learning for skeleton-based action recognition has advanced with the development of contrastive learning methods. However, most contrastive paradigms are inherently discriminative and often struggle to capture the variability and uncertainty intrinsic to human motion. To address this issue, we propose a variational contrastive learning framework that integrates probabilistic latent modeling with contrastive self-supervised learning. This formulation enables the learning of structured and semantically meaningful representations that generalize across different datasets and supervision levels. Extensive experiments on three widely used skeleton-based action recognition benchmarks show that our proposed method consistently outperforms existing approaches, particularly in low-label regimes. Moreover, qualitative analyses show that, compared to other methods, the features provided by our method are more relevant to the motion and sample characteristics, with greater focus on important skeleton joints.
|
https://arxiv.org/abs/2601.07666
|
Academic Papers
|
svg
|
fd477960954d643adaf373c128c46615067d68680910f83aa59c6583f792d6cd
|
2026-01-13T00:00:00-05:00
|
Adaptive Layer Selection for Layer-Wise Token Pruning in LLM Inference
|
arXiv:2601.07667v1 Announce Type: new Abstract: Due to the prevalence of large language models (LLMs), key-value (KV) cache reduction for LLM inference has received remarkable attention. Among numerous works that have been proposed in recent years, layer-wise token pruning approaches, which select a subset of tokens at particular layers to retain in KV cache and prune others, are one of the most popular schemes. They primarily adopt a set of pre-defined layers, at which tokens are selected. Such a design is inflexible in the sense that the accuracy significantly varies across tasks and deteriorates in harder tasks such as KV retrieval. In this paper, we propose ASL, a training-free method that adaptively chooses the selection layer for KV cache reduction, exploiting the variance of token ranks ordered by attention score. The proposed method balances the performance across different tasks while meeting the user-specified KV budget requirement. ASL operates during the prefilling stage and can be jointly used with existing KV cache reduction methods such as SnapKV to optimize the decoding stage. In evaluations on the InfiniteBench, RULER, and NIAH benchmarks, we show that equipped with one-shot token selection, where tokens are selected at a layer and propagated to deeper layers, ASL outperforms state-of-the-art layer-wise token selection methods in accuracy while maintaining decoding speed and KV cache reduction.
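The abstract leaves the exact selection statistic to the paper, but one plausible reading, ranking tokens by attention score per layer and picking the layer at which those ranks stabilize, can be sketched as follows (the stability threshold, window, and toy data are all assumptions):

```python
import numpy as np

def select_layer(attn_scores, window=2, threshold=1.0):
    """attn_scores: (n_layers, n_tokens) aggregated attention per token.
    Return the earliest layer where token ranks vary little over the next
    `window` layers; fall back to the last layer otherwise."""
    ranks = np.argsort(np.argsort(-attn_scores, axis=1), axis=1)  # 0 = top token
    for layer in range(attn_scores.shape[0] - window):
        if ranks[layer:layer + window + 1].var(axis=0).mean() < threshold:
            return layer
    return attn_scores.shape[0] - 1

rng = np.random.default_rng(3)
depth = np.arange(32)[:, None]
base = rng.normal(size=(1, 16))
toy = base + rng.normal(size=(32, 16)) / (1 + depth)  # score jitter shrinks with depth
print("selection layer:", select_layer(toy))
```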
|
https://arxiv.org/abs/2601.07667
|
Academic Papers
|
svg
|
bfb9267cc34884e50a2e3bd8e6272f1e08b9710d755e580cae58cfede0aefefc
|
2026-01-13T00:00:00-05:00
|
Simplicial Belief
|
arXiv:2601.07669v1 Announce Type: new Abstract: Recently, much work has been carried out to study simplicial interpretations of modal logic. While notions of (distributed) knowledge have been well investigated in this context, it has been open how to model belief in simplicial models. We introduce polychromatic simplicial complexes, which naturally impose a plausibility relation on states. From this, we can define various notions of belief.
|
https://arxiv.org/abs/2601.07669
|
Academic Papers
|
svg
|
29bfeabd8b09901988368f00d48139bec8ecf723b1e02c5afba53b39211688ca
|
2026-01-13T00:00:00-05:00
|
Advancing Multinational License Plate Recognition Through Synthetic and Real Data Fusion: A Comprehensive Evaluation
|
arXiv:2601.07671v1 Announce Type: new Abstract: Automatic License Plate Recognition is a frequent research topic due to its wide-ranging practical applications. While recent studies use synthetic images to improve License Plate Recognition (LPR) results, there remain several limitations in these efforts. This work addresses these constraints by comprehensively exploring the integration of real and synthetic data to enhance LPR performance. We subject 16 Optical Character Recognition (OCR) models to a benchmarking process involving 12 public datasets acquired from various regions. Several key findings emerge from our investigation. Primarily, the massive incorporation of synthetic data substantially boosts model performance in both intra- and cross-dataset scenarios. We examine three distinct methodologies for generating synthetic data: template-based generation, character permutation, and utilizing a Generative Adversarial Network (GAN) model, each contributing significantly to performance enhancement. The combined use of these methodologies demonstrates a notable synergistic effect, leading to end-to-end results that surpass those reached by state-of-the-art methods and established commercial systems. Our experiments also underscore the efficacy of synthetic data in mitigating challenges posed by limited training data, enabling remarkable results to be achieved even with small fractions of the original training data. Finally, we investigate the trade-off between accuracy and speed among different models, identifying those that strike the optimal balance in both intra-dataset and cross-dataset settings.
|
https://arxiv.org/abs/2601.07671
|
Academic Papers
|
svg
|
fe14e1593db28dc34e4612a910d73402ac083afc106db72c4d1227473f975a2d
|
2026-01-13T00:00:00-05:00
|
On the complexity of the Maker-Breaker happy vertex game
|
arXiv:2601.07673v1 Announce Type: new Abstract: Given a c-colored graph G, a vertex of G is happy if it has the same color as all its neighbors. The notion of happy vertices was introduced by Zhang and Li to compute the homophily of a graph. Eto et al. introduced the Maker-Maker version of the Happy vertex game, where two players compete to claim more happy vertices than their opponent. We introduce here the Maker-Breaker happy vertex game: two players, Maker and Breaker, alternately color the vertices of a graph with their respective colors. Maker aims to maximize the number of happy vertices at the end, while Breaker aims to prevent her. This game is also a scoring version of the Maker-Breaker Domination game introduced by Duchene et al., as a happy vertex corresponds exactly to a vertex that is not dominated in the domination game. Therefore, this game is a very natural game on graphs and can be studied within the scope of scoring positional games. We initiate here the complexity study of this game by proving that computing its score is PSPACE-complete on trees, NP-hard on caterpillars, and polynomial on subdivided stars. Finally, we provide the exact value of the score on graphs of maximum degree 2, and we provide an FPT-algorithm to compute the score on graphs of bounded neighborhood diversity. An important contribution of the paper is that, to achieve our hardness results, we introduce a new type of incidence graph called the literal-clause incidence graph for 2-SAT formulas. We prove that QMAX 2-SAT remains PSPACE-complete even if this graph is acyclic, and that MAX 2-SAT remains NP-complete, even if this graph is acyclic and has maximum degree 2, i.e. is a union of paths. We demonstrate the importance of this contribution by proving that Incidence, the scoring positional game played on a graph, is also PSPACE-complete when restricted to forests.
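The central object is easy to compute once a coloring is fixed. A minimal sketch of the happiness check (the graph and coloring are illustrative):

```python
def happy_vertices(adj, color):
    """A vertex is happy iff it has the same color as all its neighbors."""
    return [v for v in adj if all(color[u] == color[v] for u in adj[v])]

# Path a-b-c with two colors: only the endpoint c is happy.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
color = {"a": 1, "b": 2, "c": 2}
print(happy_vertices(adj, color))  # ['c']
```

In the Maker-Breaker version, the score is the size of this set under the joint coloring reached at the end of the game.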
|
https://arxiv.org/abs/2601.07673
|
Academic Papers
|
svg
|
ec38054a4186a68db88501ee4b71b171a0b531d79405e81d3f04f2c0423e9e96
|
2026-01-13T00:00:00-05:00
|
Self-Creating Random Walks for Decentralized Learning under Pac-Man Attacks
|
arXiv:2601.07674v1 Announce Type: new Abstract: Random walk (RW)-based algorithms have long been popular in distributed systems due to low overheads and scalability, with recent growing applications in decentralized learning. However, their reliance on local interactions makes them inherently vulnerable to malicious behavior. In this work, we investigate an adversarial threat that we term the ``Pac-Man'' attack, in which a malicious node probabilistically terminates any RW that visits it. This stealthy behavior gradually eliminates active RWs from the network, effectively halting the learning process without triggering failure alarms. To counter this threat, we propose the CREATE-IF-LATE (CIL) algorithm, which is a fully decentralized, resilient mechanism that enables self-creating RWs and prevents RW extinction in the presence of Pac-Man. Our theoretical analysis shows that the CIL algorithm guarantees several desirable properties, such as (i) non-extinction of the RW population, (ii) almost sure boundedness of the RW population, and (iii) convergence of RW-based stochastic gradient descent even in the presence of Pac-Man with a quantifiable deviation from the true optimum. Moreover, the learning process experiences at most a linear time delay due to Pac-Man interruptions and RW regeneration. Our extensive empirical results on both synthetic and public benchmark datasets validate our theoretical findings.
|
https://arxiv.org/abs/2601.07674
|
Academic Papers
|
svg
|
e5e05bc7ee68a8d4744a8253e6ecf52c18c420aa93d31fbbf65dc0bed601edaa
|
2026-01-13T00:00:00-05:00
|
Tab-TRM: Tiny Recursive Model for Insurance Pricing on Tabular Data
|
arXiv:2601.07675v1 Announce Type: new Abstract: We introduce Tab-TRM (Tabular-Tiny Recursive Model), a network architecture that adapts the recursive latent reasoning paradigm of Tiny Recursive Models (TRMs) to insurance modeling. Drawing inspiration from both the Hierarchical Reasoning Model (HRM) and its simplified successor TRM, the Tab-TRM model makes predictions by reasoning over the input features. It maintains two learnable latent tokens - an answer token and a reasoning state - that are iteratively refined by a compact, parameter-efficient recursive network. The recursive processing layer repeatedly updates the reasoning state given the full token sequence and then refines the answer token, in close analogy with iterative insurance pricing schemes. Conceptually, Tab-TRM bridges classical actuarial workflows - iterative generalized linear model fitting and minimum-bias calibration - on the one hand, and modern machine learning, in the form of Gradient Boosting Machines, on the other.
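The recursion can be made concrete with toy update maps (the weights, widths, and loop counts below are placeholders; the actual recursive network is a trained module):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W_z = rng.normal(scale=0.1, size=(d, 3 * d))  # toy reasoning-state update
W_y = rng.normal(scale=0.1, size=(d, 2 * d))  # toy answer-token refinement

def tab_trm_step(x, n_outer=3, n_inner=4):
    """Sketch of the recursive refinement: the reasoning state z is updated
    several times from (x, y, z), then the answer token y is refined from
    (z, y); the outer loop repeats both phases."""
    y = np.zeros(d)
    z = np.zeros(d)
    for _ in range(n_outer):
        for _ in range(n_inner):
            z = np.tanh(W_z @ np.concatenate([x, y, z]))
        y = np.tanh(W_y @ np.concatenate([z, y]))
    return y

x = rng.normal(size=d)  # stands in for embedded tabular features
print(tab_trm_step(x)[:4].round(3))
```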
|
https://arxiv.org/abs/2601.07675
|
Academic Papers
|
svg
|
7634856aa11e701e14b3392200f8c2386ae358d00969d0eea1e857d2def061c5
|
2026-01-13T00:00:00-05:00
|
New $X$-Secure $T$-Private Information Retrieval Schemes via Rational Curves and Hermitian Curves
|
arXiv:2601.07676v1 Announce Type: new Abstract: $X$-secure and $T$-private information retrieval (XSTPIR) is a variant of private information retrieval where data security is guaranteed against collusion among up to $X$ servers and the user's retrieval privacy is guaranteed against collusion among up to $T$ servers. Recently, researchers have constructed XSTPIR schemes through the theory of algebraic geometry codes and algebraic curves, with the aim of obtaining XSTPIR schemes that have higher maximum PIR rates for fixed field size and $X,T$ (the number of servers $N$ is not restricted). The mainstream approach is to employ curves of higher genus that have more rational points, evolving from rational curves to elliptic curves to hyperelliptic curves and, most recently, to Hermitian curves. In this paper, we propose a different perspective: with the shared goal of constructing XSTPIR schemes with higher maximum PIR rates, we move beyond the mainstream approach of seeking curves with higher genus and more rational points. Instead, we aim to achieve this goal by enhancing the utilization efficiency of rational points on curves that have already been considered in previous work. By introducing a family of bases for the polynomial space $\text{span}_{\mathbb{F}_q}\{1,x,\dots,x^{k-1}\}$ as an alternative to the Lagrange interpolation basis, we develop two new families of XSTPIR schemes based on rational curves and Hermitian curves, respectively. Parameter comparisons demonstrate that our schemes achieve superior performance. Specifically, our Hermitian-curve-based XSTPIR scheme provides the largest known maximum PIR rates when the field size $q^2\geq 14^2$ and $X+T\geq 4q$. Moreover, for any field size $q^2\geq 28^2$ and $X+T\geq 4$, our two XSTPIR schemes collectively provide the largest known maximum PIR rates.
|
https://arxiv.org/abs/2601.07676
|
Academic Papers
|
svg
|
017bb8c18b323bc42224757ae7de01d514b95afbbb21ee21f040cbe811dd4525
|
2026-01-13T00:00:00-05:00
|
AptaFind: A lightweight local interface for automated aptamer curation from scientific literature
|
arXiv:2601.07684v1 Announce Type: new Abstract: Aptamer researchers face a literature landscape scattered across publications, supplements, and databases, with each search consuming hours that could be spent at the bench. AptaFind transforms this navigation problem through a three-tier intelligence architecture that recognizes research mining is a spectrum, not a binary success or failure. The system delivers direct sequence extraction when possible, curated research leads when extraction fails, and exhaustive literature discovery for additional confidence. By combining local language models for semantic understanding with deterministic algorithms for reliability, AptaFind operates without cloud dependencies or subscription barriers. Validation across 300 University of Texas Aptamer Database targets demonstrates some literature found for 84% of targets, curated research leads for 84%, and a direct sequence extraction for 79%, at a laptop-compute rate of over 900 targets an hour. The platform proves that even when direct sequence extraction fails, automation can still deliver the actionable intelligence researchers need by rapidly narrowing the search to high quality references.
|
https://arxiv.org/abs/2601.07684
|
Academic Papers
|
svg
|
80b55c72a58bb8c8dd3f9b44528a5f48fc0f7826ad41204f396b3dffa6887858
|
2026-01-13T00:00:00-05:00
|
Predictive Analytics for Dementia: Machine Learning on Healthcare Data
|
arXiv:2601.07685v1 Announce Type: new Abstract: Dementia is a complex syndrome impacting cognitive and emotional functions, with Alzheimer's disease being the most common form. This study focuses on enhancing dementia prediction using machine learning (ML) techniques on patient health data. Supervised learning algorithms are applied in this study, including K-Nearest Neighbors (KNN), Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), and Gaussian Process Classifiers. To address class imbalance and improve model performance, techniques such as Synthetic Minority Over-sampling Technique (SMOTE) and Term Frequency-Inverse Document Frequency (TF-IDF) vectorization were employed. Among the models, LDA achieved the highest testing accuracy of 98%. This study highlights the importance of model interpretability and the correlation of dementia with features such as the presence of the APOE-epsilon4 allele and chronic conditions like diabetes. This research advocates for future ML innovations, particularly in integrating explainable AI approaches, to further improve predictive capabilities in dementia care.
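The imbalance-handling pipeline maps directly onto standard libraries. A self-contained sketch on synthetic data (the study itself uses patient health records, not this stand-in):

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Imbalanced stand-in for the dementia table (90/10 class split).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample minority cases on the training split only, then fit LDA.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = LinearDiscriminantAnalysis().fit(X_res, y_res)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

Applying SMOTE before the split would leak synthetic neighbors of test points into training, so the resampling is confined to the training fold.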
|
https://arxiv.org/abs/2601.07685
|
Academic Papers
|
svg
|
6c713f6d8f652f8afa78cdf2d03d752b21d43261afcec3137e378198a55cb75e
|
2026-01-13T00:00:00-05:00
|
On Angels and Demons: Strategic (De)Construction of Dynamic Models
|
arXiv:2601.07690v1 Announce Type: new Abstract: In recent years, there has been growing interest in logics that formalise strategic reasoning about agents capable of modifying the structure of a given model. This line of research has been motivated by applications where a modelled system evolves over time, such as communication networks, security protocols, and multi-agent planning. In this paper, we introduce three logics for reasoning about strategies that modify the topology of weighted graphs. In Strategic Deconstruction Logic, a destructive agent (the demon) removes edges up to a certain cost. In Strategic Construction Logic, a constructive agent (the angel) adds edges within a cost bound. Finally, Strategic Update Logic combines both agents, who may cooperate or compete. We study the expressive power of these logics and the complexity of their model checking problems.
|
https://arxiv.org/abs/2601.07690
|
Academic Papers
|
svg
|
9830abb964b1f59241d77a78eb5ba10b8c96793ffc6b5ab24c52253adaa905e8
|
2026-01-13T00:00:00-05:00
|
Leveraging 3D Representation Alignment and RGB Pretrained Priors for LiDAR Scene Generation
|
arXiv:2601.07692v1 Announce Type: new Abstract: LiDAR scene synthesis is an emerging solution to scarcity in 3D data for robotic tasks such as autonomous driving. Recent approaches employ diffusion or flow matching models to generate realistic scenes, but 3D data remains limited compared to RGB datasets with millions of samples. We introduce R3DPA, the first LiDAR scene generation method to unlock image-pretrained priors for LiDAR point clouds, and leverage self-supervised 3D representations for state-of-the-art results. Specifically, we (i) align intermediate features of our generative model with self-supervised 3D features, which substantially improves generation quality; (ii) transfer knowledge from large-scale image-pretrained generative models to LiDAR generation, mitigating limited LiDAR datasets; and (iii) enable point cloud control at inference for object inpainting and scene mixing with solely an unconditional model. On the KITTI-360 benchmark, R3DPA achieves state-of-the-art performance. Code and pretrained models are available at https://github.com/valeoai/R3DPA.
|
https://arxiv.org/abs/2601.07692
|
Academic Papers
|
svg
|
ac27286f3a5b65d670f2fed9b862cf8d5e8bf6aae6c469b24f385a4627574c09
|
2026-01-13T00:00:00-05:00
|
Smooth Operator: Smooth Verifiable Reward Activates Spatial Reasoning Ability of Vision-Language Model
|
arXiv:2601.07695v1 Announce Type: new Abstract: Vision-Language Models (VLMs) face a critical bottleneck in achieving precise numerical prediction for 3D scene understanding. Traditional reinforcement learning (RL) approaches, primarily based on relative ranking, often suffer from severe reward sparsity and gradient instability, failing to effectively exploit the verifiable signals provided by 3D physical constraints. Notably, in standard GRPO frameworks, relative normalization causes "near-miss" samples (characterized by small but non-zero errors) to suffer from advantage collapse. This leads to a severe data utilization bottleneck where valuable boundary samples are discarded during optimization. To address this, we introduce the Smooth Numerical Reward Activation (SNRA) operator and the Absolute-Preserving GRPO (AP-GRPO) framework. SNRA employs a dynamically parameterized Sigmoid function to transform raw feedback into a dense, continuous reward continuum. Concurrently, AP-GRPO integrates absolute scalar gradients to mitigate the numerical information loss inherent in conventional relative-ranking mechanisms. By leveraging this approach, we constructed Numerical3D-50k, a dataset comprising 50,000 verifiable 3D subtasks. Empirical results indicate that AP-GRPO achieves performance parity with large-scale supervised methods while maintaining higher data efficiency, effectively activating latent 3D reasoning in VLMs without requiring architectural modifications.
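The shape of such a sigmoid reward is straightforward to reproduce; the gain and threshold below are illustrative, not the paper's dynamic schedule:

```python
import numpy as np

def snra_reward(pred, target, k=50.0, tau=0.1):
    """Sigmoid-shaped reward over relative numerical error: near-miss
    predictions get dense, graded credit rather than a hard 0/1."""
    err = np.abs(pred - target) / (np.abs(target) + 1e-8)
    return 1.0 / (1.0 + np.exp(k * (err - tau)))  # ~1 when err << tau

for p in [2.0, 1.9, 1.5, 0.5]:  # predictions against a target of 2.0
    print(p, round(float(snra_reward(p, 2.0)), 3))
```

A binary reward would score the 1.9 and 0.5 predictions identically (both wrong); the smooth version separates them, which is exactly what keeps "near-miss" samples from collapsing to zero advantage under relative normalization.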
|
https://arxiv.org/abs/2601.07695
|
Academic Papers
|
svg
|
af31a602d2b124111be3cb4f557c2ecad18fc88fb50c4282ba04edd423e7d113
|
2026-01-13T00:00:00-05:00
|
Exploring the Meta-level Reasoning of Large Language Models via a Tool-based Multi-hop Tabular Question Answering Task
|
arXiv:2601.07696v1 Announce Type: new Abstract: Recent advancements in Large Language Models (LLMs) are increasingly focused on "reasoning" ability, a concept with many overlapping definitions in the LLM discourse. We take a more structured approach, distinguishing meta-level reasoning (denoting the process of reasoning about intermediate steps required to solve a task) from object-level reasoning (which concerns the low-level execution of the aforementioned steps). We design a novel question answering task, which is based on the values of geopolitical indicators for various countries over various years. Questions require breaking down into intermediate steps, retrieval of data, and mathematical operations over that data. The meta-level reasoning ability of LLMs is analysed by examining the selection of appropriate tools for answering questions. To bring greater depth to the analysis of LLMs beyond final answer accuracy, our task contains 'essential actions' against which we can compare the tool call output of LLMs to infer the strength of reasoning ability. We find that LLMs demonstrate good meta-level reasoning on our task, yet are flawed in some aspects of task understanding. We find that n-shot prompting has little effect on accuracy; error messages encountered do not often deteriorate performance; and provide additional evidence for the poor numeracy of LLMs. Finally, we discuss the generalisation and limitation of our findings to other task domains.
|
https://arxiv.org/abs/2601.07696
|
Academic Papers
|
svg
|
33d32b666e1a1fae8cbe19c456741d7614f0786992a5fc052f366943649fc2e5
|
2026-01-13T00:00:00-05:00
|
Emotional Support Evaluation Framework via Controllable and Diverse Seeker Simulator
|
arXiv:2601.07698v1 Announce Type: new Abstract: As emotional support chatbots have recently gained significant traction across both research and industry, a common evaluation strategy has emerged: use help-seeker simulators to interact with supporter chatbots. However, current simulators suffer from two critical limitations: (1) they fail to capture the behavioral diversity of real-world seekers, often portraying them as overly cooperative, and (2) they lack the controllability required to simulate specific seeker profiles. To address these challenges, we present a controllable seeker simulator driven by nine psychological and linguistic features that underpin seeker behavior. Using authentic Reddit conversations, we train our model via a Mixture-of-Experts (MoE) architecture, which effectively differentiates diverse seeker behaviors into specialized parameter subspaces, thereby enhancing fine-grained controllability. Our simulator achieves superior profile adherence and behavioral diversity compared to existing approaches. Furthermore, evaluating 7 prominent supporter models with our system uncovers previously obscured performance degradations. These findings underscore the utility of our framework in providing a more faithful and stress-tested evaluation for emotional support chatbots.
|
https://arxiv.org/abs/2601.07698
|
Academic Papers
|
svg
|
71445d236fadf0e4787342520c064215ba340fa575e7e3d57c322eaa754ad642
|
2026-01-13T00:00:00-05:00
|
Hidden Monotonicity: Explaining Deep Neural Networks via their DC Decomposition
|
arXiv:2601.07700v1 Announce Type: new Abstract: It has been demonstrated in various contexts that monotonicity leads to better explainability in neural networks. However, not every function can be well approximated by a monotone neural network. We demonstrate that monotonicity can still be used in two ways to boost explainability. First, we use an adaptation of the decomposition of a trained ReLU network into two monotone and convex parts, thereby overcoming numerical obstacles from an inherent blowup of the weights in this procedure. Our proposed saliency methods -- SplitCAM and SplitLRP -- improve on state-of-the-art results on both VGG16 and ResNet18 networks on ImageNet-S across all Quantus saliency metric categories. Second, we exhibit that training a model as the difference between two monotone neural networks results in a system with strong self-explainability properties.
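The building block of such a decomposition is the split of each affine layer into non-negative-weight parts; composing these (with appropriate bookkeeping across ReLUs) yields the two monotone networks. A single-layer sketch:

```python
import numpy as np

def split_monotone(W, b):
    """Write Wx + b as f(x) - g(x) with f, g both monotone non-decreasing,
    using the elementwise split W = W_plus - W_minus."""
    W_plus, W_minus = np.maximum(W, 0.0), np.maximum(-W, 0.0)
    f = lambda x: W_plus @ x + b  # non-negative weights: monotone in every input
    g = lambda x: W_minus @ x     # likewise monotone
    return f, g

rng = np.random.default_rng(1)
W, b, x = rng.normal(size=(4, 6)), rng.normal(size=4), rng.normal(size=6)
f, g = split_monotone(W, b)
print(np.allclose(f(x) - g(x), W @ x + b))  # True
```

The "inherent blowup of the weights" mentioned above refers to how these split weights grow when the construction is chained through many layers, the numerical obstacle the adapted decomposition is designed to overcome.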
|
https://arxiv.org/abs/2601.07700
|
Academic Papers
|
svg
|
a46536ffe275599cecbfbbfddd1e58f831241ba6c5f00863fc96983d2026a946
|
2026-01-13T00:00:00-05:00
|
Deep Whole-body Parkour
|
arXiv:2601.07701v1 Announce Type: new Abstract: Current approaches to humanoid control generally fall into two paradigms: perceptive locomotion, which handles terrain well but is limited to pedal gaits, and general motion tracking, which reproduces complex skills but ignores environmental capabilities. This work unites these paradigms to achieve perceptive general motion control. We present a framework where exteroceptive sensing is integrated into whole-body motion tracking, permitting a humanoid to perform highly dynamic, non-locomotion tasks on uneven terrain. By training a single policy to perform multiple distinct motions across varied terrestrial features, we demonstrate the non-trivial benefit of integrating perception into the control loop. Our results show that this framework enables robust, highly dynamic multi-contact motions, such as vaulting and dive-rolling, on unstructured terrain, significantly expanding the robot's traversability beyond simple walking or running. https://project-instinct.github.io/deep-whole-body-parkour
|
https://arxiv.org/abs/2601.07701
|
Academic Papers
|
svg
|
2b8fb94d7c6a29e0132bc0a09801e8a59f7231ce0c7965ae123b4780c1477107
|
2026-01-13T00:00:00-05:00
|
TMATDG: applying TDG methods to multiple scattering via T-matrix approximation
|
arXiv:2601.07704v1 Announce Type: new Abstract: We present a MATLAB package for the solution of multiple scattering problems, coupling Trefftz Discontinuous Galerkin methods for Helmholtz scattering with the T-matrix method. We rely on the TMATROM package to numerically approximate the T-matrices and deal with multiple scattering problems, providing a framework to handle scattering by polygonal obstacles.
|
https://arxiv.org/abs/2601.07704
|
Academic Papers
|
svg
|
16d95fe7b320ce27fec3329c3c3a9b8c2e7b039d3645becc32f6d12190b2dd42
|
2026-01-13T00:00:00-05:00
|
Is Agentic RAG worth it? An experimental comparison of RAG approaches
|
arXiv:2601.07711v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) systems are usually defined by the combination of a generator and a retrieval component that extracts textual context from a knowledge base to answer user queries. However, such basic implementations exhibit several limitations, including noisy or suboptimal retrieval, misuse of retrieval for out-of-scope queries, weak query-document matching, and variability or cost associated with the generator. These shortcomings have motivated the development of "Enhanced" RAG, where dedicated modules are introduced to address specific weaknesses in the workflow. More recently, the growing self-reflective capabilities of Large Language Models (LLMs) have enabled a new paradigm, which we refer to as "Agentic" RAG. In this approach, the LLM orchestrates the entire process, deciding which actions to perform, when to perform them, and whether to iterate, thereby reducing reliance on fixed, manually engineered modules. Despite the rapid adoption of both paradigms, it remains unclear which approach is preferable under which conditions. In this work, we conduct an extensive, empirically driven evaluation of Enhanced and Agentic RAG across multiple scenarios and dimensions. Our results provide practical insights into the trade-offs between the two paradigms, offering guidance on selecting the most effective RAG design for real-world applications, considering both costs and performance.
|
https://arxiv.org/abs/2601.07711
|
Academic Papers
|
svg
|
3068fd97102f5bddbea9339020ee8699e91fc0ff67e8a6ca27c3f2dec58b2168
|
2026-01-13T00:00:00-05:00
|
Enforcing Priority in Schedule-based User Equilibrium Transit Assignment
|
arXiv:2601.07712v1 Announce Type: new Abstract: Denied boarding in congested transit systems induces queuing delays and departure-time shifts that can reshape passenger flows. Correctly modeling these responses in transit assignment hinges on the enforcement of two priority rules: continuance priority for onboard passengers and first-come-first-served (FCFS) boarding among waiting passengers. Existing schedule-based models typically enforce these rules through explicit dynamic loading and group-level expected costs, yet discrete vehicle runs can induce nontrivial within-group cost differences that undermine behavioral consistency. We revisit the implicit-priority framework of Nguyen et al. (2001), which, by encoding boarding priority through the notion of available capacity, characterizes route and departure choices based on realized personal (rather than group-averaged) travel experiences. However, the framework lacks an explicit mathematical formulation and exact computational methods for finding equilibria. Here, we derive an equivalent nonlinear complementarity problem (NCP) formulation and establish equilibrium existence under mild conditions. We also show that multiple equilibria may exist, including behaviorally questionable ones. To rule out these artifacts, we propose a refined arc-level NCP formulation that not only corresponds to a tighter, behaviorally consistent equilibrium concept but also is more computationally tractable. We reformulate the NCP as a continuously differentiable mathematical program with equilibrium constraints (MPEC) and propose two solution algorithms. Numerical studies on benchmark instances and a Hong Kong case study demonstrate that the model reproduces continuance priority and FCFS queuing and captures departure-time shifts driven by the competition for boarding priority.
|
https://arxiv.org/abs/2601.07712
|
Academic Papers
|
svg
|
3e7900772374bdcb1db260c2b3237595e0259e91c14394cd35bc11b3df0b5958
|
2026-01-13T00:00:00-05:00
|
Safe Navigation under Uncertain Obstacle Dynamics using Control Barrier Functions and Constrained Convex Generators
|
arXiv:2601.07715v1 Announce Type: new Abstract: This paper presents a sampled-data framework for the safe navigation of controlled agents in environments cluttered with obstacles governed by uncertain linear dynamics. Collision-free motion is achieved by combining Control Barrier Function (CBF)-based safety filtering with set-valued state estimation using Constrained Convex Generators (CCGs). At each sampling time, a CCG estimate of each obstacle is obtained using a finite-horizon guaranteed estimation scheme and propagated over the sampling interval to obtain a CCG-valued flow that describes the estimated obstacle evolution. However, since CCGs are defined indirectly - as an affine transformation of a generator set subject to equality constraints, rather than as a sublevel set of a scalar function - converting the estimated obstacle flows into CBFs is a nontrivial task. One of the main contributions of this paper is a procedure to perform this conversion, ultimately yielding a CBF via a convex optimization problem whose validity is established by the Implicit Function Theorem. The resulting obstacle-specific CBFs are then merged into a single CBF that is used to design a safe controller through the standard Quadratic Program (QP)-based approach. Since CCGs support Minkowski sums, the proposed framework also naturally handles rigid-body agents and generalizes existing CBF-based rigid-body navigation designs to arbitrary agent and obstacle geometries. While the main contribution is general, the paper primarily focuses on agents with first-order control-affine dynamics and second-order strict-feedback dynamics. Simulation examples demonstrate the effectiveness of the proposed method.
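The final QP-based step is standard and compact. Below is the textbook CBF-QP safety filter for a single integrator and a static circular obstacle; the contribution described above, constructing the barrier from CCG obstacle estimates, sits upstream of this step:

```python
import numpy as np
import cvxpy as cp

# Single integrator x' = u; obstacle: disk of radius r centered at c.
# Barrier h(x) = ||x - c||^2 - r^2 >= 0 encodes safety.
x = np.array([1.5, 0.0]); c = np.array([0.0, 0.0]); r = 1.0
u_des = np.array([-1.0, 0.0])   # nominal input drives straight at the obstacle
alpha = 1.0                     # class-K gain (illustrative)

h = float((x - c) @ (x - c) - r ** 2)
grad_h = 2 * (x - c)

u = cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_des)),
                  [grad_h @ u + alpha * h >= 0])  # dh/dt + alpha*h >= 0
prob.solve()
print(np.round(u.value, 3))  # minimally modified safe input, here [-0.417, 0]
```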
|
https://arxiv.org/abs/2601.07715
|
Academic Papers
|
svg
|
35ec349d306f12e55ac57332f99ce5b61be1df340f9f239919a11cc2a9095a5d
|
2026-01-13T00:00:00-05:00
|
Hiking in the Wild: A Scalable Perceptive Parkour Framework for Humanoids
|
arXiv:2601.07718v1 Announce Type: new Abstract: Achieving robust humanoid hiking in complex, unstructured environments requires transitioning from reactive proprioception to proactive perception. However, integrating exteroception remains a significant challenge: mapping-based methods suffer from state estimation drift; for instance, LiDAR-based methods do not handle torso jitter well. Existing end-to-end approaches often struggle with scalability and training complexity; specifically, some previous works using virtual obstacles are implemented case-by-case. In this work, we present \textit{Hiking in the Wild}, a scalable, end-to-end parkour perceptive framework designed for robust humanoid hiking. To ensure safety and training stability, we introduce two key mechanisms: a foothold safety mechanism combining scalable \textit{Terrain Edge Detection} with \textit{Foot Volume Points} to prevent catastrophic slippage on edges, and a \textit{Flat Patch Sampling} strategy that mitigates reward hacking by generating feasible navigation targets. Our approach utilizes a single-stage reinforcement learning scheme, mapping raw depth inputs and proprioception directly to joint actions, without relying on external state estimation. Extensive field experiments on a full-size humanoid demonstrate that our policy enables robust traversal of complex terrains at speeds up to 2.5 m/s. The training and deployment code is open-sourced to facilitate reproducible research and deployment on real robots with minimal hardware modifications.
|
https://arxiv.org/abs/2601.07718
|
Academic Papers
|
svg
|
dca2e1c16d14db9e4da9c471565a1a7e4920246b13d1506c9bde43897ac628ab
|
2026-01-13T00:00:00-05:00
|
FMAC: a Fair Fiducial Marker Accuracy Comparison Software
|
arXiv:2601.07723v1 Announce Type: new Abstract: This paper presents a method for carrying out fair comparisons of the accuracy of pose estimation using fiducial markers. These comparisons rely on large sets of high-fidelity synthetic images enabling deep exploration of the 6 degrees of freedom. A low-discrepancy sampling of the space makes it possible to check the correlations between each degree of freedom and the pose errors by plotting the 36 pairs of combinations. The images are rendered using a physically based ray tracing code that has been specifically developed to use the standard calibration coefficients of any camera directly. The software reproduces image distortions, defocus and diffraction blur. Furthermore, sub-pixel sampling is applied to sharp edges to enhance the fidelity of the rendered image. After introducing the rendering algorithm and its experimental validation, the paper proposes a method for evaluating the pose accuracy. This method is applied to well-known markers, revealing their strengths and weaknesses for pose estimation. The code is open source and available on GitHub.
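Low-discrepancy sampling of the six pose degrees of freedom is directly available in SciPy; the bounds below are illustrative:

```python
import numpy as np
from scipy.stats import qmc

# Sobol exploration of (x, y, z, roll, pitch, yaw); translations in
# metres, rotations in radians -- bounds chosen for illustration only.
sampler = qmc.Sobol(d=6, scramble=True, seed=0)
unit = sampler.random_base2(m=10)  # 2^10 = 1024 low-discrepancy samples
lo = np.array([-0.5, -0.5, 0.3, -np.pi, -np.pi / 2, -np.pi])
hi = np.array([ 0.5,  0.5, 2.0,  np.pi,  np.pi / 2,  np.pi])
poses = qmc.scale(unit, lo, hi)    # (1024, 6) candidate camera poses
print(poses.shape, poses[0].round(3))
```

Plotting each of the six coordinates of such a sample against the pose errors yields the 36 pairwise panels described above.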
|
https://arxiv.org/abs/2601.07723
|
Academic Papers
|
svg
|
791c2132ebf4ba57e2bfd17fc784dac7f92199c51166116d687d8edf8261a9dc
|
2026-01-13T00:00:00-05:00
|
Weak Composition Lattices and Ring-Linear Anticodes
|
arXiv:2601.07725v1 Announce Type: new Abstract: Lattices and partially ordered sets have played an increasingly important role in coding theory, providing combinatorial frameworks for studying structural and algebraic properties of error-correcting codes. Motivated by recent works connecting lattice theory, anticodes, and coding-theoretic invariants, we investigate ring-linear codes endowed with the Lee metric. We introduce and characterize optimal Lee-metric anticodes over the ring $\mathbb{Z}/p^s\mathbb{Z}$. We show that the family of such anticodes admits a natural partition into subtypes and forms a lattice under inclusion. We establish a bijection between this lattice and a lattice of weak compositions ordered by dominance. As an application, we use this correspondence to introduce new invariants for Lee-metric codes via an anticode approach.
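For context, the Lee metric referenced above is the standard one on $\mathbb{Z}/p^s\mathbb{Z}$: with coset representatives taken in $\{0, 1, \dots, p^s - 1\}$,

```latex
w_L(a) = \min\{a,\; p^s - a\},
\qquad
d_L(x, y) = \sum_{i=1}^{n} w_L\big((x_i - y_i) \bmod p^s\big).
```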
|
https://arxiv.org/abs/2601.07725
|
Academic Papers
|
svg
|
74436101e916faf88689bc5729980ac51033bc1ecc9b70798eee314fa35c0c49
|
2026-01-13T00:00:00-05:00
|
TeeMAF: A TEE-Based Mutual Attestation Framework for On-Chain and Off-Chain Functions in Blockchain DApps
|
arXiv:2601.07726v1 Announce Type: new Abstract: The rapid development of Internet of Things (IoT) technology has led to growing concerns about data security and user privacy in the interactions within distributed systems. Decentralized Applications (DApps) in distributed systems consist of on-chain and off-chain functions, where on-chain functions are smart contracts running in the blockchain network, while off-chain functions operate outside the blockchain. Since smart contracts cannot access off-chain information, they cannot verify whether the off-chain functions (i.e., the software components) they interact with have been tampered with. As a result, establishing mutual trust between the on-chain smart contracts and the off-chain functions remains a significant challenge. To address the challenge, this paper introduces TeeMAF, a generic framework for mutual attestation between on-chain and off-chain functions, leveraging Trusted Execution Environments (TEE), specifically Intel Software Guard Extensions (SGX), SCONE (a TEE container on top of Intel SGX), and remote attestation technologies. This ensures that the deployed off-chain functions of a DApp execute in a provably secure computing environment and achieve mutual attestation with the interacting on-chain functions. Through a security analysis of TeeMAF, the reliability of deployed DApps can be verified, ensuring their correct execution. Furthermore, based on this framework, this paper proposes a decentralized resource orchestration platform (a specific DApp) for deploying applications over untrusted environments. The system is implemented on Ethereum and benchmarked using Hyperledger Caliper. Performance evaluation focusing on throughput and latency demonstrates that, compared to platforms without a mutual attestation scheme, the performance overhead remains within an acceptable range.
|
https://arxiv.org/abs/2601.07726
|
Academic Papers
|
svg
|
4623ea893c4c72472243b11f326465545ca9aeff40ea6b304d1f96220f20b358
|
2026-01-13T00:00:00-05:00
|
Explicit complex time integrators for stiff problems
|
arXiv:2601.07730v1 Announce Type: new Abstract: Most numerical methods for time integration use real-valued time steps. Complex time steps, however, can provide an additional degree of freedom, as we can select the magnitude of the time step in both the real and imaginary directions. We show that specific paths in the complex time plane lead to expanded stability regions, providing clear computational advantages for complex-valued systems. In particular, we highlight the Schr\"odinger equation, for which complex time integrators can be uniquely optimal. Furthermore, we demonstrate that these benefits extend to certain classes of real-valued stiff systems by coupling complex time steps with the Projective Integration method.
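A toy illustration of the idea, with the rotation angle chosen by us purely for demonstration: explicit Euler on $\psi' = -iH\psi$ is unstable for every real step size, but tilting the step slightly into the complex plane damps all modes instead.

```python
import numpy as np

H = np.diag([1.0, 2.0, 5.0])            # toy Hermitian Hamiltonian
psi = np.ones(3, dtype=complex) / np.sqrt(3)

h, theta = 0.05, -0.15                   # magnitude and complex rotation
dt = h * np.exp(1j * theta)              # complex time step

for _ in range(200):
    psi = psi + dt * (-1j) * (H @ psi)   # explicit Euler update

# With theta = 0 the norm grows without bound; with the tilted step the
# amplification factor |1 - 1j*lam*dt| stays below 1 for every eigenvalue.
print(np.linalg.norm(psi))
```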
|
https://arxiv.org/abs/2601.07730
|
Academic Papers
|
svg
|
c11e574a0095cb4fe4296187fd328099cc8084dcb1f6d89030313a584ba8ec3f
|
2026-01-13T00:00:00-05:00
|
Evaluating Impacts of Traffic Regulations in Complex Mobility Systems Using Scenario-Based Simulations
|
arXiv:2601.07735v1 Announce Type: new Abstract: Urban traffic regulation policies are increasingly used to address congestion, emissions, and accessibility in cities, yet their impacts are difficult to assess due to the socio-technical complexity of urban mobility systems. Recent advances in data availability and computational power enable new forms of model-driven, simulation-based decision support for transportation policy design. This paper proposes a novel simulation paradigm for the ex-ante evaluation of both direct impacts (e.g., traffic conditions, modal shift, emissions) and indirect impacts spanning transportation-related effects, social equity, and economic accessibility. The approach integrates a multi-layer urban mobility model combining a physical layer of networks, flows, and emissions with a social layer capturing behavioral responses and adaptation to policy changes. Real-world data are used to instantiate the current "as-is" scenario, while policy alternatives and behavioral assumptions are encoded as model parameters to generate multiple "what-if" scenarios. The framework supports systematic comparison across scenarios by analyzing variations in simulated outcomes induced by policy interventions. The proposed approach is illustrated through a case study aiming to assess the impacts of introducing broad urban traffic restriction schemes. Results demonstrate the framework's ability to explore alternative regulatory designs and user responses, supporting informed and anticipatory evaluation of urban traffic policies.
|
https://arxiv.org/abs/2601.07735
|
Academic Papers
|
svg
|
1f05f2fe047dac679ef62f00ba4d91e0d6506088cb05c3a3b433d32893522cb0
|
2026-01-13T00:00:00-05:00
|
Evaluating the encoding competence of visual language models using uncommon actions
|
arXiv:2601.07737v1 Announce Type: new Abstract: We propose the UAIT (Uncommon-sense Action Image-Text) dataset, a new evaluation benchmark designed to test the semantic understanding ability of visual language models (VLMs) in uncommon-sense action scenes. Unlike previous datasets that focus on common visual scenes with statistical frequency advantages, UAIT challenges models with grammatically reasonable but semantically counter-common sense image-text pairs. Such tasks require models to go beyond superficial pattern recognition and demonstrate a deep understanding of agent-patient relationships and physical feasibility. To build UAIT, we designed a semi-automated process to synthesize high-quality uncommon-sense image-text samples using large language models, few-shot prompt engineering, and text-to-image generation. Each sample is accompanied by a carefully designed multiple-choice question to test the model's competence in fine-grained reasoning. We evaluate multiple state-of-the-art visual language models and compare them with models based on contrastive learning. Experiments show that all models perform significantly worse than humans in semantic judgment, especially in distinguishing grammatical correctness from semantic rationality. Further experiments show that even a lightweight model can improve its accuracy after fine-tuning, demonstrating the great potential of directional adaptation. This study not only reveals the key weaknesses of VLMs, but also provides diagnostic tools and research directions for the development of robust models with real visual semantic reasoning capabilities.
|
https://arxiv.org/abs/2601.07737
|
Academic Papers
|
svg
|
9761cea9d15fb32d44487eab412ab2744596f99ccd20ac0a3330366bd04442d6
|
2026-01-13T00:00:00-05:00
|
Predefined-time One-Shot Cooperative Estimation, Guidance, and Control for Simultaneous Target Interception
|
arXiv:2601.07744v1 Announce Type: new Abstract: This work develops a unified nonlinear estimation-guidance-control framework for cooperative simultaneous interception of a stationary target under a heterogeneous sensing topology, where sensing capabilities are non-uniform across interceptors. Specifically, only a subset of agents is instrumented with onboard seekers (informed/seeker-equipped agents), whereas the rest of them (seeker-less agents) acquire the information about the target indirectly via the informed agents and execute a distributed cooperative guidance for simultaneous target interception. To address the resulting partial observability, a predefined-time distributed observer is leveraged, guaranteeing convergence of the target state estimates for seeker-less agents through information exchange with seeker-equipped neighbors over a directed communication graph. Thereafter, an improved time-to-go estimate accounting for wide launch envelopes is utilized to design the distributed cooperative guidance commands. This estimate is coupled with a predefined-time consensus protocol, ensuring consensus in the agents' time-to-go values. The temporal upper bounds within which both observer error and time-to-go consensus error converge to zero can be prescribed as design parameters. Furthermore, the cooperative guidance commands are realized by means of an autopilot, wherein the interceptor is steered by canard actuation. The corresponding fin deflection commands are generated using a predefined-time convergent sliding mode control law. This enables the autopilot to precisely track the commanded lateral acceleration within a design-specified time, while maintaining non-singularity of the overall design. Theoretical guarantees are supported by numerical simulations across diverse engagement geometries, verifying the estimation accuracy, the cooperative interception performance, and the autopilot response using the proposed scheme.
|
https://arxiv.org/abs/2601.07744
|
Academic Papers
|
svg
|
c4c05170d99632e564e776fa03f2597cb32907e74d931de9a376fa861bbccf04
|
2026-01-13T00:00:00-05:00
|
Improving Domain Generalization in Contrastive Learning using Adaptive Temperature Control
|
arXiv:2601.07748v1 Announce Type: new Abstract: Self-supervised pre-training with contrastive learning is a powerful method for learning from sparsely labeled data. However, performance can drop considerably when there is a shift in the distribution of data from training to test time. We study this phenomenon in a setting in which the training data come from multiple domains, and the test data come from a domain not seen at training that is subject to significant covariate shift. We present a new method for contrastive learning that incorporates domain labels to increase the domain invariance of learned representations, leading to improved out-of-distribution generalization. Our method adjusts the temperature parameter in the InfoNCE loss -- which controls the relative weighting of negative pairs -- using the probability that a negative sample comes from the same domain as the anchor. This upweights pairs from more similar domains, encouraging the model to discriminate samples based on domain-invariant attributes. Through experiments on a variant of the MNIST dataset, we demonstrate that our method yields better out-of-distribution performance than domain generalization baselines. Furthermore, our method maintains strong in-distribution task performance, substantially outperforming baselines on this measure.
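The abstract does not spell out the exact temperature schedule, so the PyTorch sketch below is one plausible instantiation: each negative's temperature shrinks with the estimated probability that it shares the anchor's domain, raising its logit and hence its weight in the loss; `alpha` and the linear schedule are our assumptions.

```python
import torch
import torch.nn.functional as F

def adaptive_temperature_infonce(anchor, positive, negatives,
                                 p_same_domain, base_tau=0.1, alpha=0.5):
    """anchor, positive: (d,); negatives: (K, d);
    p_same_domain: (K,) probability each negative shares the anchor's domain."""
    pos_logit = F.cosine_similarity(anchor, positive, dim=-1) / base_tau
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=-1)
    neg_tau = base_tau * (1.0 - alpha * p_same_domain)  # smaller tau => upweighted
    logits = torch.cat([pos_logit.view(1), neg_sim / neg_tau]).unsqueeze(0)
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```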
|
https://arxiv.org/abs/2601.07748
|
Academic Papers
|
svg
|
6be490406b923d2c3b558cf2af46a98bb4ffb5aefff1c3793d81ff53715fb143
|
2026-01-13T00:00:00-05:00
|
On the application of the Wasserstein metric to 2D curves classification
|
arXiv:2601.07749v1 Announce Type: new Abstract: In this work we analyse a number of variants of the Wasserstein distance which allow the classification to focus on prescribed parts (fragments) of the 2D curves being classified. These variants are based on the use of a number of discrete probability measures which reflect the importance of given fragments of curves. The performance of this approach is tested through a series of experiments related to the clustering analysis of 2D curves performed on data coming from the field of archaeology.
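A minimal sketch of such a weighted discrete Wasserstein distance, with fragment importance encoded in the probability weights; the transport plan is found by solving the linear program directly, and all curves and weights below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def weighted_wasserstein(px, wx, py, wy):
    """W1 distance between weighted point samplings of two 2D curves."""
    C = cdist(px, py)                      # ground cost matrix, shape (n, m)
    n, m = C.shape
    A_eq, b_eq = [], []
    for i in range(n):                     # row sums of the plan = wx
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
        A_eq.append(row); b_eq.append(wx[i])
    for j in range(m):                     # column sums of the plan = wy
        col = np.zeros(n * m); col[j::m] = 1
        A_eq.append(col); b_eq.append(wy[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

# Weight the first half of each curve twice as heavily (the "fragment").
t = np.linspace(0, np.pi, 20)
x = np.stack([t, np.sin(t)], axis=1)
y = np.stack([t, 1.2 * np.sin(t)], axis=1)
w = np.where(t < np.pi / 2, 2.0, 1.0); w /= w.sum()
print(weighted_wasserstein(x, w, y, w))
```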
|
https://arxiv.org/abs/2601.07749
|
Academic Papers
|
svg
|
0741fb3de07a3c4851f30ffe6f2ef7c3aa5aef9673e85e656b39f4d530455d0e
|
2026-01-13T00:00:00-05:00
|
Structure First, Reason Next: Enhancing a Large Language Model using Knowledge Graph for Numerical Reasoning in Financial Documents
|
arXiv:2601.07754v1 Announce Type: new Abstract: Numerical reasoning is an important task in the analysis of financial documents. It helps in understanding and performing numerical predictions with logical conclusions for the given query seeking answers from financial texts. Recently, Large Language Models (LLMs) have shown promising results in multiple Question-Answering (Q-A) systems with the capability of logical reasoning. As documents related to finance often consist of long and complex financial contexts, LLMs appear well-suited for building high-quality automated financial question-answering systems. However, LLMs often face challenges in accurately processing the various numbers within financial reports. Extracting numerical data from unstructured text and semi-structured tables, and reliably performing accurate calculations, remains a significant bottleneck for numerical reasoning in most state-of-the-art LLMs. Recent studies have shown that structured data augmentations, such as Knowledge Graphs (KGs), have notably improved the predictions of LLMs along with logical explanations. Thus, it is important to consider the structured information inherent in financial reports when using LLMs for various financial analytics. This paper proposes a framework to incorporate structured information using KGs along with LLM predictions for numerical reasoning tasks. The KGs are extracted from the document under processing using a proposed schema. We evaluated our proposed framework on the FinQA benchmark, using an open-source LLM, namely Llama 3.1 8B Instruct. We observed that the proposed framework improved execution accuracy by approximately 12% relative to the vanilla LLM.
|
https://arxiv.org/abs/2601.07754
|
Academic Papers
|
svg
|
bfd57b247505921f87c8268039d38684017709410950a4adaa5d2b443a7524a3
|
2026-01-13T00:00:00-05:00
|
On the Compact Discontinuous Galerkin method for polytopal meshes
|
arXiv:2601.07757v1 Announce Type: new Abstract: The Compact Discontinuous Galerkin method was introduced by Peraire and Persson in (SIAM J. Sci. Comput., 30, 1806--1824, 2008). In this work, we present the stability and convergence analysis for the $hp$-version of this method applied to elliptic problems on polytopal meshes. Moreover, we introduce fast and practical algorithms that allow the CDG, LDG, and BR2 methods to be implemented within a unified framework. Our numerical experiments show that the CDG method yields a compact stencil for the stiffness matrix, with faster assembly and solving times compared to the LDG and BR2 methods. We numerically study how coercivity depends on the method parameters for various mesh types, with particular focus on the number of facets per mesh element. Finally, we demonstrate the importance of choosing the correct directions for the numerical fluxes when using variable polynomial degrees.
|
https://arxiv.org/abs/2601.07757
|
Academic Papers
|
svg
|
b96aa7155bf9d0f99e983bb827efbea03beea5fb1b524f678a14f90dd84e06cd
|
2026-01-13T00:00:00-05:00
|
Free-RBF-KAN: Kolmogorov-Arnold Networks with Adaptive Radial Basis Functions for Efficient Function Learning
|
arXiv:2601.07760v1 Announce Type: new Abstract: Kolmogorov-Arnold Networks (KANs) have shown strong potential for efficiently approximating complex nonlinear functions. However, the original KAN formulation relies on B-spline basis functions, which incur substantial computational overhead due to De Boor's algorithm. To address this limitation, recent work has explored alternative basis functions such as radial basis functions (RBFs) that can improve computational efficiency and flexibility. Yet, standard RBF-KANs often sacrifice accuracy relative to the original KAN design. In this work, we propose Free-RBF-KAN, an RBF-based KAN architecture that incorporates adaptive learning grids and trainable smoothness to close this performance gap. Our method employs freely learnable RBF shapes that dynamically align grid representations with activation patterns, enabling expressive and adaptive function approximation. Additionally, we treat smoothness as a kernel parameter optimized jointly with network weights, without increasing computational complexity. We provide a general universality proof for RBF-KANs, which encompasses our Free-RBF-KAN formulation. Through a broad set of experiments, including multiscale function approximation, physics-informed machine learning, and PDE solution operator learning, Free-RBF-KAN achieves accuracy comparable to the original B-spline-based KAN while delivering faster training and inference. These results highlight Free-RBF-KAN as a compelling balance between computational efficiency and adaptive resolution, particularly for high-dimensional structured modeling tasks.
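The PyTorch layer below sketches the core idea as we read it from the abstract, assuming Gaussian RBF edge functions with per-edge learnable centers (the adaptive grid) and log-widths (the trainable smoothness); it is a minimal illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FreeRBFLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_centers=8):
        super().__init__()
        # Learnable grid: one set of centers per input coordinate.
        self.centers = nn.Parameter(
            torch.linspace(-1, 1, n_centers).repeat(in_dim, 1))
        # Smoothness as a kernel parameter, optimized with the weights.
        self.log_width = nn.Parameter(torch.zeros(in_dim, n_centers))
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim, n_centers) * 0.1)

    def forward(self, x):                          # x: (batch, in_dim)
        d = x.unsqueeze(-1) - self.centers         # (batch, in, K)
        phi = torch.exp(-(d / self.log_width.exp()) ** 2)  # Gaussian basis
        return torch.einsum("bik,oik->bo", phi, self.weight)

layer = FreeRBFLayer(2, 3)
print(layer(torch.rand(5, 2)).shape)  # torch.Size([5, 3])
```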
|
https://arxiv.org/abs/2601.07760
|
Academic Papers
|
svg
|
0e617a5173b191c94085c77e43e335790fb03b7e4025bbd9f8b69f4f9963032e
|
2026-01-13T00:00:00-05:00
|
Video Evidence to Reasoning: Efficient Video Understanding via Explicit Evidence Grounding
|
arXiv:2601.07761v1 Announce Type: new Abstract: Large Vision-Language Models (LVLMs) face a fundamental dilemma in video reasoning: they are caught between the prohibitive computational costs of verbose reasoning and the hallucination risks of efficient, ungrounded approaches. To resolve this, we introduce the Chain of Evidence (CoE), a novel framework that architecturally decouples and co-optimizes perceptual grounding and reasoning efficiency. CoE incorporates two core innovations: (1) A lightweight Evidence Grounding Module (EGM) that acts as a query-guided filter, dynamically identifying and extracting a compact set of high-fidelity visual evidence; and (2) An Evidence-Anchoring Protocol optimized via Reinforcement Learning. Crucially, we design a composite reward mechanism that enforces process alignment, compelling the model to strictly reference identified temporal anchors during deduction, thereby mitigating hallucinations. To enable this, we construct CoE-Instruct, a large-scale dataset (164k samples) featuring a novel dual-annotation schema for separate perception and reasoning supervision. Extensive experiments on five benchmarks, including Video-MME, MVBench, and VSI-Bench, demonstrate that CoE-enhanced models establish a new state-of-the-art. They significantly outperform existing methods in accuracy, proving CoE to be a powerful and practical paradigm for reliable video understanding.
|
https://arxiv.org/abs/2601.07761
|
Academic Papers
|
svg
|
3d84af963023412b050551ca502ab5ed5370531e09ce9265e101d00fd5f6ac4b
|
2026-01-13T00:00:00-05:00
|
Structural Approach to Guiding a Present-Biased Agent
|
arXiv:2601.07763v1 Announce Type: new Abstract: Time-inconsistent behavior, such as procrastination or abandonment of long-term goals, arises when agents evaluate immediate outcomes disproportionately higher than future ones. This leads to globally suboptimal behavior, where plans are frequently revised or abandoned entirely. In the influential model of Kleinberg and Oren (2014) such behavior is modeled by a present-biased agent navigating a task graph toward a goal, making locally optimal decisions at each step based on discounted future costs. As a result, the agent may repeatedly deviate from initial plans. Recent work by Belova et al. (2024) introduced a two-agent extension of this model, where a fully-aware principal attempts to guide the present-biased agent through a specific set of critical tasks without causing abandonment. This captures a rich class of principal-agent dynamics in behavioral settings. In this paper, we provide a comprehensive algorithmic characterization of this problem. We analyze its computational complexity through the framework of parameterized algorithms, focusing on graph parameters that naturally emerge in this setting, such as treewidth, vertex cover, and feedback vertex set. Our main result is a fixed-parameter tractable algorithm when parameterized by the treewidth of the task graph and the number of distinct (v,t)-path costs. Our algorithm captures several input settings, such as bounded edge costs and restricted task graph structure. We demonstrate that our main result yields efficient algorithms for a number of such configurations. We complement this with tight hardness results that highlight the extreme difficulty of the problem even on the simplest graphs with a bounded number of nodes and constant parameter values, and motivate our choice of parameters. We delineate tractable and intractable regions of the problem landscape, which include answers to open questions of Belova et al. (2024).
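For background, the underlying Kleinberg-Oren agent can be simulated in a few lines: at every node it re-plans with the immediate edge cost inflated by the bias b > 1, then commits only to the first edge of the chosen plan. The toy graph below is ours, chosen so the biased agent takes the costlier detour.

```python
import heapq

def present_biased_walk(graph, s, t, b=2.0):
    """graph: node -> {neighbor: edge_cost}; assumed to be a DAG."""
    def cost_to_go(u):                    # exact remaining cost via Dijkstra
        dist, pq = {u: 0.0}, [(0.0, u)]
        while pq:
            d, v = heapq.heappop(pq)
            if d > dist.get(v, float("inf")):
                continue
            for w, c in graph.get(v, {}).items():
                if d + c < dist.get(w, float("inf")):
                    dist[w] = d + c
                    heapq.heappush(pq, (d + c, w))
        return dist.get(t, float("inf"))

    path, v = [s], s
    while v != t:
        # Perceived cost of moving to w: b * c(v, w) + d(w, t).
        v = min(graph[v], key=lambda w: b * graph[v][w] + cost_to_go(w))
        path.append(v)
    return path

G = {"s": {"a": 1.0, "m": 4.0}, "a": {"t": 6.0}, "m": {"t": 1.0}, "t": {}}
print(present_biased_walk(G, "s", "t", b=3.0))  # ['s', 'a', 't']: cost 7 > optimal 5
```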
|
https://arxiv.org/abs/2601.07763
|
Academic Papers
|
svg
|
153c0d92011d15b763e5ab397cec281596adb6c042e9989c42d3370bc1897777
|
2026-01-13T00:00:00-05:00
|
Contrastive Learning with Narrative Twins for Modeling Story Salience
|
arXiv:2601.07765v1 Announce Type: new Abstract: Understanding narratives requires identifying which events are most salient for a story's progression. We present a contrastive learning framework for modeling narrative salience that learns story embeddings from narrative twins: stories that share the same plot but differ in surface form. Our model is trained to distinguish a story from both its narrative twin and a distractor with similar surface features but different plot. Using the resulting embeddings, we evaluate four narratologically motivated operations for inferring salience (deletion, shifting, disruption, and summarization). Experiments on short narratives from the ROCStories corpus and longer Wikipedia plot summaries show that contrastively learned story embeddings outperform a masked-language-model baseline, and that summarization is the most reliable operation for identifying salient sentences. If narrative twins are not available, random dropout can be used to generate the twins from a single story. Effective distractors can be obtained either by prompting LLMs or, in long-form narratives, by using different parts of the same story.
|
https://arxiv.org/abs/2601.07765
|
Academic Papers
|
svg
|
ec3839ebb161e7067280802c05fd29179d9c91812bd2f517410fed11d9f5742f
|
2026-01-13T00:00:00-05:00
|
Are LLM Decisions Faithful to Verbal Confidence?
|
arXiv:2601.07767v1 Announce Type: new Abstract: Large Language Models (LLMs) can produce surprisingly sophisticated estimates of their own uncertainty. However, it remains unclear to what extent this expressed confidence is tied to the reasoning, knowledge, or decision making of the model. To test this, we introduce $\textbf{RiskEval}$: a framework designed to evaluate whether models adjust their abstention policies in response to varying error penalties. Our evaluation of several frontier models reveals a critical dissociation: models are neither cost-aware when articulating their verbal confidence, nor strategically responsive when deciding whether to engage or abstain under high-penalty conditions. Even when extreme penalties render frequent abstention the mathematically optimal strategy, models almost never abstain, resulting in utility collapse. This indicates that calibrated verbal confidence scores may not be sufficient to create trustworthy and interpretable AI systems, as current models lack the strategic agency to convert uncertainty signals into optimal and risk-sensitive decisions.
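The utility-collapse claim rests on simple arithmetic: with reward +1 for a correct answer, penalty -k for an error, and 0 for abstaining, answering beats abstaining only when the model's success probability exceeds k/(1+k). A worked check, with payoffs assumed for illustration (RiskEval's exact values may differ):

```python
# Expected utility of answering at confidence p: p * 1 - (1 - p) * k.
# Setting this to zero gives the abstention threshold p* = k / (1 + k).
for k in (1, 4, 19):
    print(f"penalty {k}: abstain whenever confidence < {k / (1 + k):.2f}")
# penalty 1: 0.50   penalty 4: 0.80   penalty 19: 0.95
```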
|
https://arxiv.org/abs/2601.07767
|
Academic Papers
|
svg
|
f5e4b9311461d1bb52e80c70c5e53e6a67464f92e7ab562388c8935aa4420cba
|
2026-01-13T00:00:00-05:00
|
THETA: Triangulated Hand-State Estimation for Teleoperation and Automation in Robotic Hand Control
|
arXiv:2601.07768v1 Announce Type: new Abstract: The teleoperation of robotic hands is limited by the high costs of depth cameras and sensor gloves, commonly used to estimate hand relative joint positions (XYZ). We present a novel, cost-effective approach using three webcams for triangulation-based tracking to approximate relative joint angles (theta) of human fingers. We also introduce a modified DexHand, a low-cost robotic hand from TheRobotStudio, to demonstrate THETA's real-time application. Data collection involved 40 distinct hand gestures using three 640x480p webcams arranged at 120-degree intervals, generating over 48,000 RGB images. Joint angles were manually determined by measuring midpoints of the MCP, PIP, and DIP finger joints. Captured RGB frames were processed using a DeepLabV3 segmentation model with a ResNet-50 backbone for multi-scale hand segmentation. The segmented images were then HSV-filtered and fed into THETA's architecture, consisting of a MobileNetV2-based CNN classifier optimized for hierarchical spatial feature extraction and a 9-channel input tensor encoding multi-perspective hand representations. The classification model maps segmented hand views into discrete joint angles, achieving 97.18% accuracy, 98.72% recall, F1 Score of 0.9274, and a precision of 0.8906. In real-time inference, THETA captures simultaneous frames, segments hand regions, filters them, and compiles a 9-channel tensor for classification. Joint-angle predictions are relayed via serial to an Arduino, enabling the DexHand to replicate hand movements. Future research will increase dataset diversity, integrate wrist tracking, and apply computer vision techniques such as OpenAI-Vision. THETA potentially ensures cost-effective, user-friendly teleoperation for medical, linguistic, and manufacturing applications.
|
https://arxiv.org/abs/2601.07768
|
Academic Papers
|
svg
|
43a20877b6fd84ecfd0a6cf0107f5bc7df383dfdd6f638cd58174e2e72c1ff4f
|
2026-01-13T00:00:00-05:00
|
Beyond External Guidance: Unleashing the Semantic Richness Inside Diffusion Transformers for Improved Training
|
arXiv:2601.07773v1 Announce Type: new Abstract: Recent works such as REPA have shown that guiding diffusion models with external semantic features (e.g., DINO) can significantly accelerate the training of diffusion transformers (DiTs). However, this requires the use of pretrained external networks, introducing additional dependencies and reducing flexibility. In this work, we argue that DiTs actually have the power to guide the training of themselves, and propose \textbf{Self-Transcendence}, a simple yet effective method that achieves fast convergence using internal feature supervision only. It is found that the slow convergence in DiT training primarily stems from the difficulty of representation learning in shallow layers. To address this, we initially train the DiT model by aligning its shallow features with the latent representations from the pretrained VAE for a short phase (e.g., 40 epochs), then apply classifier-free guidance to the intermediate features, enhancing their discriminative capability and semantic expressiveness. These enriched internal features, learned entirely within the model, are used as supervision signals to guide a new DiT training. Compared to existing self-contained methods, our approach brings a significant performance boost. It can even surpass REPA in terms of generation quality and convergence speed, but without the need for any external pretrained models. Our method is not only more flexible for different backbones but also has the potential to be adopted for a wider range of diffusion-based generative tasks. The source code of our method can be found at https://github.com/csslc/Self-Transcendence.
|
https://arxiv.org/abs/2601.07773
|
Academic Papers
|
svg
|
bf4161f4cdd384436bad9fd9909471436a0031819d7b4eb70dd32912cdd214f2
|
2026-01-13T00:00:00-05:00
|
The Complexity of Games with Randomised Control
|
arXiv:2601.07775v1 Announce Type: new Abstract: We study the complexity of solving two-player infinite duration games played on a fixed finite graph, where the control of a node is not predetermined but rather assigned randomly. In classic random-turn games, control of each node is assigned randomly every time the node is visited during a play. In this work, we study two natural variants of this where control of each node is assigned only once: (i) control is assigned randomly during a play when a node is visited for the first time and does not change for the rest of the play and (ii) control is assigned a priori before the game starts for every node by independent coin tosses and then the game is played. We investigate the complexity of computing the winning probability with three kinds of objectives: reachability, parity, and energy. We show that the qualitative questions on all variants and all objectives are NL-complete. For the quantitative questions, we show that deciding whether the maximiser can win with probability at least a given threshold for every objective is PSPACE-complete under the first mechanism, and that computing the exact winning probability for every objective is #P-complete under the second. To complement our hardness results for the second mechanism, we propose randomised approximation schemes that efficiently estimate the winning probability for all three objectives, assuming a bounded number of parity colours and unary-encoded weights for energy objectives, and we empirically demonstrate their fast convergence.
|
https://arxiv.org/abs/2601.07775
|
Academic Papers
|
svg
|
0a6ff04bbf014c3173f842813cd3fc56c9fb04bfb0e007ba259003dad2fbc094
|
2026-01-13T00:00:00-05:00
|
DT-ICU: Towards Explainable Digital Twins for ICU Patient Monitoring via Multi-Modal and Multi-Task Iterative Inference
|
arXiv:2601.07778v1 Announce Type: new Abstract: We introduce DT-ICU, a multimodal digital twin framework for continuous risk estimation in intensive care. DT-ICU integrates variable-length clinical time series with static patient information in a unified multitask architecture, enabling predictions to be updated as new observations accumulate over the ICU stay. We evaluate DT-ICU on the large, publicly available MIMIC-IV dataset, where it consistently outperforms established baseline models under different evaluation settings. Our test-length analysis shows that meaningful discrimination is achieved shortly after admission, while longer observation windows further improve the ranking of high-risk patients in highly imbalanced cohorts. To examine how the model leverages heterogeneous data sources, we perform systematic modality ablations, revealing that the model learnt a reasonable structured reliance on interventions, physiological response observations, and contextual information. These analyses provide interpretable insights into how multimodal signals are combined and how trade-offs between sensitivity and precision emerge. Together, these results demonstrate that DT-ICU delivers accurate, temporally robust, and interpretable predictions, supporting its potential as a practical digital twin framework for continuous patient monitoring in critical care. The source code and trained model weights for DT-ICU are publicly available at https://github.com/GUO-W/DT-ICU-release.
|
https://arxiv.org/abs/2601.07778
|
Academic Papers
|
svg
|
d4309f0129a5221aaf951c74d824c6122e3dd782e6aa9dfaeae257ad8ea1ec7a
|
2026-01-13T00:00:00-05:00
|
OS-Symphony: A Holistic Framework for Robust and Generalist Computer-Using Agent
|
arXiv:2601.07779v1 Announce Type: new Abstract: While Vision-Language Models (VLMs) have significantly advanced Computer-Using Agents (CUAs), current frameworks struggle with robustness in long-horizon workflows and generalization in novel domains. These limitations stem from a lack of granular control over historical visual context curation and the absence of visual-aware tutorial retrieval. To bridge these gaps, we introduce OS-Symphony, a holistic framework that comprises an Orchestrator coordinating two key innovations for robust automation: (1) a Reflection-Memory Agent that utilizes milestone-driven long-term memory to enable trajectory-level self-correction, effectively mitigating visual context loss in long-horizon tasks; (2) Versatile Tool Agents featuring a Multimodal Searcher that adopts a SeeAct paradigm to navigate a browser-based sandbox to synthesize live, visually aligned tutorials, thereby resolving fidelity issues in unseen scenarios. Experimental results demonstrate that OS-Symphony delivers substantial performance gains across varying model scales, establishing new state-of-the-art results on three online benchmarks, notably achieving 65.84% on OSWorld.
|
https://arxiv.org/abs/2601.07779
|
Academic Papers
|
svg
|
6895aff0f3ffa5e2e4de95bf2dc761a0ebcde96b09b37941d43e9bc790a577cd
|
2026-01-13T00:00:00-05:00
|
Enhancing Self-Correction in Large Language Models through Multi-Perspective Reflection
|
arXiv:2601.07780v1 Announce Type: new Abstract: While Chain-of-Thought (CoT) prompting advances LLM reasoning, challenges persist in consistency, accuracy, and self-correction, especially for complex or ethically sensitive tasks. Existing single-dimensional reflection methods offer insufficient improvements. We propose MyGO Poly-Reflective Chain-of-Thought (PR-CoT), a novel methodology employing structured multi-perspective reflection. After initial CoT, PR-CoT guides the LLM to self-assess its reasoning across multiple predefined angles: logical consistency, information completeness, biases/ethics, and alternative solutions. Implemented purely via prompt engineering, this process refines the initial CoT into a more robust and accurate final answer without model retraining. Experiments across arithmetic, commonsense, ethical decision-making, and logical puzzles, using GPT-3.5 and GPT-4 models, demonstrate PR-CoT's superior performance. It significantly outperforms traditional CoT and existing reflection methods in logical consistency and error correction, with notable gains in nuanced domains like ethical decision-making. Ablation studies, human evaluations, and qualitative analyses further validate the contribution of each reflection perspective and the overall efficacy of our poly-reflective paradigm in fostering more reliable LLM reasoning.
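Because PR-CoT is implemented purely via prompt engineering, its control flow can be sketched directly; `llm` below is a hypothetical completion function, and the perspective prompts merely paraphrase the four angles named in the abstract.

```python
PERSPECTIVES = [
    "Check the reasoning above for logical consistency and flag any gaps.",
    "Check whether any relevant information is missing or unused.",
    "Check the reasoning for biases or ethical concerns.",
    "Propose an alternative solution path and compare it with the current one.",
]

def pr_cot(llm, question):
    reasoning = llm(f"{question}\nLet's think step by step.")  # initial CoT
    for prompt in PERSPECTIVES:                                # poly-reflection
        critique = llm(f"Question: {question}\nReasoning: {reasoning}\n{prompt}")
        reasoning = llm(f"Original reasoning: {reasoning}\n"
                        f"Revise it using this critique: {critique}")
    return llm(f"Given the refined reasoning below, state the final answer.\n{reasoning}")
```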
|
https://arxiv.org/abs/2601.07780
|
Academic Papers
|
svg
|
7734913a4197aa35fe2377cb8c1a7acfdaec2aa4f3d45d060996f0537452058b
|
2026-01-13T00:00:00-05:00
|
Beyond Single-Shot: Multi-step Tool Retrieval via Query Planning
|
arXiv:2601.07782v1 Announce Type: new Abstract: LLM agents operating over massive, dynamic tool libraries rely on effective retrieval, yet standard single-shot dense retrievers struggle with complex requests. These failures primarily stem from the disconnect between abstract user goals and technical documentation, and the limited capacity of fixed-size embeddings to model combinatorial tool compositions. To address these challenges, we propose TOOLQP, a lightweight framework that models retrieval as iterative query planning. Instead of single-shot matching, TOOLQP decomposes instructions into sub-tasks and dynamically generates queries to interact with the retriever, effectively bridging the semantic gap by targeting the specific sub-tasks required for composition. We train TOOLQP using synthetic query trajectories followed by optimization via Reinforcement Learning with Verifiable Rewards (RLVR). Experiments demonstrate that TOOLQP achieves state-of-the-art performance, exhibiting superior zero-shot generalization, robustness across diverse retrievers, and significant improvements in downstream agentic execution.
|
https://arxiv.org/abs/2601.07782
|
Academic Papers
|
svg
|
9148182fbee90bca7966ec840c9ea7bf648310ebc726aad97167fb87b8a8c044
|
2026-01-13T00:00:00-05:00
|
Affordable Data Collection System for UAVs Taxi Vibration Testing
|
arXiv:2601.07783v1 Announce Type: new Abstract: Structural vibration testing plays a key role in aerospace engineering for evaluating dynamic behaviour, ensuring reliability and verifying structural integrity. These tests rely on accurate and robust data acquisition systems (DAQ) to capture high-quality acceleration data. However, commercial DAQs that provide the required performance and features are often expensive and complex, limiting their accessibility for small-scale research and experimental applications. This work presents the design and experimental validation of an affordable and in-house-developed acceleration DAQ, tested on a small fixed-wing UAV through several Taxi Vibration Test (TVT) runs and ambient vibration measurements. The proposed system integrates several OrangePi 3 LTS single-board computers with multiple LSM6DS3TR-C MEMS inertial measurement units operating simultaneously via an Inter-Integrated Circuit (I2C) communication interface, managed under a Python-based master/slave architecture. Data is acquired at a stable sampling rate of approximately 208 Hz and post-processed using Welch's method to estimate their Power Spectral Density (PSD). Results confirm the system's ability to provide consistent multi-sensor acceleration data and repeatable PSD profiles under the same test conditions, thus demonstrating its reliability. With a total hardware cost below 600 EUR (approximately 690 USD), the developed DAQ offers a compact, scalable and cost-effective alternative for aerospace vibration analysis and structural testing.
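The post-processing step maps directly onto scipy: the sketch below runs Welch's method at the reported 208 Hz sampling rate on a synthetic stand-in signal (the 12 Hz excitation and record length are invented for illustration).

```python
import numpy as np
from scipy.signal import welch

fs = 208.0                        # sampling rate reported for the DAQ (Hz)
t = np.arange(0, 30, 1 / fs)      # 30 s synthetic stand-in for a TVT record
accel = np.sin(2 * np.pi * 12 * t) + 0.3 * np.random.randn(t.size)

freqs, psd = welch(accel, fs=fs, nperseg=1024)  # averaged periodograms
print(freqs[np.argmax(psd)])      # peak near the 12 Hz excitation
```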
|
https://arxiv.org/abs/2601.07783
|
Academic Papers
|
svg
|
f926c9d0b1fb559f44a1cf73b438be7ba0c4e345a9342f7c0b6f915f35a0a875
|
2026-01-13T00:00:00-05:00
|
"TODO: Fix the Mess Gemini Created": Towards Understanding GenAI-Induced Self-Admitted Technical Debt
|
arXiv:2601.07786v1 Announce Type: new Abstract: As large language models (LLMs) such as ChatGPT, Copilot, Claude, and Gemini become integrated into software development workflows, developers increasingly leave traces of AI involvement in their code comments. Among these, some comments explicitly acknowledge both the use of generative AI and the presence of technical shortcomings. Analyzing 6,540 LLM-referencing code comments from public Python and JavaScript-based GitHub repositories (November 2022-July 2025), we identified 81 that also self-admit technical debt (SATD). Developers most often describe postponed testing, incomplete adaptation, and limited understanding of AI-generated code, suggesting that AI assistance affects both when and why technical debt emerges. We term GenAI-Induced Self-admitted Technical debt (GIST) as a proposed conceptual lens to describe recurring cases where developers incorporate AI-generated code while explicitly expressing uncertainty about its behavior or correctness.
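The mining step implied above can be approximated with a simple two-pattern filter over code comments; the keyword lists below are illustrative guesses, not the authors' actual criteria.

```python
import re

AI_TOOLS = r"(chatgpt|copilot|claude|gemini|gpt-4|llm)"          # assumed list
SATD = r"(todo|fixme|hack|workaround|not sure|temporary|refactor)"

def is_gist_comment(comment: str) -> bool:
    """Flag comments that both reference a GenAI tool and admit debt."""
    text = comment.lower()
    return bool(re.search(AI_TOOLS, text)) and bool(re.search(SATD, text))

print(is_gist_comment("# TODO: Fix the mess Gemini created"))  # True
```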
|
https://arxiv.org/abs/2601.07786
|
Academic Papers
|
svg
|
a170b7c9928f33d9c7a2d54bab31fef0ddb5b421ae26ecc11ff86490d5984f72
|
2026-01-13T00:00:00-05:00
|
Passing the Baton: Shift Handovers within Cybersecurity Incident Response Teams
|
arXiv:2601.07788v1 Announce Type: new Abstract: Effective shift transitions are crucial for cybersecurity incident response teams, yet there is limited guidance on managing these handovers. This exploratory study aimed to develop guidelines for such transitions through the analysis of existing literature and consultation with practitioners. Two draft guidelines (A and B) were created based on existing literature and online resources. Six participants from the UK and international incident response teams, with experience in shift handovers, were interviewed about handover structure, challenges, training practices, and their views on the draft guidelines. The collected data indicate the importance of signposting, evolving handover procedures, individual differences in handover style and detail, and streamlining the handover procedure. Participants agreed the drafts included all relevant details but suggested adding a post-incident review section and a service section for outages or technical difficulties. This study establishes a foundation for enhancing transition practices in cybersecurity incident response teams.
|
https://arxiv.org/abs/2601.07788
|
Academic Papers
|
svg
|