| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| b5ea2192f3e995e8f622bd586880d02ef8e9f61134d84b5e68761df9964084a8 | 2026-01-13T00:00:00-05:00 | Foundational Analysis of Safety Engineering Requirements (SAFER) | arXiv:2601.06335v1 Announce Type: new Abstract: We introduce a framework for Foundational Analysis of Safety Engineering Requirements (SAFER), a model-driven methodology supported by Generative AI to improve the generation and analysis of safety requirements for complex safety-critical systems. Safety requirements are often specified by multiple stakeholders with uncoordinated objectives, leading to gaps, duplications, and contradictions that jeopardize system safety and compliance. Existing approaches are largely informal and insufficient for addressing these challenges. SAFER enhances Model-Based Systems Engineering (MBSE) by consuming requirement specification models and generating the following results: (1) mapping requirements to system functions, (2) identifying functions with insufficient requirement specifications, (3) detecting duplicate requirements, and (4) identifying contradictions within requirement sets. SAFER provides structured analysis, reporting, and decision support for safety engineers. We demonstrate SAFER on an autonomous drone system, significantly improving the detection of requirement inconsistencies, enhancing both efficiency and reliability of the safety engineering process. We show that Generative AI must be augmented by formal models and queried systematically to provide meaningful early-stage safety requirement specifications and robust safety architectures. | https://arxiv.org/abs/2601.06335 | Academic Papers | svg |
| bcdb97ab4b308420011f91ef7e26b3ed565b27d16ec581fb7f0d0121e518a8eb | 2026-01-13T00:00:00-05:00 | Future-as-Label: Scalable Supervision from Real-World Outcomes | arXiv:2601.06336v1 Announce Type: new Abstract: Many real-world prediction problems lack labels observable at prediction time, creating a temporal gap between prediction and outcome that yields supervision only after events resolve. To address this setting, we extend reinforcement learning with verifiable rewards to temporally resolved real-world prediction, and use it to train language models to make probabilistic forecasts under causally masked information with retrospective evaluation using proper scoring rules. Supervision is derived solely from post-resolution outcomes, preserving delayed-reward semantics. On real-world forecasting benchmarks, Qwen3-32B trained using Foresight Learning improves Brier score by 27% and halves calibration error relative to its pretrained baseline, and outperforms Qwen3-235B on both constructed future-event prediction tasks and the Metaculus benchmark despite a 7x parameter disadvantage. | https://arxiv.org/abs/2601.06336 | Academic Papers | svg |
| af43f78421a4cb3df82b9aece6555b451346721b0f12715c36a586ed6b160a91 | 2026-01-13T00:00:00-05:00 | Circuit Mechanisms for Spatial Relation Generation in Diffusion Transformers | arXiv:2601.06338v1 Announce Type: new Abstract: Diffusion Transformers (DiTs) have greatly advanced text-to-image generation, but models still struggle to generate the correct spatial relations between objects as specified in the text prompt. In this study, we adopt a mechanistic interpretability approach to investigate how a DiT can generate correct spatial relations between objects. We train, from scratch, DiTs of different sizes with different text encoders to learn to generate images containing two objects whose attributes and spatial relations are specified in the text prompt. We find that, although all the models can learn this task to near-perfect accuracy, the underlying mechanisms differ drastically depending on the choice of text encoder. When using random text embeddings, we find that the spatial-relation information is passed to image tokens through a two-stage circuit, involving two cross-attention heads that separately read the spatial relation and single-object attributes in the text prompt. When using a pretrained text encoder (T5), we find that the DiT uses a different circuit that leverages information fusion in the text tokens, reading spatial-relation and single-object information together from a single text token. We further show that, although the in-domain performance is similar for the two settings, their robustness to out-of-domain perturbations differs, potentially suggesting the difficulty of generating correct relations in real-world scenarios. | https://arxiv.org/abs/2601.06338 | Academic Papers | svg |
| a2a5cc034344fc958610c85b72ccafdc1a5edc93d5c9f74d85dd1e4987505edd | 2026-01-13T00:00:00-05:00 | Evaluating Robustness of Large Language Models in Enterprise Applications: Benchmarks for Perturbation Consistency Across Formats and Languages | arXiv:2601.06341v1 Announce Type: new Abstract: Enterprise LLM applications require consistently high quality and reliable performance across diverse scenarios, demanding robustness to minor variations. Existing research shows that even small prompt changes can lead to substantial differences in output, but has mainly focused on a narrow set of perturbations with small academic datasets, limiting their relevance to real-world applications. To address this, we present a comprehensive benchmark suite that evaluates robustness across multiple perturbation types, including general text edits (e.g., punctuation, whitespace), formatting changes (e.g., JSON, YAML), multilingual and cross-lingual inputs, and positional variations in instructions. Evaluating 11 models ranging from 4B to 120B+ parameters, we find that minor perturbations reduce performance by up to 40 percentage points on key enterprise metrics. Critically, we demonstrate that the relationship between model size and robustness is more nuanced than conventional assumptions suggest: an 8B parameter model (Ministral 3 8B) outperforms most larger models, while another 8B model (Llama 3.1 8B) performs worst overall. | https://arxiv.org/abs/2601.06341 | Academic Papers | svg |
| 7b177e5ea7bc557e29b5ad74b6eadf2de7e1c636859f0a9d9999f19ea7e52195 | 2026-01-13T00:00:00-05:00 | BlazeAIoT: A Modular Multi-Layer Platform for Real-Time Distributed Robotics Across Edge, Fog, and Cloud Infrastructures | arXiv:2601.06344v1 Announce Type: new Abstract: The increasing complexity of distributed robotics has driven the need for platforms that seamlessly integrate edge, fog, and cloud computing layers while meeting strict real-time constraints. This paper introduces BlazeAIoT, a modular multi-layer platform designed to unify distributed robotics across heterogeneous infrastructures. BlazeAIoT provides dynamic data transfer, configurable services, and integrated monitoring, while ensuring resilience, security, and programming language flexibility. The architecture leverages Kubernetes-based clusters, broker interoperability (DDS, Kafka, Redis, and ROS2), and adaptive data distribution mechanisms to optimize communication and computation across diverse environments. The proposed solution includes a multi-layer configuration service, dynamic and adaptive data bridging, and hierarchical rate limiting to handle large messages. The platform is validated through robotics scenarios involving navigation and artificial intelligence-driven large-scale message processing, demonstrating robust performance under real-time constraints. Results highlight BlazeAIoT's ability to dynamically allocate services across incomplete topologies, maintain system health, and minimize latency, making it a cost-aware, scalable solution for robotics and broader IoT applications, such as smart cities and smart factories. | https://arxiv.org/abs/2601.06344 | Academic Papers | svg |
| c1619d5dd8ae5907b2e3dfbe6afec327099148d10b149c300401afc3aaa5443a | 2026-01-13T00:00:00-05:00 | What Matters When Building Universal Multilingual Named Entity Recognition Models? | arXiv:2601.06347v1 Announce Type: new Abstract: Recent progress in universal multilingual named entity recognition (NER) has been driven by advances in multilingual transformer models and task-specific architectures, loss functions, and training datasets. Despite substantial prior work, we find that many critical design decisions for such models are made without systematic justification, with architectural components, training objectives, and data sources evaluated only in combination rather than in isolation. We argue that these decisions impede progress in the field by making it difficult to identify which choices improve model performance. In this work, we conduct extensive experiments around architectures, transformer backbones, training objectives, and data composition across a wide range of languages. Based on these insights, we introduce Otter, a universal multilingual NER model supporting over 100 languages. Otter achieves consistent improvements over strong multilingual NER baselines, outperforming GLiNER-x-base by 5.3pp in F1 and achieving competitive performance compared to large generative models such as Qwen3-32B, while being substantially more efficient. We release model checkpoints, training and evaluation code to facilitate reproducibility and future research. | https://arxiv.org/abs/2601.06347 | Academic Papers | svg |
| b7e871908d1737ab329c0a7a1c53620004766414fd4c9c9c27b7309b9c77ed26 | 2026-01-13T00:00:00-05:00 | Federated Learning and Class Imbalances | arXiv:2601.06348v1 Announce Type: new Abstract: Federated Learning (FL) enables collaborative model training across decentralized devices while preserving data privacy. However, real-world FL deployments face critical challenges such as data imbalances, including label noise and non-IID distributions. RHFL+, a state-of-the-art method, was proposed to address these challenges in settings with heterogeneous client models. This work investigates the robustness of RHFL+ under class imbalances through three key contributions: (1) reproduction of RHFL+ along with all benchmark algorithms under a unified evaluation framework; (2) extension of RHFL+ to real-world medical imaging datasets, including CBIS-DDSM, BreastMNIST and BHI; (3) a novel implementation using NVFlare, NVIDIA's production-level federated learning framework, enabling a modular, scalable and deployment-ready codebase. To validate effectiveness, extensive ablation studies, algorithmic comparisons under various noise conditions and scalability experiments across increasing numbers of clients are conducted. | https://arxiv.org/abs/2601.06348 | Academic Papers | svg |
| f4cf9df4aff1af2aee77faee94e28bbac2565b027a00ddd3821b917b4bdbcc7c | 2026-01-13T00:00:00-05:00 | Fixing ill-formed UTF-16 strings with SIMD instructions | arXiv:2601.06349v1 Announce Type: new Abstract: UTF-16 is a widely used Unicode encoding representing characters with one or two 16-bit code units. The format relies on surrogate pairs to encode characters beyond the Basic Multilingual Plane, requiring a high surrogate followed by a low surrogate. Ill-formed UTF-16 strings -- where surrogates are mismatched -- can arise from data corruption or improper encoding, posing security and reliability risks. Consequently, programming languages such as JavaScript include functions to fix ill-formed UTF-16 strings by replacing mismatched surrogates with the Unicode replacement character (U+FFFD). We propose using Single Instruction, Multiple Data (SIMD) instructions to handle multiple code units in parallel, enabling faster and more efficient execution. Our software is part of the Google JavaScript engine (V8) and thus part of several major Web browsers. | https://arxiv.org/abs/2601.06349 | Academic Papers | svg |
| 7d4910d3bccea0e9ffa0e07a4d53b8bd41fdcc05520908c8602c97a3e53ef179 | 2026-01-13T00:00:00-05:00 | A Fast and Effective Method for Euclidean Anticlustering: The Assignment-Based-Anticlustering Algorithm | arXiv:2601.06351v1 Announce Type: new Abstract: The anticlustering problem is to partition a set of objects into K equal-sized anticlusters such that the sum of distances within anticlusters is maximized. The anticlustering problem is NP-hard. We focus on anticlustering in Euclidean spaces, where the input data is tabular and each object is represented as a D-dimensional feature vector. Distances are measured as squared Euclidean distances between the respective vectors. Applications of Euclidean anticlustering include social studies, particularly in psychology, K-fold cross-validation in which each fold should be a good representative of the entire dataset, the creation of mini-batches for gradient descent in neural network training, and balanced K-cut partitioning. In particular, machine-learning applications involve million-scale datasets and very large values of K, making scalable anticlustering algorithms essential. Existing algorithms are either exact methods that can solve only small instances or heuristic methods, among which the most scalable is the exchange-based heuristic fast_anticlustering. We propose a new algorithm, the Assignment-Based Anticlustering algorithm (ABA), which scales to very large instances. A computational study shows that ABA outperforms fast_anticlustering in both solution quality and running time. Moreover, ABA scales to instances with millions of objects and hundreds of thousands of anticlusters within short running times, beyond what fast_anticlustering can handle. As a balanced K-cut partitioning method for tabular data, ABA is superior to the well-known METIS method in both solution quality and running time. The code of the ABA algorithm is available on GitHub. | https://arxiv.org/abs/2601.06351 | Academic Papers | svg |
| 4b9557a6245654e6c22e1654503b26a410c4872d2258b3dbd442435781631d69 | 2026-01-13T00:00:00-05:00 | CARD: Cluster-level Adaptation with Reward-guided Decoding for Personalized Text Generation | arXiv:2601.06352v1 Announce Type: new Abstract: Adapting large language models to individual users remains challenging due to the tension between fine-grained personalization and scalable deployment. We present CARD, a hierarchical framework that achieves effective personalization through progressive refinement. CARD first clusters users according to shared stylistic patterns and learns cluster-specific LoRA adapters, enabling robust generalization and strong low-resource performance. To capture individual differences within each cluster, we propose an implicit preference learning mechanism that contrasts user-authored text with cluster-level generations, allowing the model to infer user-specific style preferences without manual annotation. At inference time, CARD injects personalization exclusively at decoding via lightweight user preference vectors and low-rank logit corrections, while keeping the base model frozen. Experiments on the LaMP and LongLaMP benchmarks show that CARD achieves competitive or superior generation quality compared to state-of-the-art baselines, while significantly improving efficiency and scalability for practical personalized text generation. | https://arxiv.org/abs/2601.06352 | Academic Papers | svg |
| ae96f8fa430bad764046298b6d269dea308d0fcd81e2f864ee38fa52c12b136a | 2026-01-13T00:00:00-05:00 | Monkey Jump: MoE-Style PEFT for Efficient Multi-Task Learning | arXiv:2601.06356v1 Announce Type: new Abstract: Mixture-of-experts variants of parameter-efficient fine-tuning enable per-token specialization, but they introduce additional trainable routers and expert parameters, increasing memory usage and training cost. This undermines the core goal of parameter-efficient fine-tuning. We propose Monkey Jump, a method that brings mixture-of-experts-style specialization to parameter-efficient fine-tuning without introducing extra trainable parameters for experts or routers. Instead of adding new adapters as experts, Monkey Jump treats the adapters already present in each Transformer block (such as query, key, value, up, and down projections) as implicit experts and routes tokens among them. Routing is performed using k-means clustering with exponentially moving averaged cluster centers, requiring no gradients and no learned parameters. We theoretically show that token-wise routing increases expressivity and can outperform shared adapters by avoiding cancellation effects. Across multi-task experiments covering 14 text, 14 image, and 19 video benchmarks, Monkey Jump achieves competitive performance with mixture-of-experts-based parameter-efficient fine-tuning methods while using 7 to 29 times fewer trainable parameters, up to 48 percent lower memory consumption, and 1.5 to 2 times faster training. Monkey Jump is architecture-agnostic and can be applied to any adapter-based parameter-efficient fine-tuning method. | https://arxiv.org/abs/2601.06356 | Academic Papers | svg |
| d560b4db121be80a22bc358e6a9173247de111fc9f47a68e54e22c0844fb9061 | 2026-01-13T00:00:00-05:00 | Smart Privacy Policy Assistant: An LLM-Powered System for Transparent and Actionable Privacy Notices | arXiv:2601.06357v1 Announce Type: new Abstract: Most users agree to online privacy policies without reading or understanding them, even though these documents govern how personal data is collected, shared, and monetized. Privacy policies are typically long, legally complex, and difficult for non-experts to interpret. This paper presents the Smart Privacy Policy Assistant, an LLM-powered system that automatically ingests privacy policies, extracts and categorizes key clauses, assigns human-interpretable risk levels, and generates clear, concise explanations. The system is designed for real-time use through browser extensions or mobile interfaces, surfacing contextual warnings before users disclose sensitive information or grant risky permissions. We describe the end-to-end pipeline, including policy ingestion, clause categorization, risk scoring, and explanation generation, and propose an evaluation framework based on clause-level accuracy, policy-level risk agreement, and user comprehension. | https://arxiv.org/abs/2601.06357 | Academic Papers | svg |
| be317c1edc1db72d83f252949da02f5526ab0704abbf8632233adb6adacdc4f9 | 2026-01-13T00:00:00-05:00 | Average shortest-path length in word-adjacency networks: Chinese versus English | arXiv:2601.06361v1 Announce Type: new Abstract: Complex networks provide powerful tools for analyzing and understanding the intricate structures present in various systems, including natural language. Here, we analyze topology of growing word-adjacency networks constructed from Chinese and English literary works written in different periods. Unconventionally, instead of considering dictionary words only, we also include punctuation marks as if they were ordinary words. Our approach is based on two arguments: (1) punctuation carries genuine information related to emotional state, allows for logical grouping of content, provides a pause in reading, and facilitates understanding by avoiding ambiguity, and (2) our previous works have shown that punctuation marks behave like words in a Zipfian analysis and, if considered together with regular words, can improve authorship attribution in stylometric studies. We focus on a functional dependence of the average shortest path length $L(N)$ on a network size $N$ for different epochs and individual novels in their original language as well as for translations of selected novels into the other language. We approximate the empirical results with a growing network model and obtain satisfactory agreement between the two. We also observe that $L(N)$ behaves asymptotically similar for both languages if punctuation marks are included but becomes sizably larger for Chinese if punctuation marks are neglected. | https://arxiv.org/abs/2601.06361 | Academic Papers | svg |
| 319df01e22dbe6d74ddcca8ebc57d4181e940d1474ca010574e4dce16f4b39ea | 2026-01-13T00:00:00-05:00 | Styles + Persona-plug = Customized LLMs | arXiv:2601.06362v1 Announce Type: new Abstract: We discover a previously overlooked challenge in personalized text generation: personalization methods are increasingly applied under explicit style instructions, yet their behavior under such constraints remains poorly understood. To balance implicit personalization and explicit style, we formulate personalization as a distributional residual and propose PsPLUG, a lightweight soft-prompt plug-in trained with style-conditioned preference contrasts. Across the LaMP benchmark, our framework improves persona alignment, maintains stylistic fidelity, and outperforms retrieval-based and soft-prompt baselines with minimal computation. These results show that residual modeling provides a simple and principled foundation for controllable, style-aware LLM personalization. | https://arxiv.org/abs/2601.06362 | Academic Papers | svg |
| 304b55a376ccce0f3926f3c054d990674bee74cd7dfd8f75ed5fdec45aa6804c | 2026-01-13T00:00:00-05:00 | Human-in-the-Loop Interactive Report Generation for Chronic Disease Adherence | arXiv:2601.06364v1 Announce Type: new Abstract: Chronic disease management requires regular adherence feedback to prevent avoidable hospitalizations, yet clinicians lack time to produce personalized patient communications. Manual authoring preserves clinical accuracy but does not scale; AI generation scales but can undermine trust in patient-facing contexts. We present a clinician-in-the-loop interface that constrains AI to data organization and preserves physician oversight through recognition-based review. A single-page editor pairs AI-generated section drafts with time-aligned visualizations, enabling inline editing with visual evidence for each claim. This division of labor (AI organizes, clinician decides) targets both efficiency and accountability. In a pilot with three physicians reviewing 24 cases, AI successfully generated clinically personalized drafts matching physicians' manual authoring practice (overall mean 4.86/10 vs. 5.0/10 baseline), requiring minimal physician editing (mean 8.3% content modification) with zero safety-critical issues, demonstrating effective automation of content generation. However, review time remained comparable to manual practice, revealing an accountability paradox: in high-stakes clinical contexts, professional responsibility requires complete verification regardless of AI accuracy. We contribute three interaction patterns for clinical AI collaboration: bounded generation with recognition-based review via chart-text pairing, automated urgency flagging that analyzes vital trends and adherence patterns with fail-safe escalation for missed critical monitoring tasks, and progressive disclosure controls that reduce cognitive load while maintaining oversight. These patterns indicate that clinical AI efficiency requires not only accurate models, but also mechanisms for selective verification that preserve accountability. | https://arxiv.org/abs/2601.06364 | Academic Papers | svg |
| 468b3d7fbe95b65715018bc9f3315146edd80352ae7a6f9d1524dd335ca6f3b6 | 2026-01-13T00:00:00-05:00 | SafeGPT: Preventing Data Leakage and Unethical Outputs in Enterprise LLM Use | arXiv:2601.06366v1 Announce Type: new Abstract: Large Language Models (LLMs) are transforming enterprise workflows but introduce security and ethics challenges when employees inadvertently share confidential data or generate policy-violating content. This paper proposes SafeGPT, a two-sided guardrail system preventing sensitive data leakage and unethical outputs. SafeGPT integrates input-side detection/redaction, output-side moderation/reframing, and human-in-the-loop feedback. Experiments demonstrate SafeGPT effectively reduces data leakage risk and biased outputs while maintaining satisfaction. | https://arxiv.org/abs/2601.06366 | Academic Papers | svg |
| c3594aef4a7909cded3b450d4678b917ff6e7f40afac97c68c685c8fe2cf0438 | 2026-01-13T00:00:00-05:00 | ReAct: Reflection Attack Mitigation For Asymmetric Routing | arXiv:2601.06367v1 Announce Type: new Abstract: Amplification Reflection Distributed Denial-of-Service (AR-DDoS) attacks remain a formidable threat, exploiting stateless protocols to flood victims with illegitimate traffic. Recent advances have enabled data-plane defenses against such attacks, but existing solutions typically assume symmetric routing and are limited to a single switch. These assumptions fail in modern networks where asymmetry is common, resulting in dropped legitimate responses and persistent connectivity issues. This paper presents ReAct, an in-network defense for AR-DDoS that is robust to asymmetry. ReAct performs request-response correlation across switches using programmable data planes and a sliding window of Bloom filters. To handle asymmetric traffic, ReAct introduces a data-plane-based request forwarding mechanism, enabling switches to validate responses even when paths differ. ReAct can automatically adapt to routing changes with minimal intervention, ensuring continued protection even in dynamic network environments. We implemented ReAct on both a P4 interpreter and NVIDIA's BlueField-3, demonstrating its applicability across multiple platforms. Evaluation results show that ReAct filters nearly all attack traffic without dropping legitimate responses, even under high-volume attacks and asymmetry. Compared to state-of-the-art approaches, ReAct achieves significantly lower false positives. To our knowledge, ReAct is the first data-plane AR-DDoS defense that supports dynamic, cross-switch collaboration, making it uniquely suitable for deployment in networks with asymmetry. | https://arxiv.org/abs/2601.06367 | Academic Papers | svg |
| 841a5472cf616bc64bea38c26b6b1e6eea46bfba6edd5e5c3cbc5b410fe7d37c | 2026-01-13T00:00:00-05:00 | From Easy to Hard++: Promoting Differentially Private Image Synthesis Through Spatial-Frequency Curriculum | arXiv:2601.06368v1 Announce Type: new Abstract: To improve the quality of Differentially Private (DP) synthetic images, most studies have focused on improving the core optimization techniques (e.g., DP-SGD). Recently, we have witnessed a paradigm shift that takes these techniques off the shelf and studies how to use them together to achieve the best results. One notable work is DP-FETA, which proposes using 'central images' for 'warming up' the DP training and then using traditional DP-SGD. Inspired by DP-FETA, we are curious whether there are other such tools we can use together with DP-SGD. We first observe that using 'central images' mainly works for datasets where there are many samples that look similar. To handle scenarios where images could vary significantly, we propose FETA-Pro, which introduces frequency features as 'training shortcuts.' The complexity of frequency features lies between that of spatial features (captured by 'central images') and full images, allowing for a finer-grained curriculum for DP training. To incorporate these two types of shortcuts together, one challenge is to handle the training discrepancy between spatial and frequency features. To address it, we leverage the pipeline generation property of generative models (instead of having one model trained with multiple features/objectives, we can have multiple models working on different features, then feed the generated results from one model into another) and use a more flexible design. Specifically, FETA-Pro introduces an auxiliary generator to produce images aligned with noisy frequency features. Then, another model is trained with these images, together with spatial features and DP-SGD. Evaluated across five sensitive image datasets, FETA-Pro shows an average of 25.7% higher fidelity and 4.1% greater utility than the best-performing baseline, under a privacy budget $\epsilon = 1$. | https://arxiv.org/abs/2601.06368 | Academic Papers | svg |
| eddd075b098a6e20990164063ef726e83e6aa67485af01290f0e5bc69483d5d7 | 2026-01-13T00:00:00-05:00 | Talking to Extraordinary Objects: Folktales Offer Analogies for Interacting with Technology | arXiv:2601.06372v1 Announce Type: new Abstract: Speech and language are valuable for interacting with technology. It would be ideal to be able to decouple their use from anthropomorphization, which has recently met an important moment of reckoning. In the world of folktales, language is everywhere and talking to extraordinary objects is not unusual. This overview presents examples of the analogies that folktales offer. Extraordinary objects in folktales are diverse and also memorable. Language capacity and intelligence are not always connected to humanness. Consideration of folktales can offer inspiration and insight for using speech and language for interacting with technology. | https://arxiv.org/abs/2601.06372 | Academic Papers | svg |
| ee337ddcaaf8c73c99af119e417d6f687e31bce9e57fb7936269954f38c48b9f | 2026-01-13T00:00:00-05:00 | DemMA: Dementia Multi-Turn Dialogue Agent with Expert-Guided Reasoning and Action Simulation | arXiv:2601.06373v1 Announce Type: new Abstract: Simulating dementia patients with large language models (LLMs) is challenging due to the need to jointly model cognitive impairment, emotional dynamics, and nonverbal behaviors over long conversations. We present DemMA, an expert-guided dementia dialogue agent for high-fidelity multi-turn patient simulation. DemMA constructs clinically grounded dementia personas by integrating pathology information, personality traits, and subtype-specific memory-status personas informed by clinical experts. To move beyond text-only simulation, DemMA explicitly models nonverbal behaviors, including motion, facial expressions, and vocal cues. We further introduce a Chain-of-Thought distillation framework that trains a single LLM to jointly generate reasoning traces, patient utterances, and aligned behavioral actions within one forward pass, enabling efficient deployment without multi-agent inference. Extensive evaluations with experts, medical students, and LLM judges demonstrate that DemMA significantly outperforms strong baselines across multiple metrics. | https://arxiv.org/abs/2601.06373 | Academic Papers | svg |
| 343c642152c2d8ee8756b7bdd178f6ec4b286c882dd712447184e10413e058d3 | 2026-01-13T00:00:00-05:00 | HiMem: Hierarchical Long-Term Memory for LLM Long-Horizon Agents | arXiv:2601.06377v1 Announce Type: new Abstract: Although long-term memory systems have made substantial progress in recent years, they still exhibit clear limitations in adaptability, scalability, and self-evolution under continuous interaction settings. Inspired by cognitive theories, we propose HiMem, a hierarchical long-term memory framework for long-horizon dialogues, designed to support memory construction, retrieval, and dynamic updating during sustained interactions. HiMem constructs cognitively consistent Episode Memory via a Topic-Aware Event–Surprise Dual-Channel Segmentation strategy, and builds Note Memory that captures stable knowledge through a multi-stage information extraction pipeline. These two memory types are semantically linked to form a hierarchical structure that bridges concrete interaction events and abstract knowledge, enabling efficient retrieval without sacrificing information fidelity. HiMem supports both hybrid and best-effort retrieval strategies to balance accuracy and efficiency, and incorporates conflict-aware Memory Reconsolidation to revise and supplement stored knowledge based on retrieval feedback. This design enables continual memory self-evolution over long-term use. Experimental results on long-horizon dialogue benchmarks demonstrate that HiMem consistently outperforms representative baselines in accuracy, consistency, and long-term reasoning, while maintaining favorable efficiency. Overall, HiMem provides a principled and scalable design paradigm for building adaptive and self-evolving LLM-based conversational agents. The code is available at https://github.com/jojopdq/HiMem. | https://arxiv.org/abs/2601.06377 | Academic Papers | svg |
c1b495de4843c42aa1c527e38f5b44defa5d2cf0a3611f5a501a127912bba39d
|
2026-01-13T00:00:00-05:00
|
RigMo: Unifying Rig and Motion Learning for Generative Animation
|
arXiv:2601.06378v1 Announce Type: new Abstract: Despite significant progress in 4D generation, rig and motion, the core structural and dynamic components of animation, are typically modeled as separate problems. Existing pipelines rely on ground-truth skeletons and skinning weights for motion generation and treat auto-rigging as an independent process, undermining scalability and interpretability. We present RigMo, a unified generative framework that jointly learns rig and motion directly from raw mesh sequences, without any human-provided rig annotations. RigMo encodes per-vertex deformations into two compact latent spaces: a rig latent that decodes into explicit Gaussian bones and skinning weights, and a motion latent that produces time-varying SE(3) transformations. Together, these outputs define an animatable mesh with explicit structure and coherent motion, enabling feed-forward rig and motion inference for deformable objects. Beyond unified rig-motion discovery, we introduce a Motion-DiT model operating in RigMo's latent space and demonstrate that these structure-aware latents can naturally support downstream motion generation tasks. Experiments on DeformingThings4D, Objaverse-XL, and TrueBones demonstrate that RigMo learns smooth, interpretable, and physically plausible rigs, while achieving superior reconstruction and category-level generalization compared to existing auto-rigging and deformation baselines. RigMo establishes a new paradigm for unified, structure-aware, and scalable dynamic 3D modeling.
|
https://arxiv.org/abs/2601.06378
|
Academic Papers
|
svg
|
05ac02058ca182cfefc0fb99e61b82efefc66e060a882c0f74067a5073e95715
|
2026-01-13T00:00:00-05:00
|
Hierarchical Pooling and Explainability in Graph Neural Networks for Tumor and Tissue-of-Origin Classification Using RNA-seq Data
|
arXiv:2601.06381v1 Announce Type: new Abstract: This study explores the use of graph neural networks (GNNs) with hierarchical pooling and multiple convolution layers for cancer classification based on RNA-seq data. We combine gene expression data from The Cancer Genome Atlas (TCGA) with a precomputed STRING protein-protein interaction network to classify tissue origin and distinguish between normal and tumor samples. The model employs Chebyshev graph convolutions (K=2) and weighted pooling layers, aggregating gene clusters into 'supernodes' across multiple coarsening levels. This approach enables dimensionality reduction while preserving meaningful interactions. Saliency methods were applied to interpret the model by identifying key genes and biological processes relevant to cancer. Our findings reveal that increasing the number of convolution and pooling layers did not enhance classification performance. The highest F1-macro score (0.978) was achieved with a single pooling layer. However, adding more layers resulted in over-smoothing and performance degradation. Nevertheless, the model proved highly interpretable through gradient methods, identifying known cancer-related genes and highlighting enriched biological processes, and its hierarchical structure can be used to develop new explainable architectures. Overall, while deeper GNN architectures did not improve performance, the hierarchical pooling structure provided valuable insights into tumor biology, making GNNs a promising tool for cancer biomarker discovery and interpretation.
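The Chebyshev graph convolution of order K=2 that the abstract mentions can be sketched as below. The function name, toy shapes, and use of a dense rescaled Laplacian are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def chebyshev_conv(X, L_norm, Theta):
    """Order-2 Chebyshev graph convolution (three polynomial terms).

    X      : (n_nodes, f_in) node features
    L_norm : (n_nodes, n_nodes) rescaled Laplacian, 2L/lambda_max - I
    Theta  : three (f_in, f_out) weight matrices, one per Chebyshev term
    """
    T0 = X                          # T_0(L) X = X
    T1 = L_norm @ X                 # T_1(L) X = L X
    T2 = 2 * L_norm @ T1 - T0       # recurrence: T_k = 2 L T_{k-1} - T_{k-2}
    return T0 @ Theta[0] + T1 @ Theta[1] + T2 @ Theta[2]
```

With K=2 each node aggregates information from its 2-hop neighborhood, which matches the shallow architectures the study found to work best.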
|
https://arxiv.org/abs/2601.06381
|
Academic Papers
|
svg
|
9870dd85669539ddccf98d4f677bcf531793f1f8d7473ba94ab22b59b461dcdf
|
2026-01-13T00:00:00-05:00
|
Dynamic Incentivized Cooperation under Changing Rewards
|
arXiv:2601.06382v1 Announce Type: new Abstract: Peer incentivization (PI) is a popular multi-agent reinforcement learning approach where all agents can reward or penalize each other to achieve cooperation in social dilemmas. Despite their potential for scalable cooperation, current PI methods heavily depend on fixed incentive values that need to be appropriately chosen with respect to the environmental rewards and thus are highly sensitive to their changes. Therefore, they fail to maintain cooperation under changing rewards in the environment, e.g., caused by modified specifications, varying supply and demand, or sensory flaws, even when the conditions for mutual cooperation remain the same. In this paper, we propose Dynamic Reward Incentives for Variable Exchange (DRIVE), an adaptive PI approach to cooperation in social dilemmas with changing rewards. DRIVE agents reciprocally exchange reward differences to incentivize mutual cooperation in a completely decentralized way. We show how DRIVE achieves mutual cooperation in the general Prisoner's Dilemma and empirically evaluate DRIVE in more complex sequential social dilemmas with changing rewards, demonstrating its ability to achieve and maintain cooperation, in contrast to current state-of-the-art PI methods.
|
https://arxiv.org/abs/2601.06382
|
Academic Papers
|
svg
|
ae79b5be72e4065068acd68c2045744d0bb8cb747d8d71f9179ba015a99c1816
|
2026-01-13T00:00:00-05:00
|
Noise Reduction for Pufferfish Privacy: A Practical Noise Calibration Method
|
arXiv:2601.06385v1 Announce Type: new Abstract: This paper introduces a relaxed noise calibration method to enhance data utility while attaining pufferfish privacy. This work builds on the existing $1$-Wasserstein (Kantorovich) mechanism by relaxing the overly strict condition that leads to excessive noise, and proposes a practical mechanism design algorithm as a general solution. We prove that our approach always achieves a strict noise reduction compared to the $1$-Wasserstein mechanism for all privacy budgets $\epsilon$ and prior beliefs, and that the gains (which also represent improvements in data utility) increase significantly in low-privacy-budget situations, which are common in real-world deployments. We also analyze the variation and optimality of the noise reduction with different prior distributions. Moreover, all properties of the noise reduction still hold for the worst-case $1$-Wasserstein mechanism we introduce, in which the additive noise is largest. We further show that the worst-case $1$-Wasserstein mechanism is equivalent to the $\ell_1$-sensitivity method. Experimental results on three real-world datasets demonstrate a $47\%$ to $87\%$ improvement in data utility.
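As a point of reference, the standard $\ell_1$-sensitivity (Laplace) mechanism, which the abstract states is equivalent to the worst-case $1$-Wasserstein mechanism, can be sketched as follows; this is the baseline the paper improves upon, not the paper's relaxed calibration itself:

```python
import numpy as np

def laplace_mechanism(value, l1_sensitivity, epsilon, rng=None):
    """Classic l1-sensitivity method: add Laplace noise with scale
    b = Delta_1 / epsilon. Smaller epsilon (stricter privacy) means
    larger noise, which is exactly the regime where the paper reports
    the biggest utility gains over this baseline."""
    if rng is None:
        rng = np.random.default_rng()
    scale = l1_sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)
```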
|
https://arxiv.org/abs/2601.06385
|
Academic Papers
|
svg
|
421f58e99b57e1f5231feecd51976f8c085f3bd5c1d4e72c416a6e70cdeb6509
|
2026-01-13T00:00:00-05:00
|
An Efficient Evolutionary Algorithm for Few-for-Many Optimization
|
arXiv:2601.06387v1 Announce Type: new Abstract: Few-for-many (F4M) optimization, recently introduced as a novel paradigm in multi-objective optimization, aims to find a small set of solutions that effectively handle a large number of conflicting objectives. Unlike traditional many-objective optimization methods, which typically attempt comprehensive coverage of the Pareto front, F4M optimization emphasizes finding a small representative solution set to efficiently address high-dimensional objective spaces. Motivated by the computational complexity and practical relevance of F4M optimization, this paper proposes a new evolutionary algorithm explicitly tailored for efficiently solving F4M optimization problems. Inspired by SMS-EMOA, our proposed approach employs a $(\mu+1)$-evolution strategy guided by the objective of F4M optimization. Furthermore, to facilitate rigorous performance assessment, we propose a novel benchmark test suite specifically designed for F4M optimization by leveraging the similarity between the R2 indicator and F4M formulations. Our test suite is highly flexible, allowing any existing multi-objective optimization problem to be transformed into a corresponding F4M instance via scalarization using the weighted Tchebycheff function. Comprehensive experimental evaluations on benchmarks demonstrate the superior performance of our algorithm compared to existing state-of-the-art algorithms, especially on instances involving a large number of objectives. The source code of the proposed algorithm is publicly available at https://github.com/MOL-SZU/SoM-EMOA.
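The weighted Tchebycheff scalarization that the benchmark suite uses to turn a multi-objective problem into an F4M instance admits a short sketch; the function name and array conventions are illustrative:

```python
import numpy as np

def weighted_tchebycheff(F, w, z_star):
    """Weighted Tchebycheff scalarization: g(x) = max_i w_i * |f_i(x) - z_i*|.

    F      : (..., m) objective values, one row per solution
    w      : (m,) positive weight vector
    z_star : (m,) reference (ideal) point
    """
    return np.max(w * np.abs(F - z_star), axis=-1)
```

Each weight vector yields one scalarized objective, so a bank of weight vectors converts any multi-objective problem into a many-objective F4M instance.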
|
https://arxiv.org/abs/2601.06387
|
Academic Papers
|
svg
|
ce4af1f23291c4070f2ecbbfc1dfcd1423cfae4bba8e8941cba2432a2df616ce
|
2026-01-13T00:00:00-05:00
|
Supervised and Unsupervised Neural Network Solver for First Order Hyperbolic Nonlinear PDEs
|
arXiv:2601.06388v1 Announce Type: new Abstract: We present a neural network-based method for learning scalar hyperbolic conservation laws. Our method replaces the traditional numerical flux in finite volume schemes with a trainable neural network while preserving the conservative structure of the scheme. The model can be trained both in a supervised setting with efficiently generated synthetic data or in an unsupervised manner, leveraging the weak formulation of the partial differential equation. We provide theoretical results that our model can perform arbitrarily well, and provide associated upper bounds on neural network size. Extensive experiments demonstrate that our method often outperforms efficient schemes such as Godunov's scheme, WENO, and Discontinuous Galerkin for comparable computational budgets. Finally, we demonstrate the effectiveness of our method on a traffic prediction task, leveraging field experimental highway data from the Berkeley DeepDrive drone dataset.
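The conservative finite-volume structure the method preserves can be sketched as follows; here a classical Lax-Friedrichs flux for Burgers' equation stands in for the paper's trainable neural flux, and all names are illustrative:

```python
import numpy as np

def fv_step(u, dt, dx, numerical_flux):
    """One conservative finite-volume update:
    u_i^{n+1} = u_i^n - (dt/dx) * (F_{i+1/2} - F_{i-1/2}).
    The interface flux is pluggable: the paper replaces it with a
    trained neural network while keeping this conservative form."""
    uL, uR = u[:-1], u[1:]
    F = numerical_flux(uL, uR)                  # fluxes at interior interfaces
    unew = u.copy()
    unew[1:-1] -= dt / dx * (F[1:] - F[:-1])    # boundary cells held fixed
    return unew

def lax_friedrichs_burgers(uL, uR, alpha=1.0):
    """Stand-in flux: Lax-Friedrichs for Burgers' equation f(u) = u^2/2."""
    f = lambda u: 0.5 * u ** 2
    return 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)
```

A constant state is an exact steady solution, so the update leaves it unchanged, which is a quick sanity check for any flux plugged into this scheme.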
|
https://arxiv.org/abs/2601.06388
|
Academic Papers
|
svg
|
4b1064053ab2e4b6bad6dca073761d6283c3d860a2954594c9de1ffedf37faeb
|
2026-01-13T00:00:00-05:00
|
Towards Building Efficient Routed Systems for Retrieval
|
arXiv:2601.06389v1 Announce Type: new Abstract: Late-interaction retrieval models like ColBERT achieve superior accuracy by enabling token-level interactions, but their computational cost hinders scalability and integration with Approximate Nearest Neighbor Search (ANNS). We introduce FastLane, a novel retrieval framework that dynamically routes queries to their most informative representations, eliminating redundant token comparisons. FastLane employs a learnable routing mechanism optimized alongside the embedding model, leveraging self-attention and differentiable selection to maximize efficiency. Our approach reduces computational complexity by up to 30x while maintaining competitive retrieval performance. By bridging late-interaction models with ANNS, FastLane enables scalable, low-latency retrieval, making it feasible for large-scale applications such as search engines, recommendation systems, and question-answering platforms. This work opens pathways for multi-lingual, multi-modal, and long-context retrieval, pushing the frontier of efficient and adaptive information retrieval.
|
https://arxiv.org/abs/2601.06389
|
Academic Papers
|
svg
|
551b23115d571528c962ee417c1729895df69ae7d99ccc9a866a7c35f0594877
|
2026-01-13T00:00:00-05:00
|
Object-WIPER : Training-Free Object and Associated Effect Removal in Videos
|
arXiv:2601.06391v1 Announce Type: new Abstract: In this paper, we introduce Object-WIPER, a training-free framework for removing dynamic objects and their associated visual effects from videos, and inpainting them with semantically consistent and temporally coherent content. Our approach leverages a pre-trained text-to-video diffusion transformer (DiT). Given an input video, a user-provided object mask, and query tokens describing the target object and its effects, we localize relevant visual tokens via visual-text cross-attention and visual self-attention. This produces an intermediate effect mask that we fuse with the user mask to obtain a final foreground token mask to replace. We first invert the video through the DiT to obtain structured noise, then reinitialize the masked tokens with Gaussian noise while preserving background tokens. During denoising, we copy values for the background tokens saved during inversion to maintain scene fidelity. To address the lack of suitable evaluation, we introduce a new object removal metric that rewards temporal consistency among foreground tokens across consecutive frames, coherence between foreground and background tokens within each frame, and dissimilarity between the input and output foreground tokens. Experiments on DAVIS and a newly curated real-world associated effect benchmark (WIPER-Bench) show that Object-WIPER surpasses both training-based and training-free baselines in terms of the metric, achieving clean removal and temporally stable reconstruction without any retraining. Our new benchmark, source code, and pre-trained models will be publicly available.
|
https://arxiv.org/abs/2601.06391
|
Academic Papers
|
svg
|
6ae0de874262cd8f59e08a3653fe4b7b55e7a53d339c7e11c36ec22da2a0f5c1
|
2026-01-13T00:00:00-05:00
|
Context Matters: Peer-Aware Student Behavioral Engagement Measurement via VLM Action Parsing and LLM Sequence Classification
|
arXiv:2601.06394v1 Announce Type: new Abstract: Understanding student behavior in the classroom is essential to improve both pedagogical quality and student engagement. Existing methods for predicting student engagement typically require substantial annotated data to model the diversity of student behaviors, yet privacy concerns often restrict researchers to their own proprietary datasets. Moreover, the classroom context, represented in peers' actions, is ignored. To address the aforementioned limitations, we propose a novel three-stage framework for video-based student engagement measurement. First, we explore few-shot adaptation of a vision-language model (VLM) for student action recognition, which is fine-tuned to distinguish among action categories with a few training samples. Second, to handle continuous and unpredictable student actions, we utilize the sliding temporal window technique to divide each student's 2-minute-long video into non-overlapping segments. Each segment is assigned an action category via the fine-tuned VLM, generating a sequence of action predictions. Finally, we leverage the large language model to classify this entire sequence of actions, together with the classroom context, as belonging to an engaged or disengaged student. The experimental results demonstrate the effectiveness of the proposed approach in identifying student engagement.
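The second stage's non-overlapping temporal segmentation amounts to simple chunking of the frame sequence; this sketch uses illustrative names and omits the VLM labeling step that follows:

```python
def segment_video(frames, segment_len):
    """Divide a frame sequence into non-overlapping temporal segments.
    Each segment would then be assigned an action category by the
    fine-tuned VLM (not shown), yielding a per-student action sequence."""
    return [frames[i:i + segment_len] for i in range(0, len(frames), segment_len)]
```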
|
https://arxiv.org/abs/2601.06394
|
Academic Papers
|
svg
|
d78e71577f7632879179a561f9bf756f02a16381e46daba3c612fdd2cfd890c0
|
2026-01-13T00:00:00-05:00
|
AfriqueLLM: How Data Mixing and Model Architecture Impact Continued Pre-training for African Languages
|
arXiv:2601.06395v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly multilingual, yet open models continue to underperform relative to proprietary systems, with the gap most pronounced for African languages. Continued pre-training (CPT) offers a practical route to language adaptation, but improvements on demanding capabilities such as mathematical reasoning often remain limited. This limitation is driven in part by the uneven domain coverage and missing task-relevant knowledge that characterize many low-resource language corpora. We present \texttt{AfriqueLLM}, a suite of open LLMs adapted to 20 African languages through CPT on 26B tokens. We perform a comprehensive empirical study across five base models spanning sizes and architectures, including Llama 3.1, Gemma 3, and Qwen 3, and systematically analyze how CPT data composition shapes downstream performance. In particular, we vary mixtures that include math, code, and synthetic translated data, and evaluate the resulting models on a range of multilingual benchmarks. Our results identify data composition as the primary driver of CPT gains. Adding math, code, and synthetic translated data yields consistent improvements, including on reasoning-oriented evaluations. Within a fixed architecture, larger models typically improve performance, but architectural choices dominate scale when comparing across model families. Moreover, strong multilingual performance in the base model does not reliably predict post-CPT outcomes; robust architectures coupled with task-aligned data provide a more dependable recipe. Finally, our best models improve long-context performance, including document-level translation. Models have been released on [Huggingface](https://huggingface.co/collections/McGill-NLP/afriquellm).
|
https://arxiv.org/abs/2601.06395
|
Academic Papers
|
svg
|
e611e22ac1ef65b666a85f5ebd12f23f5a81c20107f6c35aef0542714c2e3700
|
2026-01-13T00:00:00-05:00
|
MITRA: A Large-Scale Parallel Corpus and Multilingual Pretrained Language Model for Machine Translation and Semantic Retrieval for P\=ali, Sanskrit, Buddhist Chinese, and Tibetan
|
arXiv:2601.06400v1 Announce Type: new Abstract: Ancient Buddhist literature features frequent, yet often unannotated, textual parallels spread across diverse languages: Sanskrit, P\=ali, Buddhist Chinese, Tibetan, and more. The scale of this material makes manual examination prohibitive. We present the MITRA framework, which consists of a novel pipeline for multilingual parallel passage mining, MITRA-parallel, a large-scale corpus of 1.74 million parallel sentence pairs between Sanskrit, Chinese, and Tibetan, and the development of the domain-specific pretrained language model Gemma 2 MITRA. We present Gemma 2 MITRA-MT, a version of this base model fine-tuned on machine translation tasks, reaching state-of-the-art performance for machine translation of these languages into English and outperforming even much larger open-source models. We also present Gemma 2 MITRA-E, a semantic embedding model that shows state-of-the-art performance on a novel, detailed semantic embedding benchmark. We make the parallel dataset, model weights, and semantic similarity benchmark openly available to aid both NLP research and philological studies in Buddhist and classical Asian literature.
|
https://arxiv.org/abs/2601.06400
|
Academic Papers
|
svg
|
411f09e06f62490aae6a8dbc27c498a9f609cf427d4468ca358ac67167f68a13
|
2026-01-13T00:00:00-05:00
|
BizFinBench.v2: A Unified Dual-Mode Bilingual Benchmark for Expert-Level Financial Capability Alignment
|
arXiv:2601.06401v1 Announce Type: new Abstract: Large language models have undergone rapid evolution, emerging as a pivotal technology for intelligence in financial operations. However, existing benchmarks are often constrained by pitfalls such as reliance on simulated or general-purpose samples and a focus on singular, offline static scenarios. Consequently, they fail to align with the requirements for authenticity and real-time responsiveness in financial services, leading to a significant discrepancy between benchmark performance and actual operational efficacy. To address this, we introduce BizFinBench.v2, the first large-scale evaluation benchmark grounded in authentic business data from both Chinese and U.S. equity markets, integrating online assessment. We performed clustering analysis on authentic user queries from financial platforms, resulting in eight fundamental tasks and two online tasks across four core business scenarios, totaling 29,578 expert-level Q&A pairs. Experimental results demonstrate that ChatGPT-5 achieves a prominent 61.5% accuracy in main tasks, though a substantial gap relative to financial experts persists; in online tasks, DeepSeek-R1 outperforms all other commercial LLMs. Error analysis further identifies the specific capability deficiencies of existing models within practical financial business contexts. BizFinBench.v2 transcends the limitations of current benchmarks, achieving a business-level deconstruction of LLM financial capabilities and providing a precise basis for evaluating efficacy in the widespread deployment of LLMs within the financial domain. The data and code are available at https://github.com/HiThink-Research/BizFinBench.v2.
|
https://arxiv.org/abs/2601.06401
|
Academic Papers
|
svg
|
8128f6b1391da5f830e88825b9c81e68ed75b504293d18cdd899ed697829cf62
|
2026-01-13T00:00:00-05:00
|
Spatiotemporal Change-Points in Development Discourse: Insights from Social Media in Low-Resource Contexts
|
arXiv:2601.06402v1 Announce Type: new Abstract: This study investigates the spatiotemporal evolution of development discourse in low-resource settings. Analyzing more than two years of geotagged X data from Zambia, we introduce a mixed-methods pipeline utilizing topic modeling, change-point detection, and qualitative coding to identify critical shifts in public debate. We identify seven recurring themes, including public health challenges and frustration with government policy, shaped by regional events and national interventions. Notably, we detect discourse change-points linked to the COVID-19 pandemic and a geothermal project, illustrating how online conversations mirror policy flashpoints. Our analysis distinguishes between the ephemeral nature of acute crises like COVID-19 and the persistent, structural reorientations driven by long-term infrastructure projects. We conceptualize "durable discourse" as sustained narrative engagement with development issues. Contributing to HCI and ICTD, we examine technology's socioeconomic impact, providing practical implications and directions for future work on direct local engagement.
|
https://arxiv.org/abs/2601.06402
|
Academic Papers
|
svg
|
c4dc783dc7b1a91af84d62f0ced897865758165f152dc4125097401399103076
|
2026-01-13T00:00:00-05:00
|
Steer Model beyond Assistant: Controlling System Prompt Strength via Contrastive Decoding
|
arXiv:2601.06403v1 Announce Type: new Abstract: Large language models excel at complex instructions yet struggle to deviate from their helpful assistant persona, as post-training instills strong priors that resist conflicting instructions. We introduce system prompt strength, a training-free method that treats prompt adherence as a continuous control. By contrasting logits from target and default system prompts, we isolate and amplify the behavioral signal unique to the target persona by a scalar factor alpha. Across five diverse benchmarks spanning constraint satisfaction, behavioral control, pluralistic alignment, capability modulation, and stylistic control, our method yields substantial improvements: up to +8.5 strict accuracy on IFEval, +45pp refusal rate on OffTopicEval, and +13% steerability on Prompt-Steering. Our approach enables practitioners to modulate system prompt strength, providing dynamic control over model behavior without retraining.
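One plausible form of the described logit contrast is sketched below. The exact combination rule is an assumption for illustration, not taken from the paper; what is grounded in the abstract is that logits under the target and default system prompts are contrasted and the difference is scaled by a factor alpha:

```python
import numpy as np

def steer_logits(logits_target, logits_default, alpha):
    """Contrastive steering sketch: amplify the behavioral signal unique
    to the target system prompt. alpha = 0 recovers the default-prompt
    logits, alpha = 1 the target-prompt logits, and alpha > 1
    extrapolates beyond the target persona."""
    return logits_default + alpha * (logits_target - logits_default)
```

The scalar alpha then acts as a continuous "system prompt strength" knob applied at decoding time, with no retraining.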
|
https://arxiv.org/abs/2601.06403
|
Academic Papers
|
svg
|
4dea30f8620a2a8992d9cf78bae96542638ec14413c5251d7c387870f8b3718c
|
2026-01-13T00:00:00-05:00
|
One-Shot Hierarchical Federated Clustering
|
arXiv:2601.06404v1 Announce Type: new Abstract: Driven by the growth of Web-scale decentralized services, Federated Clustering (FC) aims to extract knowledge from heterogeneous clients in an unsupervised manner while preserving the clients' privacy, which has emerged as a significant challenge due to the lack of label guidance and the Non-Independent and Identically Distributed (non-IID) nature of clients. In real scenarios such as personalized recommendation and cross-device user profiling, a global cluster may be fragmented and distributed among different clients, and clusters may exist at different granularities or even be nested. Although Hierarchical Clustering (HC) is considered promising for exploring such distributions, the sophisticated recursive clustering process makes it more computationally expensive and vulnerable to privacy exposure, thus relatively unexplored under the federated learning scenario. This paper introduces an efficient one-shot hierarchical FC framework that performs client-end distribution exploration and server-end distribution aggregation through one-way prototype-level communication from clients to the server. A fine partition mechanism is developed to generate successive clusterlets to describe the complex landscape of the clients' clusters. Then, a multi-granular learning mechanism on the server is proposed to fuse the clusterlets, even when they have inconsistent granularities generated from different clients. It turns out that the complex cluster distributions across clients can be efficiently explored, and extensive experiments comparing state-of-the-art methods on ten public datasets demonstrate the superiority of the proposed method.
|
https://arxiv.org/abs/2601.06404
|
Academic Papers
|
svg
|
3575ce7b659cc42d47f6b806d04534007b5b7136061cb624b2b773bcd86e6cc2
|
2026-01-13T00:00:00-05:00
|
Representing Sounds as Neural Amplitude Fields: A Benchmark of Coordinate-MLPs and A Fourier Kolmogorov-Arnold Framework
|
arXiv:2601.06406v1 Announce Type: new Abstract: Although Coordinate-MLP-based implicit neural representations have excelled in representing radiance fields, 3D shapes, and images, their application to audio signals remains underexplored. To fill this gap, we investigate existing implicit neural representations, from which we extract 3 types of positional encoding and 16 commonly used activation functions. Through combinatorial design, we establish the first benchmark for Coordinate-MLPs in audio signal representations. Our benchmark reveals that Coordinate-MLPs require complex hyperparameter tuning and frequency-dependent initialization, limiting their robustness. To address these issues, we propose Fourier-ASR, a novel framework based on the Fourier series theorem and the Kolmogorov-Arnold representation theorem. Fourier-ASR introduces Fourier Kolmogorov-Arnold Networks (Fourier-KAN), which leverage periodicity and strong nonlinearity to represent audio signals, eliminating the need for additional positional encoding. Furthermore, a Frequency-adaptive Learning Strategy (FaLS) is proposed to enhance the convergence of Fourier-KAN by capturing high-frequency components and preventing overfitting of low-frequency signals. Extensive experiments conducted on natural speech and music datasets reveal that: (1) well-designed positional encoding and activation functions in Coordinate-MLPs can effectively improve audio representation quality; and (2) Fourier-ASR can robustly represent complex audio signals without extensive hyperparameter tuning. Looking ahead, the continuity and infinite resolution of implicit audio representations make our research highly promising for tasks such as audio compression, synthesis, and generation. To ensure reproducibility, the source code is publicly available at https://github.com/lif314/Fourier-ASR.
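A single learnable edge function of a Fourier-KAN, a truncated Fourier series whose built-in periodicity removes the need for explicit positional encoding, might be sketched as below; this is an illustrative sketch, not the released implementation:

```python
import numpy as np

def fourier_kan_edge(x, a, b, w0=1.0):
    """One Fourier-KAN edge function:
    phi(x) = sum_{k=1}^{K} a_k cos(k*w0*x) + b_k sin(k*w0*x).

    x    : (m,) input coordinates (e.g. time samples)
    a, b : (K,) learnable cosine/sine coefficients for harmonics 1..K
    """
    k = np.arange(1, len(a) + 1)           # harmonic indices 1..K
    phase = np.outer(x, k) * w0            # (m, K) arguments k*w0*x
    return (a * np.cos(phase) + b * np.sin(phase)).sum(axis=-1)
```

Because every basis function is periodic, the learned representation inherits periodic structure directly, which is the property the framework exploits for audio.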
|
https://arxiv.org/abs/2601.06406
|
Academic Papers
|
svg
|
99544ef20b2efec93aca7dbcc600b5ee65e4f3d82f391bbea114928484c8e2f0
|
2026-01-13T00:00:00-05:00
|
Value of Information: A Framework for Human-Agent Communication
|
arXiv:2601.06407v1 Announce Type: new Abstract: Large Language Model (LLM) agents deployed for real-world tasks face a fundamental dilemma: user requests are underspecified, yet agents must decide whether to act on incomplete information or interrupt users for clarification. Existing approaches either rely on brittle confidence thresholds that require task-specific tuning, or fail to account for the varying stakes of different decisions. We introduce a decision-theoretic framework that resolves this trade-off through the Value of Information (VoI), enabling agents to dynamically weigh the expected utility gain from asking questions against the cognitive cost imposed on users. Our inference-time method requires no hyperparameter tuning and adapts seamlessly across contexts, from casual games to medical diagnosis. Experiments across four diverse domains (20 Questions, medical diagnosis, flight booking, and e-commerce) show that VoI consistently matches or exceeds the best manually-tuned baselines, achieving up to 1.36 utility points higher in high-cost settings. This work provides a parameter-free framework for adaptive agent communication that explicitly balances task risk, query ambiguity, and user effort.
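Under the simplifying assumption that a clarifying question fully reveals the user's hidden intent, the classical VoI decision rule the framework builds on can be sketched as follows (illustrative names, not the paper's code):

```python
import numpy as np

def value_of_information(prior, utilities, cost):
    """Decide whether an agent should ask a clarifying question.

    prior     : p(s), belief over hidden user intents s
    utilities : U[a, s], utility of action a when the true intent is s
    cost      : cognitive cost of interrupting the user

    Assuming the question perfectly reveals s:
      VoI = E_s[max_a U(a, s)] - max_a E_s[U(a, s)];  ask iff VoI > cost.
    """
    prior = np.asarray(prior, dtype=float)
    U = np.asarray(utilities, dtype=float)
    eu_act_now = (U @ prior).max()             # best action under current belief
    eu_after = (U.max(axis=0) * prior).sum()   # expected utility once intent is known
    voi = eu_after - eu_act_now
    return voi, voi > cost
```

The same rule adapts across stakes automatically: when utilities are nearly flat across intents, VoI is small and the agent acts; when the wrong action is costly, VoI exceeds the interruption cost and the agent asks.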
|
https://arxiv.org/abs/2601.06407
|
Academic Papers
|
svg
|
d2cb5a559f371768b2b463e5647128bc36f6f29f3a254630adc2a06a7cfec7e2
|
2026-01-13T00:00:00-05:00
|
An NPDo Approach for Principal Joint Block Diagonalization
|
arXiv:2601.06410v1 Announce Type: new Abstract: Matrix joint block-diagonalization (JBD) frequently arises from diverse applications such as independent component analysis, blind source separation, and common principal component analysis (CPCA), among others. In particular, CPCA aims at joint diagonalization, i.e., each block size being $1$-by-$1$. This paper is concerned with {\em principal joint block-diagonalization\/} (\pjbd), which aims to achieve two goals: 1)~partial joint block-diagonalization, and 2)~identification of dominant common block-diagonal parts for all involved matrices. This is in contrast to most existing methods, especially the popular ones based on Givens rotation, which focus on full joint diagonalization and quickly become impractical for matrices of even moderate size ($300$-by-$300$ or larger). An NPDo approach is proposed, built on a {\em nonlinear polar decomposition with orthogonal polar factor dependency} that characterizes the solutions of the optimization problem designed to achieve \pjbd; it is shown that the associated SCF iteration is globally convergent to a stationary point, with the objective function increasing monotonically during the iterative process. Numerical experiments are presented to illustrate the effectiveness of the NPDo approach and its superiority to Givens rotation-based methods.
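The orthogonal polar factor at the heart of polar-decomposition-based iterations can be computed from an SVD; this generic building block is standard numerical linear algebra, not the paper's NPDo algorithm itself:

```python
import numpy as np

def orthogonal_polar_factor(A):
    """Orthogonal polar factor Q of the polar decomposition A = Q H
    (H symmetric positive semidefinite), computed from the SVD
    A = U S V^T as Q = U V^T. Q is the closest orthogonal matrix to A
    in the Frobenius norm."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt
```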
|
https://arxiv.org/abs/2601.06410
|
Academic Papers
|
svg
|
7317e29c1f76dddee1dd601fab5bd3b3db9dc633cdb99837a5953ac173728530
|
2026-01-13T00:00:00-05:00
|
Structured Episodic Event Memory
|
arXiv:2601.06411v1 Announce Type: new Abstract: Current approaches to memory in Large Language Models (LLMs) predominantly rely on static Retrieval-Augmented Generation (RAG), which often results in scattered retrieval and fails to capture the structural dependencies required for complex reasoning. For autonomous agents, these passive and flat architectures lack the cognitive organization necessary to model the dynamic and associative nature of long-term interaction. To address this, we propose Structured Episodic Event Memory (SEEM), a hierarchical framework that synergizes a graph memory layer for relational facts with a dynamic episodic memory layer for narrative progression. Grounded in cognitive frame theory, SEEM transforms interaction streams into structured Episodic Event Frames (EEFs) anchored by precise provenance pointers. Furthermore, we introduce an agentic associative fusion and Reverse Provenance Expansion (RPE) mechanism to reconstruct coherent narrative contexts from fragmented evidence. Experimental results on the LoCoMo and LongMemEval benchmarks demonstrate that SEEM significantly outperforms baselines, enabling agents to maintain superior narrative coherence and logical consistency.
|
https://arxiv.org/abs/2601.06411
|
Academic Papers
|
svg
|
c32e0675a8486b769caccee0d8ffc9b98a0795176dc35046d9200865ec1071af
|
2026-01-13T00:00:00-05:00
|
Brokerage in the Black Box: Swing States, Strategic Ambiguity, and the Global Politics of AI Governance
|
arXiv:2601.06412v1 Announce Type: new Abstract: The U.S.-China rivalry has placed frontier dual-use technologies, particularly Artificial Intelligence (AI), at the center of global power dynamics, as techno-nationalism, supply chain securitization, and competing standards deepen bifurcation within a weaponized interdependence that blurs civilian-military boundaries. Existing research, yet, mostly emphasizes superpower strategies and often overlooks the role of middle powers as autonomous actors shaping the techno-order. This study examines Technological Swing States (TSS), middle powers with both technological capacity and strategic flexibility, and their ability to navigate the frontier technologies' uncertainty and opacity to mediate great-power techno-competition regionally and globally. It reconceptualizes AI opacity not as a technical deficit, but as a structural feature and strategic resource, stemming from algorithmic complexity, political incentives that prioritize performance over explainability, and the limits of post-hoc interpretability. This structural opacity shifts authority from technical demands for explainability to institutional mechanisms, such as certification, auditing, and disclosure, converting technical constraints into strategic political opportunities. Drawing on case studies of South Korea, Singapore, and India, the paper theorizes how TSS exploit the interplay between opacity and institutional transparency through three strategies: (i) delay and hedging, (ii) selective alignment, and (iii) normative intermediation. These practices enable TSS to preserve strategic flexibility, build trust among diverse stakeholders, and broker convergence across competing governance regimes, thereby influencing institutional design, interstate bargaining, and policy outcomes in global AI governance.
|
https://arxiv.org/abs/2601.06412
|
Academic Papers
|
svg
|
aaf0ddbbb368478d61293c3008403939a7cb6054550c1a4559962e2835e6f9ed
|
2026-01-13T00:00:00-05:00
|
GlobalPaint: Spatiotemporal Coherent Video Outpainting with Global Feature Guidance
|
arXiv:2601.06413v1 Announce Type: new Abstract: Video outpainting extends a video beyond its original boundaries by synthesizing missing border content. Compared with image outpainting, it requires not only per-frame spatial plausibility but also long-range temporal coherence, especially when outpainted content becomes visible across time under camera or object motion. We propose GlobalPaint, a diffusion-based framework for spatiotemporal coherent video outpainting. Our approach adopts a hierarchical pipeline that first outpaints key frames and then completes intermediate frames via an interpolation model conditioned on the completed boundaries, reducing error accumulation in sequential processing. At the model level, we augment a pretrained image inpainting backbone with (i) an Enhanced Spatial-Temporal module featuring 3D windowed attention for stronger spatiotemporal interaction, and (ii) global feature guidance that distills OpenCLIP features from observed regions across all frames into compact global tokens using a dedicated extractor. Comprehensive evaluations on benchmark datasets demonstrate improved reconstruction quality and more natural motion compared to prior methods. Our demo page is https://yuemingpan.github.io/GlobalPaint/
|
https://arxiv.org/abs/2601.06413
|
Academic Papers
|
svg
|
ece5c320df92cf0d32f828baf874f017cf2fadc0c97660119026b6b18d923015
|
2026-01-13T00:00:00-05:00
|
Semantic Enrichment of CAD-Based Industrial Environments via Scene Graphs for Simulation and Reasoning
|
arXiv:2601.06415v1 Announce Type: new Abstract: Utilizing functional elements in an industrial environment, such as displays and interactive valves, provides effective possibilities for robot training. When preparing simulations for robots or applications that involve high-level scene understanding, the simulation environment must be equally detailed. Although CAD files for such environments deliver an exact description of the geometry and visuals, they usually lack semantic, relational and functional information, thus limiting the simulation and training possibilities. A 3D scene graph can organize semantic, spatial and functional information by enriching the environment through a Large Vision-Language Model (LVLM). In this paper we present an offline approach to creating detailed 3D scene graphs from CAD environments. This will serve as a foundation to include the relations of functional and actionable elements, which then can be used for dynamic simulation and reasoning. Key results of this research include both quantitative results of the generated semantic labels as well as qualitative results of the scene graph, especially with regard to pipe structures and identified functional relations. All code, results and the environment will be made available at https://cad-scenegraph.github.io
|
https://arxiv.org/abs/2601.06415
|
Academic Papers
|
svg
|
10f6d49fc714603c198fceab809f81869b01dee26ad4f3b5058cc67875455658
|
2026-01-13T00:00:00-05:00
|
Lightweight Yet Secure: Secure Scripting Language Generation via Lightweight LLMs
|
arXiv:2601.06419v1 Announce Type: new Abstract: The security of scripting languages such as PowerShell is critical given their powerful automation and administration capabilities, often exercised with elevated privileges. Today, securing these languages still demands substantial human effort to craft and enforce rules, imposing heavy burdens on typical administrators and creating critical production risks (e.g., misoperations that shut down servers). Large language models (LLMs) have demonstrated strong capabilities in code generation, vulnerability detection, and automated repair for languages like Python and JavaScript. However, their ability to assist with generating secure scripting-language code remains largely underexplored. In this paper, we present SecGenEval-PS, a benchmark designed to systematically evaluate LLMs on secure scripting generation, security analysis, and automated repair. Our results show that both proprietary and open-source models fall short in these areas. For instance, over 60% of PowerShell scripts produced by GPT-4o and o3-mini are insecure without structured guidance. To bridge this gap, we propose PSSec, a framework that combines data synthesis with fine-tuning to enhance model security capabilities. We develop a self-debugging agent that integrates static analyzers with the reasoning abilities of advanced LLMs to synthesize large-scale structured triplets of insecure scripts, violation analyses, and corresponding repairs. We then fine-tune lightweight LLMs (as small as 1.7B parameters) using supervised fine-tuning (SFT) and reinforcement learning (RL), enabling security-aware reasoning and the generation of secure PowerShell code. Across multiple LLM families, including GPT and Qwen, PSSec-trained models match or surpass general-purpose large models on PowerShell security tasks while reducing inference cost by more than an order of magnitude.
|
https://arxiv.org/abs/2601.06419
|
Academic Papers
|
svg
|
839be3b0692074b62b8aa283ae746962be1b9f24846eeeecc7ab3424723ca4f9
|
2026-01-13T00:00:00-05:00
|
Does Inference Scaling Improve Reasoning Faithfulness? A Multi-Model Analysis of Self-Consistency Tradeoffs
|
arXiv:2601.06423v1 Announce Type: new Abstract: Self-consistency has emerged as a popular technique for improving large language model accuracy on reasoning tasks. The approach is straightforward: generate multiple reasoning paths and select the most common answer through majority voting. While this reliably boosts accuracy, it remains unclear whether these gains reflect genuine improvements in reasoning quality. We investigate a fundamental question that has not been studied before: does inference scaling improve reasoning faithfulness? We conduct a comprehensive empirical study across four frontier models (GPT-5.2, Claude Opus 4.5, Gemini-3-flash-preview, and DeepSeek-v3.2) on 100 GSM8K mathematical reasoning problems. Our analysis employs bootstrap confidence intervals, McNemar's tests for paired comparisons, and Cohen's d effect sizes to quantify the effects rigorously. The results reveal striking differences across models that challenge common assumptions about self-consistency. GPT-5.2 shows the expected pattern: accuracy improves from 78% to 90% at N=5, with faithfulness remaining relatively stable (0.540 to 0.510). Claude Opus 4.5 tells a completely different story. Its accuracy actually drops from 78% to 74.3% while faithfulness jumps dramatically from 0.270 to 0.891 at N=5. DeepSeek-v3.2, already at 98% accuracy, shows ceiling effects with modest faithfulness gains (0.440 to 0.541). Gemini-3-flash improves from 81% to 86% accuracy with a slight faithfulness decrease (0.260 to 0.212). Problem difficulty analysis reveals that GPT-5.2 solves 82% of hard problems while breaking only 13% of easy ones. Claude, in contrast, breaks 23% of easy problems, explaining its accuracy decrease. These findings matter for practitioners: self-consistency is not universally beneficial, and teams should test their specific models before deployment. We release our code and provide practical recommendations for navigating these tradeoffs.
|
https://arxiv.org/abs/2601.06423
|
Academic Papers
|
svg
|
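The self-consistency procedure evaluated in the abstract above (sample several reasoning paths, then majority-vote on the final answer) can be sketched in a few lines. This is a minimal toy illustration, not the paper's code; `sample_fn` is a hypothetical stand-in for one stochastic LLM call that returns a final answer string.

```python
from collections import Counter

def self_consistency(sample_fn, n_paths=5):
    """Sample n_paths reasoning paths and return the majority answer."""
    answers = [sample_fn() for _ in range(n_paths)]
    majority, _count = Counter(answers).most_common(1)[0]
    return majority, answers

# Deterministic toy example: 3 of 5 sampled paths agree on "42".
paths = iter(["42", "41", "42", "40", "42"])
ans, seen = self_consistency(lambda: next(paths), n_paths=5)
print(ans)  # → 42
```

Note that majority voting only aggregates final answers; as the abstract stresses, it says nothing by itself about whether the winning reasoning path was faithful.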
3bf79a5cf45fba040e40178d6d52c0a8b8d3d729dd35d565347870cd45701f48
|
2026-01-13T00:00:00-05:00
|
Can a Unimodal Language Agent Provide Preferences to Tune a Multimodal Vision-Language Model?
|
arXiv:2601.06424v1 Announce Type: new Abstract: To explore a more scalable path for adding multimodal capabilities to existing LLMs, this paper addresses a fundamental question: Can a unimodal LLM, relying solely on text, reason about its own informational needs and provide effective feedback to optimize a multimodal model? To answer this, we propose a method that enables a language agent to give feedback to a vision-language model (VLM) to adapt text generation to the agent's preferences. Our results from different experiments affirm this hypothesis, showing that LLM preference feedback significantly enhances VLM descriptions. Using our proposed method, we find that the VLM can generate multimodal scene descriptions to help the LLM better understand multimodal context, leading to improvements of maximum 13% in absolute accuracy compared to the baseline multimodal approach. Furthermore, a human study validated our AI-driven feedback, showing a 64.6% preference alignment rate between the LLM's choices and human judgments. Extensive experiments provide insights on how and why the method works and its limitations.
|
https://arxiv.org/abs/2601.06424
|
Academic Papers
|
svg
|
f48afab9c2754a67fd5b01bcd0c47a7e19bbc97fd968fced444dbda9e2c8c495
|
2026-01-13T00:00:00-05:00
|
HiDVFS: A Hierarchical Multi-Agent DVFS Scheduler for OpenMP DAG Workloads
|
arXiv:2601.06425v1 Announce Type: new Abstract: With advancements in multicore embedded systems, leakage power, exponentially tied to chip temperature, has surpassed dynamic power consumption. Energy-aware solutions use dynamic voltage and frequency scaling (DVFS) to mitigate overheating in performance-intensive scenarios, while software approaches allocate high-utilization tasks across core configurations in parallel systems to reduce power. However, existing heuristics lack per-core frequency monitoring, failing to address overheating from uneven core activity, and task assignments without detailed profiling overlook irregular execution patterns. We target OpenMP DAG workloads. Because makespan, energy, and thermal goals often conflict within a single benchmark, this work prioritizes performance (makespan) while reporting energy and thermal as secondary outcomes. To overcome these issues, we propose HiDVFS (a hierarchical multi-agent, performance-aware DVFS scheduler) for parallel systems that optimizes task allocation based on profiling data, core temperatures, and makespan-first objectives. It employs three agents: one selects cores and frequencies using profiler data, another manages core combinations via temperature sensors, and a third sets task priorities during resource contention. A makespan-focused reward with energy and temperature regularizers estimates future states and enhances sample efficiency. Experiments on the NVIDIA Jetson TX2 using the BOTS suite (9 benchmarks) compare HiDVFS against state-of-the-art approaches. With multi-seed validation (seeds 42, 123, 456), HiDVFS achieves the best finetuned performance with 4.16 ± 0.58 s average makespan (L10), representing a 3.44x speedup over GearDVFS (14.32 ± 2.61 s) and 50.4% energy reduction (63.7 kJ vs 128.4 kJ). Across all BOTS benchmarks, HiDVFS achieves an average 3.95x speedup and 47.1% energy reduction.
|
https://arxiv.org/abs/2601.06425
|
Academic Papers
|
svg
|
18b9919b9c5e2b1d9fd2328a558610a6d24d08997140267e8b1b0a14e2cff504
|
2026-01-13T00:00:00-05:00
|
NC-Bench: An LLM Benchmark for Evaluating Conversational Competence
|
arXiv:2601.06426v1 Announce Type: new Abstract: The Natural Conversation Benchmark (NC-Bench) introduces a new approach to evaluating the general conversational competence of large language models (LLMs). Unlike prior benchmarks that focus on the content of model behavior, NC-Bench focuses on the form and structure of natural conversation. Grounded in the IBM Natural Conversation Framework (NCF), NC-Bench comprises three distinct sets. The Basic Conversation Competence set evaluates fundamental sequence management practices, such as answering inquiries, repairing responses, and closing conversational pairs. The RAG set applies the same sequence management patterns as the first set but incorporates retrieval-augmented generation (RAG). The Complex Request set extends the evaluation to complex requests involving more intricate sequence management patterns. Each benchmark tests a model's ability to produce contextually appropriate conversational actions in response to characteristic interaction patterns. Initial evaluations across 6 open-source models and 14 interaction patterns show that models perform well on basic answering tasks, struggle more with repair tasks (especially repeat), have mixed performance on closing sequences, and find complex multi-turn requests most challenging, with Qwen models excelling on the Basic set and Granite models on the RAG set and the Complex Request set. By operationalizing fundamental principles of human conversation, NC-Bench provides a lightweight, extensible, and theory-grounded framework for assessing and improving the conversational abilities of LLMs beyond topical or task-specific benchmarks.
|
https://arxiv.org/abs/2601.06426
|
Academic Papers
|
svg
|
28b1c1f55990c3daa06610a3bf360bd19bafc9f818e849c19a997e35c531cac7
|
2026-01-13T00:00:00-05:00
|
Teach Diffusion Language Models to Learn from Their Own Mistakes
|
arXiv:2601.06428v1 Announce Type: new Abstract: Masked Diffusion Language Models (DLMs) achieve significant speed by generating multiple tokens in parallel. However, this parallel sampling approach, especially when using fewer inference steps, introduces strong dependency errors and causes quality to deteriorate rapidly as the generation step size grows. As a result, reliable self-correction becomes essential for maintaining high-quality multi-token generation. To address this, we propose Decoupled Self-Correction (DSC), a novel two-stage methodology. DSC first fully optimizes the DLM's generative ability before freezing the model and training a specialized correction head. This decoupling preserves the model's peak SFT performance and ensures the generated errors used for correction head training are of higher quality. Additionally, we introduce Future-Context Augmentation (FCA) to maximize the correction head's accuracy. FCA generalizes the error training distribution by augmenting samples with ground-truth tokens, effectively training the head to utilize a richer, future-looking context. This mechanism is used for reliably detecting the subtle errors of the high-fidelity base model. Our DSC framework enables the model, at inference time, to jointly generate and revise tokens, thereby correcting errors introduced by multi-token generation and mitigating error accumulation across steps. Experiments on mathematical reasoning and code generation benchmarks demonstrate that our approach substantially reduces the quality degradation associated with larger generation steps, allowing DLMs to achieve both high generation speed and strong output fidelity.
|
https://arxiv.org/abs/2601.06428
|
Academic Papers
|
svg
|
822b862a20e3ac01d3958f91a799ee1d39af804d255c333fc3b3237759a4e535
|
2026-01-13T00:00:00-05:00
|
A Unified Shape-Aware Foundation Model for Time Series Classification
|
arXiv:2601.06429v1 Announce Type: new Abstract: Foundation models pre-trained on large-scale source datasets are reshaping the traditional training paradigm for time series classification. However, existing time series foundation models primarily focus on forecasting tasks and often overlook classification-specific challenges, such as modeling interpretable shapelets that capture class-discriminative temporal features. To bridge this gap, we propose UniShape, a unified shape-aware foundation model designed for time series classification. UniShape incorporates a shape-aware adapter that adaptively aggregates multiscale discriminative subsequences (shapes) into class tokens, effectively selecting the most relevant subsequence scales to enhance model interpretability. Meanwhile, a prototype-based pretraining module is introduced to jointly learn instance- and shape-level representations, enabling the capture of transferable shape patterns. Pre-trained on a large-scale multi-domain time series dataset comprising 1.89 million samples, UniShape exhibits superior generalization across diverse target domains. Experiments on 128 UCR datasets and 30 additional time series datasets demonstrate that UniShape achieves state-of-the-art classification performance, with interpretability and ablation analyses further validating its effectiveness.
|
https://arxiv.org/abs/2601.06429
|
Academic Papers
|
svg
|
9950130b46d34db8e04affed6653dcd426afdcc8930d02507abf1b3b0579a12b
|
2026-01-13T00:00:00-05:00
|
Robust and Secure Blockage-Aware Pinching Antenna-assisted Wireless Communication
|
arXiv:2601.06430v1 Announce Type: new Abstract: In this work, we investigate a blockage-aware pinching antenna (PA) system designed for secure and robust wireless communication. The considered system comprises a base station equipped with multiple waveguides, each hosting multiple PAs, and serves multiple single-antenna legitimate users in the presence of multi-antenna eavesdroppers under imperfect channel state information (CSI). To safeguard confidential transmissions, artificial noise (AN) is deliberately injected to degrade the eavesdropping channels. Recognizing that conventional linear CSI-error bounds become overly conservative for spatially distributed PA architectures, we develop new geometry-aware uncertainty sets that jointly characterize eavesdroppers' position and array-orientation errors. Building upon these sets, we formulate a robust joint optimization problem that determines per-waveguide beamforming and AN covariance, individual PA power-ratio allocation, and PA positions to maximize the system sum rate subject to secrecy constraints. The highly non-convex design problem is efficiently addressed via a low computational complexity iterative algorithm that capitalizes on block coordinate descent, penalty-based methods, majorization-minimization, the S-procedure, and Lipschitz-based surrogate functions. Simulation results demonstrate that the proposed algorithm outperforms conventional fixed-antenna systems by 4.7 dB in sum rate, offering substantially improved rate and secrecy performance. In particular, (i) adaptive PA positioning preserves LoS to legitimate users while effectively exploiting waveguide geometry to disrupt eavesdropper channels, and (ii) neglecting blockage effects in the PA system significantly impacts the system design, leading to performance degradation and inadequate secrecy guarantees.
|
https://arxiv.org/abs/2601.06430
|
Academic Papers
|
svg
|
e25891169c5e17d56cfdd540cc2a9918a914c4cf64502a3fdca95e2e11e28c43
|
2026-01-13T00:00:00-05:00
|
LSRIF: Logic-Structured Reinforcement Learning for Instruction Following
|
arXiv:2601.06431v1 Announce Type: new Abstract: Instruction-following is critical for large language models, but real-world instructions often contain logical structures such as sequential dependencies and conditional branching. Existing methods typically construct datasets with parallel constraints and optimize average rewards, ignoring logical dependencies and yielding noisy signals. We propose a logic-structured training framework, LSRIF, that explicitly models instruction logic. We first construct a dataset, LSRInstruct, with constraint structures such as parallel, sequential, and conditional types, and then design a structure-aware rewarding method including average aggregation for parallel structures, failure-penalty propagation for sequential structures, and selective rewards for conditional branches. Experiments show LSRIF brings significant improvements in instruction-following (in-domain and out-of-domain) and general reasoning. Analysis reveals that learning with explicit logic structures brings parameter updates in attention layers and sharpens token-level attention to constraints and logical operators.
|
https://arxiv.org/abs/2601.06431
|
Academic Papers
|
svg
|
1371fc2eee11258dca2ca2464683d5747ac82b3847c602e786534841bf8393e5
|
2026-01-13T00:00:00-05:00
|
Certified Unlearning in Decentralized Federated Learning
|
arXiv:2601.06436v1 Announce Type: new Abstract: Driven by the right to be forgotten (RTBF), machine unlearning has become an essential requirement for privacy-preserving machine learning. However, its realization in decentralized federated learning (DFL) remains largely unexplored. In DFL, clients exchange local updates only with neighbors, causing model information to propagate and mix across the network. As a result, when a client requests data deletion, its influence is implicitly embedded throughout the system, making removal difficult without centralized coordination. We propose a novel certified unlearning framework for DFL based on Newton-style updates. Our approach first quantifies how a client's data influence propagates during training. Leveraging curvature information of the loss with respect to the target data, we then construct corrective updates using Newton-style approximations. To ensure scalability, we approximate second-order information via Fisher information matrices. The resulting updates are perturbed with calibrated noise and broadcast through the network to eliminate residual influence across clients. We theoretically prove that our approach satisfies the formal definition of certified unlearning, ensuring that the unlearned model is difficult to distinguish from a retrained model without the deleted data. We also establish utility bounds showing that the unlearned model remains close to retraining from scratch. Extensive experiments across diverse decentralized settings demonstrate the effectiveness and efficiency of our framework.
|
https://arxiv.org/abs/2601.06436
|
Academic Papers
|
svg
|
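The corrective update described in the certified-unlearning abstract above (a Newton-style step preconditioned by a Fisher approximation of the curvature, plus calibrated noise) can be illustrated with a rough NumPy sketch. This is an illustrative stand-in, not the paper's algorithm: the diagonal Fisher approximation, damping constant, and all function names here are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_diag(grads):
    # Diagonal Fisher approximation: mean of squared per-sample gradients.
    return np.mean(grads ** 2, axis=0)

def newton_unlearn(theta, grads_deleted, damping=1e-2, noise_scale=0.0):
    """One Newton-style corrective step (illustrative only): remove the
    deleted samples' gradient contribution, preconditioned by a damped
    diagonal Fisher, then perturb with Gaussian noise for certification."""
    fisher = fisher_diag(grads_deleted) + damping
    step = np.sum(grads_deleted, axis=0) / fisher
    return theta + step + noise_scale * rng.normal(size=theta.shape)

theta = np.array([1.0, -0.5])
# Hypothetical per-sample gradients evaluated at the deleted points.
grads = np.array([[0.2, 0.1],
                  [0.1, -0.1]])
theta_new = newton_unlearn(theta, grads, noise_scale=0.0)
print(theta_new.shape)  # → (2,)
```

In the paper's decentralized setting such an update would additionally be broadcast through the network so neighbors can remove the residual, propagated influence; that coordination step is omitted here.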
93cd992a79898278ff20578ecb4f8375ae62f7f4476afd1132a389674512ee48
|
2026-01-13T00:00:00-05:00
|
Time Travel Engine: A Shared Latent Chronological Manifold Enables Historical Navigation in Large Language Models
|
arXiv:2601.06437v1 Announce Type: new Abstract: Time functions as a fundamental dimension of human cognition, yet the mechanisms by which Large Language Models (LLMs) encode chronological progression remain opaque. We demonstrate that temporal information in their latent space is organized not as discrete clusters but as a continuous, traversable geometry. We introduce the Time Travel Engine (TTE), an interpretability-driven framework that projects diachronic linguistic patterns onto a shared chronological manifold. Unlike surface-level prompting, TTE directly modulates latent representations to induce coherent stylistic, lexical, and conceptual shifts aligned with target eras. By parameterizing diachronic evolution as a continuous manifold within the residual stream, TTE enables fluid navigation through period-specific "zeitgeists" while restricting access to future knowledge. Furthermore, experiments across diverse architectures reveal topological isomorphism between the temporal subspaces of Chinese and English-indicating that distinct languages share a universal geometric logic of historical evolution. These findings bridge historical linguistics with mechanistic interpretability, offering a novel paradigm for controlling temporal reasoning in neural networks.
|
https://arxiv.org/abs/2601.06437
|
Academic Papers
|
svg
|
3f0a8e7bbc35148bb32a2394e869bbed6f9ace713db284b947e380832f7dfa3e
|
2026-01-13T00:00:00-05:00
|
Deep Reinforcement Learning based Control Design for Aircraft Recovery from Loss-of-Control Scenario
|
arXiv:2601.06439v1 Announce Type: new Abstract: Loss-of-control (LOC) remains a leading cause of fixed-wing aircraft accidents, especially in post-stall and flat-spin regimes where conventional gain-scheduled or logic-based recovery laws may fail. This study formulates spin-recovery as a continuous-state, continuous-action Markov Decision Process and trains a Proximal Policy Optimization (PPO) agent on a high-fidelity six-degree-of-freedom F-18/HARV model that includes nonlinear aerodynamics, actuator saturation and rate coupling. A two-phase potential-based reward structure first penalizes large angular rates and then enforces trimmed flight. After 6,000 simulated episodes, the policy generalizes to unseen upset initializations. Results show that the learned policy successfully arrests the angular rates and stabilizes the angle of attack. The controller's recovery performance from the spin condition is observed to be satisfactory when compared with a state-of-the-art sliding mode controller. The findings demonstrate that deep reinforcement learning can deliver interpretable, dynamically feasible manoeuvres for real-time loss-of-control mitigation and provide a pathway for flight-critical RL deployment.
|
https://arxiv.org/abs/2601.06439
|
Academic Papers
|
svg
|
d2f5d36ab49a0b806372ee62040577f098261f2bf7672403550a6c999a37c800
|
2026-01-13T00:00:00-05:00
|
FlexAct: Why Learn when you can Pick?
|
arXiv:2601.06441v1 Announce Type: new Abstract: Learning activation functions has emerged as a promising direction in deep learning, allowing networks to adapt activation mechanisms to task-specific demands. In this work, we introduce a novel framework that employs the Gumbel-Softmax trick to enable discrete yet differentiable selection among a predefined set of activation functions during training. Our method dynamically learns the optimal activation function independently of the input, thereby enhancing both predictive accuracy and architectural flexibility. Experiments on synthetic datasets show that our model consistently selects the most suitable activation function, underscoring its effectiveness. These results connect theoretical advances with practical utility, paving the way for more adaptive and modular neural architectures in complex learning scenarios.
|
https://arxiv.org/abs/2601.06441
|
Academic Papers
|
svg
|
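The Gumbel-Softmax selection over a fixed pool of activation functions described in the FlexAct abstract above can be sketched compactly. This is a forward-pass-only NumPy toy (the real method would run inside an autodiff framework so the relaxation is differentiable); the candidate pool and temperature value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def gumbel_softmax(logits, tau=1.0):
    """Relaxed categorical sample: add Gumbel noise, then softmax.
    At low temperature tau the weights approach a one-hot pick."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = np.exp((logits + gumbel) / tau)
    return y / y.sum()

# Hypothetical candidate pool: tanh, ReLU, SiLU.
ACTIVATIONS = [np.tanh,
               lambda x: np.maximum(x, 0.0),
               lambda x: x / (1.0 + np.exp(-x))]

def flex_act(x, logits, tau=0.5):
    # Mix the candidates with Gumbel-Softmax weights; training would
    # update `logits` so one activation dominates.
    w = gumbel_softmax(logits, tau)
    return sum(wi * f(x) for wi, f in zip(w, ACTIVATIONS))

x = np.linspace(-2.0, 2.0, 5)
out = flex_act(x, logits=np.array([2.0, 0.1, 0.1]))
print(out.shape)  # → (5,)
```

The discrete-yet-differentiable trick is that the sampled weights are a valid probability vector, so gradients can flow to the selection logits even though the limiting behavior is a hard choice.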
551d06379d97182a188a81315e3679fef3a20a013e46e8494fbbe932eb268868
|
2026-01-13T00:00:00-05:00
|
WHU-PCPR: A cross-platform heterogeneous point cloud dataset for place recognition in complex urban scenes
|
arXiv:2601.06442v1 Announce Type: new Abstract: Point Cloud-based Place Recognition (PCPR) demonstrates considerable potential in applications such as autonomous driving, robot localization and navigation, and map update. In practical applications, point clouds used for place recognition are often acquired from different platforms and LiDARs across varying scenes. However, existing PCPR datasets lack diversity in scenes, platforms, and sensors, which limits the effective development of related research. To address this gap, we establish WHU-PCPR, a cross-platform heterogeneous point cloud dataset designed for place recognition. The dataset differentiates itself from existing datasets through its distinctive characteristics: 1) cross-platform heterogeneous point clouds: collected from survey-grade vehicle-mounted Mobile Laser Scanning (MLS) systems and low-cost Portable helmet-mounted Laser Scanning (PLS) systems, each equipped with distinct mechanical and solid-state LiDAR sensors. 2) Complex localization scenes: encompassing real-time and long-term changes in both urban and campus road scenes. 3) Large-scale spatial coverage: featuring 82.3 km of trajectory over a 60-month period and an unrepeated route of approximately 30 km. Based on WHU-PCPR, we conduct extensive evaluation and in-depth analysis of several representative PCPR methods, and provide a concise discussion of key challenges and future research directions. The dataset and benchmark code are available at https://github.com/zouxianghong/WHU-PCPR.
|
https://arxiv.org/abs/2601.06442
|
Academic Papers
|
svg
|
4234388bc03c4cc8b5b97987581419fe6e2acf80174c37dba57a6d63f43f8bc9
|
2026-01-13T00:00:00-05:00
|
How to Build Robust, Scalable Models for GSV-Based Indicators in Neighborhood Research
|
arXiv:2601.06443v1 Announce Type: new Abstract: A substantial body of health research demonstrates a strong link between neighborhood environments and health outcomes. Recently, there has been increasing interest in leveraging advances in computer vision to enable large-scale, systematic characterization of neighborhood built environments. However, the generalizability of vision models across fundamentally different domains remains uncertain, for example, transferring knowledge from ImageNet to the distinct visual characteristics of Google Street View (GSV) imagery. In applied fields such as social health research, several critical questions arise: which models are most appropriate, whether to adopt unsupervised training strategies, what training scale is feasible under computational constraints, and how much such strategies benefit downstream performance. These decisions are often costly and require specialized expertise. In this paper, we answer these questions through empirical analysis and provide practical insights into how to select and adapt foundation models for datasets with limited size and labels, while leveraging larger, unlabeled datasets through unsupervised training. Our study includes comprehensive quantitative and visual analyses comparing model performance before and after unsupervised adaptation.
|
https://arxiv.org/abs/2601.06443
|
Academic Papers
|
svg
|
b0c717c16256a0df9a91a4648bdaf51b1acf1b4bca10545145231291ba703190
|
2026-01-13T00:00:00-05:00
|
Physics-Informed Tree Search for High-Dimensional Computational Design
|
arXiv:2601.06444v1 Announce Type: new Abstract: High-dimensional design spaces underpin a wide range of physics-based modeling and computational design tasks in science and engineering. These problems are commonly formulated as constrained black-box searches over rugged objective landscapes, where function evaluations are expensive, and gradients are unavailable or unreliable. Conventional global search engines and optimizers struggle in such settings due to the exponential scaling of design spaces, the presence of multiple local basins, and the absence of physical guidance in sampling. We present a physics-informed Monte Carlo Tree Search (MCTS) framework that extends policy-driven tree-based reinforcement concepts to continuous, high-dimensional scientific optimization. Our method integrates population-level decision trees with surrogate-guided directional sampling, reward shaping, and hierarchical switching between global exploration and local exploitation. These ingredients allow efficient traversal of non-convex, multimodal landscapes where physically meaningful optima are sparse. We benchmark our approach against standard global optimization baselines on a suite of canonical test functions, demonstrating superior or comparable performance in terms of convergence, robustness, and generalization. Beyond synthetic tests, we demonstrate physics-consistent applicability to (i) crystal structure optimization from clusters to bulk, (ii) fitting of classical interatomic potentials, and (iii) constrained engineering design problems. Across all cases, the method converges with high fidelity and evaluation efficiency while preserving physical constraints. Overall, our work establishes physics-informed tree search as a scalable and interpretable paradigm for computational design and high-dimensional scientific optimization, bridging discrete decision-making frameworks with continuous search in scientific design workflows.
|
https://arxiv.org/abs/2601.06444
|
Academic Papers
|
svg
|
7481112176ae51e7a74473d36d876ac4f638ddd3e02ae3d94d211e080cc81af6
|
2026-01-13T00:00:00-05:00
|
LitVISTA: A Benchmark for Narrative Orchestration in Literary Text
|
arXiv:2601.06445v1 Announce Type: new Abstract: Computational narrative analysis aims to capture rhythm, tension, and emotional dynamics in literary texts. Existing large language models can generate long stories but overly focus on causal coherence, neglecting the complex story arcs and orchestration inherent in human narratives. This creates a structural misalignment between model- and human-generated narratives. We propose VISTA Space, a high-dimensional representational framework for narrative orchestration that unifies human and model narrative perspectives. We further introduce LitVISTA, a structurally annotated benchmark grounded in literary texts, enabling systematic evaluation of models' narrative orchestration capabilities. We conduct oracle evaluations on a diverse selection of frontier LLMs, including GPT, Claude, Grok, and Gemini. Results reveal systematic deficiencies: existing models fail to construct a unified global narrative view, struggling to jointly capture narrative function and structure. Furthermore, even advanced thinking modes yield only limited gains for such literary narrative understanding.
|
https://arxiv.org/abs/2601.06445
|
Academic Papers
|
svg
|
e90cacb3d3d2cd8608bf1f88c650259faac1148f3a7f6c40021786092e0bdcc4
|
2026-01-13T00:00:00-05:00
|
Error correction methods based on two-faced processes
|
arXiv:2601.06447v1 Announce Type: new Abstract: A new approach to the problem of error correction in communication channels is proposed, in which the input sequence is transformed in such a way that the interdependence of symbols is significantly increased. Then, after the sequence is transmitted over the channel, this property is used for error correction so that the remaining error rate is significantly reduced. The complexity of encoding and decoding is linear.
|
https://arxiv.org/abs/2601.06447
|
Academic Papers
|
svg
|
df4526c3d69fa427c185c345b0319f61df7426a5b38de481992cf01f91285761
|
2026-01-13T00:00:00-05:00
|
Function-Correcting Partition codes
|
arXiv:2601.06450v1 Announce Type: new Abstract: We introduce function-correcting partition codes (FCPCs) that are a natural generalization of function-correcting codes (FCCs). A $t$-error function-correcting partition code is an $(\mathcal{P},t)$-encoding defined directly on a partition $\mathcal{P}$ of $\mathbb{F}_q^k$. For a partition $\mathcal{P}=\{P_1,P_2,\ldots,P_E\}$, a systematic mapping $\mathcal{C}_{\mathcal{P}} : \mathbb{F}_q^k \rightarrow \mathbb{F}_q^{k+r}$ is called a \emph{$(\mathcal{P},t)$-encoding} if for all $u\in P_i$ and $v\in P_j$ with $i\neq j$, $d\big(\mathcal{C}_{\mathcal{P}}(u), \mathcal{C}_{\mathcal{P}}(v)\big)\ge 2t+1.$ We show that any $t$-error correcting code for a function $f$, denoted by $(f,t)$-FCC, is exactly an FCPC with respect to the domain partition induced by $f$, which makes these codes a natural generalization of FCCs. We use the join of domain partitions to construct a single code that protects multiple functions simultaneously. We define the notion of partition redundancy gain and partition rate gain to measure the bandwidth saved by using a single FCPC for multiple functions instead of constructing separate FCCs for each function. We specialize this to linear functions via coset partition of the intersection of their kernels. Then, we associate a partition graph with any given partition of $\mathbb{F}_q^k$, and show that the existence of a suitable clique in this graph yields a set of representative information vectors that achieves the optimal redundancy. We show the existence of a full-size clique in the partition graphs of weight partition and support partition. Finally, we introduce the notion of a block-preserving contraction for a partition, which helps reduce the problem of finding optimal redundancy for an FCPC. We observe that FCPCs naturally provide a form of partial privacy, in the sense that only the domain partition of the function needs to be revealed to the transmitter.
|
https://arxiv.org/abs/2601.06450
|
Academic Papers
|
svg
|
dfe76016a0b8bd090e302c9f1fd3a86aad890d2bc69c4258cafab08ef882f069
|
2026-01-13T00:00:00-05:00
|
CulinaryCut-VLAP: A Vision-Language-Action-Physics Framework for Food Cutting via a Force-Aware Material Point Method
|
arXiv:2601.06451v1 Announce Type: new Abstract: Food cutting is a highly practical yet underexplored application at the intersection of vision and robotic manipulation. The task remains challenging because interactions between the knife and deformable materials are highly nonlinear and often entail large deformations, frequent contact, and topological change, which in turn hinder stable and safe large-scale data collection. To address these challenges, we propose a unified framework that couples a vision-language-action (VLA) dataset with a physically realistic cutting simulator built on the material point method (MPM). Our simulator adopts MLS-MPM as its computational core, reducing numerical dissipation and energy drift while preserving rotational and shear responses even under topology-changing cuts. During cutting, forces and stress distributions are estimated from impulse exchanges between particles and the grid, enabling stable tracking of transient contact forces and energy transfer. We also provide a benchmark dataset that integrates diverse cutting trajectories, multi-view visual observations, and fine-grained language instructions, together with force--torque and tool--pose labels to provide physically consistent training signals. These components realize a learning--evaluation loop that respects the core physics of cutting and establishes a safe, reproducible, and scalable foundation for advancing VLA models in deformable object manipulation.
|
https://arxiv.org/abs/2601.06451
|
Academic Papers
|
svg
|
bdf13557787cfe97c5a9586d353008ea45feb53332106c6ba884cd5d50482c5f
|
2026-01-13T00:00:00-05:00
|
ConSensus: Multi-Agent Collaboration for Multimodal Sensing
|
arXiv:2601.06453v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly grounded in sensor data to perceive and reason about human physiology and the physical world. However, accurately interpreting heterogeneous multimodal sensor data remains a fundamental challenge. We show that a single monolithic LLM often fails to reason coherently across modalities, leading to incomplete interpretations and prior-knowledge bias. We introduce ConSensus, a training-free multi-agent collaboration framework that decomposes multimodal sensing tasks into specialized, modality-aware agents. To aggregate agent-level interpretations, we propose a hybrid fusion mechanism that balances semantic aggregation, which enables cross-modal reasoning and contextual understanding, with statistical consensus, which provides robustness through agreement across modalities. While each approach has complementary failure modes, their combination enables reliable inference under sensor noise and missing data. We evaluate ConSensus on five diverse multimodal sensing benchmarks, demonstrating an average accuracy improvement of 7.1% over the single-agent baseline. Furthermore, ConSensus matches or exceeds the performance of iterative multi-agent debate methods while achieving a 12.7 times reduction in average fusion token cost through a single-round hybrid fusion protocol, yielding a robust and efficient solution for real-world multimodal sensing tasks.
|
https://arxiv.org/abs/2601.06453
|
Academic Papers
|
svg
|
fd16ede28d85582b715739e644e5865fa26a54f4917e280de95abe18bb385796
|
2026-01-13T00:00:00-05:00
|
Architecting AgentOps Needs CHANGE
|
arXiv:2601.06456v1 Announce Type: new Abstract: The emergence of Agentic AI systems has outpaced the architectural thinking required to operate them effectively. These agents differ fundamentally from traditional software: their behavior is not fixed at deployment but continuously shaped by experience, feedback, and context. Applying operational principles inherited from DevOps or MLOps, built for deterministic software and traditional ML systems, assumes that system behavior can be managed through versioning, monitoring, and rollback. This assumption breaks down for Agentic AI systems whose learning trajectories diverge over time. This introduces non-determinism, making system reliability a challenge at runtime. We argue that architecting such systems requires a shift from managing control loops to enabling dynamic co-evolution among agents, infrastructure, and human oversight. To guide this shift, we introduce CHANGE, a conceptual framework comprising six capabilities for operationalizing Agentic AI systems: Contextualize, Harmonize, Anticipate, Negotiate, Generate, and Evolve. CHANGE provides a foundation for architecting an AgentOps platform to manage the lifecycle of evolving Agentic AI systems, illustrated through a customer-support system scenario. In doing so, CHANGE redefines software architecture for an era where adaptation to uncertainty and continuous evolution are inherent properties of the system.
|
https://arxiv.org/abs/2601.06456
|
Academic Papers
|
svg
|
6d9f1e66ed75f26e033f9274b7354092f1e4b3d19f931097f1792cfe3f47a327
|
2026-01-13T00:00:00-05:00
|
PixRec: Leveraging Visual Context for Next-Item Prediction in Sequential Recommendation
|
arXiv:2601.06458v1 Announce Type: new Abstract: Large Language Models (LLMs) have recently shown strong potential for usage in sequential recommendation tasks through text-only models, which combine advanced prompt design, contrastive alignment, and fine-tuning on downstream domain-specific data. While effective, these approaches overlook the rich visual information present in many real-world recommendation scenarios, particularly in e-commerce. This paper proposes PixRec - a vision-language framework that incorporates both textual attributes and product images into the recommendation pipeline. Our architecture leverages a vision-language model backbone capable of jointly processing image-text sequences, maintaining a dual-tower structure and mixed training objective while aligning multi-modal feature projections for both item-item and user-item interactions. Using the Amazon Reviews dataset augmented with product images, our experiments demonstrate $3\times$ and 40% improvements in top-rank and top-10 rank accuracy over text-only recommenders respectively, indicating that visual features can help distinguish items with similar textual descriptions. Our work outlines future directions for scaling multi-modal recommenders training, enhancing visual-text feature fusion, and evaluating inference-time performance. This work takes a step toward building software systems utilizing visual information in sequential recommendation for real-world applications like e-commerce.
|
https://arxiv.org/abs/2601.06458
|
Academic Papers
|
svg
|
f4957f4b4b4e34b427fe78335fa983ecae7838e77bf0b8d5b2dc7768301d322c
|
2026-01-13T00:00:00-05:00
|
Tone Matters: The Impact of Linguistic Tone on Hallucination in VLMs
|
arXiv:2601.06460v1 Announce Type: new Abstract: Vision-Language Models (VLMs) are increasingly used in safety-critical applications that require reliable visual grounding. However, these models often hallucinate details that are not present in the image to satisfy user prompts. While recent datasets and benchmarks have been introduced to evaluate systematic hallucinations in VLMs, many hallucination behaviors remain insufficiently characterized. In particular, prior work primarily focuses on object presence or absence, leaving it unclear how prompt phrasing and structural constraints can systematically induce hallucinations. In this paper, we investigate how different forms of prompt pressure influence hallucination behavior. We introduce Ghost-100, a procedurally generated dataset of synthetic scenes in which key visual details are deliberately removed, enabling controlled analysis of absence-based hallucinations. Using a structured 5-Level Prompt Intensity Framework, we vary prompts from neutral queries to toxic demands and rigid formatting constraints. We evaluate three representative open-weight VLMs: MiniCPM-V 2.6-8B, Qwen2-VL-7B, and Qwen3-VL-8B. Across all three models, hallucination rates do not increase monotonically with prompt intensity. All models exhibit reductions at higher intensity levels at different thresholds, though not all show sustained reduction under maximum coercion. These results suggest that current safety alignment is more effective at detecting semantic hostility than structural coercion, revealing model-specific limitations in handling compliance pressure. Our dataset is available at: https://github.com/bli1/tone-matters
|
https://arxiv.org/abs/2601.06460
|
Academic Papers
|
svg
|
74963ba95d0131e930d376fa85c7077315cfc4428b6648d0e92c74f1ba58bb64
|
2026-01-13T00:00:00-05:00
|
VIPER Strike: Defeating Visual Reasoning CAPTCHAs via Structured Vision-Language Inference
|
arXiv:2601.06461v1 Announce Type: new Abstract: Visual Reasoning CAPTCHAs (VRCs) combine visual scenes with natural-language queries that demand compositional inference over objects, attributes, and spatial relations. They are increasingly deployed as a primary defense against automated bots. Existing solvers fall into two paradigms: vision-centric, which rely on template-specific detectors but fail on novel layouts, and reasoning-centric, which leverage LLMs but struggle with fine-grained visual perception. Both lack the generality needed to handle heterogeneous VRC deployments. We present ViPer, a unified attack framework that integrates structured multi-object visual perception with adaptive LLM-based reasoning. ViPer parses visual layouts, grounds attributes to question semantics, and infers target coordinates within a modular pipeline. Evaluated on six major VRC providers (VTT, Geetest, NetEase, Dingxiang, Shumei, Xiaodun), ViPer achieves up to 93.2% success, approaching human-level performance across multiple benchmarks. Compared to prior solvers, GraphNet (83.2%), Oedipus (65.8%), and the Holistic approach (89.5%), ViPer consistently outperforms all baselines. The framework further maintains robustness across alternative LLM backbones (GPT, Grok, DeepSeek, Kimi), sustaining accuracy above 90%. To anticipate defense, we further introduce Template-Space Randomization (TSR), a lightweight strategy that perturbs linguistic templates without altering task semantics. TSR measurably reduces solver (i.e., attacker) performance. Our proposed design suggests directions for human-solvable but machine-resistant CAPTCHAs.
|
https://arxiv.org/abs/2601.06461
|
Academic Papers
|
svg
|
ae3849b62dcd7ea6837d62a0aef891c54639a92a0487205733f7f8a027bd4f60
|
2026-01-13T00:00:00-05:00
|
Gecko: An Efficient Neural Architecture Inherently Processing Sequences with Arbitrary Lengths
|
arXiv:2601.06463v1 Announce Type: new Abstract: Designing a unified neural network to efficiently and inherently process sequential data with arbitrary lengths is a central and challenging problem in sequence modeling. The design choices in Transformer, including quadratic complexity and weak length extrapolation, have limited their ability to scale to long sequences. In this work, we propose Gecko, a neural architecture that inherits the design of Mega and Megalodon (exponential moving average with gated attention), and further introduces multiple technical components to improve its capability to capture long range dependencies, including timestep decay normalization, sliding chunk attention mechanism, and adaptive working memory. In a controlled pretraining comparison with Llama2 and Megalodon in the scale of 7 billion parameters and 2 trillion training tokens, Gecko achieves better efficiency and long-context scalability. Gecko reaches a training loss of 1.68, significantly outperforming Llama2-7B (1.75) and Megalodon-7B (1.70), and landing close to Llama2-13B (1.67). Notably, without relying on any context-extension techniques, Gecko exhibits inherent long-context processing and retrieval capabilities, stably handling sequences of up to 4 million tokens and retrieving information from contexts up to $4\times$ longer than its attention window. Code: https://github.com/XuezheMax/gecko-llm
|
https://arxiv.org/abs/2601.06463
|
Academic Papers
|
svg
|
7f854b94b15e9a352be77b20d5799959e7b3f9cd88f8f6343b68f3e831ec30d4
|
2026-01-13T00:00:00-05:00
|
On the Adversarial Robustness of 3D Large Vision-Language Models
|
arXiv:2601.06464v1 Announce Type: new Abstract: 3D Vision-Language Models (VLMs), such as PointLLM and GPT4Point, have shown strong reasoning and generalization abilities in 3D understanding tasks. However, their adversarial robustness remains largely unexplored. Prior work in 2D VLMs has shown that the integration of visual inputs significantly increases vulnerability to adversarial attacks, making these models easier to manipulate into generating toxic or misleading outputs. In this paper, we investigate whether incorporating 3D vision similarly compromises the robustness of 3D VLMs. To this end, we present the first systematic study of adversarial robustness in point-based 3D VLMs. We propose two complementary attack strategies: \textit{Vision Attack}, which perturbs the visual token features produced by the 3D encoder and projector to assess the robustness of vision-language alignment; and \textit{Caption Attack}, which directly manipulates output token sequences to evaluate end-to-end system robustness. Each attack includes both untargeted and targeted variants to measure general vulnerability and susceptibility to controlled manipulation. Our experiments reveal that 3D VLMs exhibit significant adversarial vulnerabilities under untargeted attacks, while demonstrating greater resilience against targeted attacks aimed at forcing specific harmful outputs, compared to their 2D counterparts. These findings highlight the importance of improving the adversarial robustness of 3D VLMs, especially as they are deployed in safety-critical applications.
|
https://arxiv.org/abs/2601.06464
|
Academic Papers
|
svg
|
d9d7c1db13d0008a5f6554baa84c054c1771fae91f6dda131380cac280e1bc90
|
2026-01-13T00:00:00-05:00
|
SecureDyn-FL: A Robust Privacy-Preserving Federated Learning Framework for Intrusion Detection in IoT Networks
|
arXiv:2601.06466v1 Announce Type: new Abstract: The rapid proliferation of Internet of Things (IoT) devices across domains such as smart homes, industrial control systems, and healthcare networks has significantly expanded the attack surface for cyber threats, including botnet-driven distributed denial-of-service (DDoS), malware injection, and data exfiltration. Conventional intrusion detection systems (IDS) face critical challenges like privacy, scalability, and robustness when applied in such heterogeneous IoT environments. To address these issues, we propose SecureDyn-FL, a comprehensive and robust privacy-preserving federated learning (FL) framework tailored for intrusion detection in IoT networks. SecureDyn-FL is designed to simultaneously address multiple security dimensions in FL-based IDS: (1) poisoning detection through dynamic temporal gradient auditing, (2) privacy protection against inference and eavesdropping attacks through secure aggregation, and (3) adaptation to heterogeneous non-IID data via personalized learning. The framework introduces three core contributions: (i) a dynamic temporal gradient auditing mechanism that leverages Gaussian mixture models (GMMs) and Mahalanobis distance (MD) to detect stealthy and adaptive poisoning attacks, (ii) an optimized privacy-preserving aggregation scheme based on transformed additive ElGamal encryption with adaptive pruning and quantization for secure and efficient communication, and (iii) a dual-objective personalized learning strategy that improves model adaptation under non-IID data using logit-adjusted loss. Extensive experiments on the N-BaIoT dataset under both IID and non-IID settings, including scenarios with up to 50% adversarial clients, demonstrate that SecureDyn-FL consistently outperforms state-of-the-art FL-based IDS defenses.
|
https://arxiv.org/abs/2601.06466
|
Academic Papers
|
svg
|
456653ecc907c7257a1393eff25bc3a7a75b66660ef330de34eac1f5ec08b0bc
|
2026-01-13T00:00:00-05:00
|
Style-constrained inverse design of microstructures with tailored mechanical properties using unconditional diffusion models
|
arXiv:2601.06469v1 Announce Type: new Abstract: Deep generative models, particularly denoising diffusion models, have achieved remarkable success in high-fidelity generation of architected microstructures with desired properties and styles. Nevertheless, these recent methods typically rely on conditional training mechanisms and demand substantial computational effort to prepare the labeled training dataset, which makes them inflexible since any change in the governing equations or boundary conditions requires a complete retraining process. In this study, we propose a new inverse design framework that integrates unconditional denoising diffusion models with differentiable programming techniques for architected microstructure generation. Our approach eliminates the need for expensive labeled dataset preparation and retraining for different problem settings. By reinterpreting the noise input to the diffusion model as an optimizable design variable, we formulate the design task as an optimization problem over the noise input, enabling control over the reverse denoising trajectory to guide the generated microstructure toward the desired mechanical properties while preserving the stylistic constraints encoded in the training dataset. A unified differentiation pipeline via vector-Jacobian product concatenations is developed to enable end-to-end gradient evaluation through backpropagation. Several numerical examples, ranging from the design of microstructures with specified homogenized properties to those with targeted hyperelastic and elasto-plastic behaviors, showcase the effectiveness of the framework and its potential for advanced design tasks involving diverse performance and style requirements.
|
https://arxiv.org/abs/2601.06469
|
Academic Papers
|
svg
|
e91a0d54ab7735062e8d2192118ec1ce62a27564b5d1643db064b431656b21ff
|
2026-01-13T00:00:00-05:00
|
PRISP: Privacy-Safe Few-Shot Personalization via Lightweight Adaptation
|
arXiv:2601.06471v1 Announce Type: new Abstract: Large language model (LLM) personalization aims to adapt general-purpose models to individual users. Most existing methods, however, are developed under data-rich and resource-abundant settings, often incurring privacy risks. In contrast, realistic personalization typically occurs after deployment under (i) extremely limited user data, (ii) constrained computational resources, and (iii) strict privacy requirements. We propose PRISP, a lightweight and privacy-safe personalization framework tailored to these constraints. PRISP leverages a Text-to-LoRA hypernetwork to generate task-aware LoRA parameters from task descriptions, and enables efficient user personalization by optimizing a small subset of task-aware LoRA parameters together with minimal additional modules using few-shot user data. Experiments on a few-shot variant of the LaMP benchmark demonstrate that PRISP achieves strong overall performance compared to prior approaches, while reducing computational overhead and eliminating privacy risks.
|
https://arxiv.org/abs/2601.06471
|
Academic Papers
|
svg
|
67aa1ab544eac22dfbc6153f0661ae9d8dbc13a5605dc8bd7b158ba53b3489ba
|
2026-01-13T00:00:00-05:00
|
StablePDENet: Enhancing Stability of Operator Learning for Solving Differential Equations
|
arXiv:2601.06472v1 Announce Type: new Abstract: Learning solution operators for differential equations with neural networks has shown great potential in scientific computing, but ensuring their stability under input perturbations remains a critical challenge. This paper presents a robust self-supervised neural operator framework that enhances stability through adversarial training while preserving accuracy. We formulate operator learning as a min-max optimization problem, where the model is trained against worst-case input perturbations to achieve consistent performance under both normal and adversarial conditions. We demonstrate that our method not only achieves good performance on standard inputs, but also maintains high fidelity under adversarial perturbed inputs. The results highlight the importance of stability-aware training in operator learning and provide a foundation for developing reliable neural PDE solvers in real-world applications, where input noise and uncertainties are inevitable.
|
https://arxiv.org/abs/2601.06472
|
Academic Papers
|
svg
|
6f2db3a578685fa47a6a40120eaeb7b22ffe6c4c811ad543a9bc55dc93b2437d
|
2026-01-13T00:00:00-05:00
|
Hybrid LSTM-UKF Framework: Ankle Angle and Ground Reaction Force Estimation
|
arXiv:2601.06473v1 Announce Type: new Abstract: Accurate prediction of joint kinematics and kinetics is essential for advancing gait analysis and developing intelligent assistive systems such as prosthetics and exoskeletons. This study presents a hybrid LSTM-UKF framework for estimating ankle angle and ground reaction force (GRF) across varying walking speeds. A multimodal sensor fusion strategy integrates force plate data, knee angle, and GRF signals to enrich biomechanical context. Model performance was evaluated using RMSE and $R^2$ under subject-specific validation. The LSTM-UKF consistently outperformed standalone LSTM and UKF models, achieving up to 18.6\% lower RMSE for GRF prediction at 3 km/h. Additionally, UKF integration improved robustness, reducing ankle angle RMSE by up to 22.4\% compared to UKF alone at 1 km/h. These results underscore the effectiveness of hybrid architectures for reliable gait prediction across subjects and walking conditions.
|
https://arxiv.org/abs/2601.06473
|
Academic Papers
|
svg
|
4ffd32b8e48fc983dd8adf5c353184f3feb336aa2401738273a444d860fead80
|
2026-01-13T00:00:00-05:00
|
SparseOccVLA: Bridging Occupancy and Vision-Language Models via Sparse Queries for Unified 4D Scene Understanding and Planning
|
arXiv:2601.06474v1 Announce Type: new Abstract: In autonomous driving, Vision Language Models (VLMs) excel at high-level reasoning, whereas semantic occupancy provides fine-grained details. Despite significant progress in individual fields, there is still no method that can effectively integrate both paradigms. Conventional VLMs struggle with token explosion and limited spatiotemporal reasoning, while semantic occupancy provides a unified, explicit spatial representation but is too dense to integrate efficiently with VLMs. To address these challenges and bridge the gap between VLMs and occupancy, we propose SparseOccVLA, a novel vision-language-action model that unifies scene understanding, occupancy forecasting, and trajectory planning powered by sparse occupancy queries. Starting with a lightweight Sparse Occupancy Encoder, SparseOccVLA generates compact yet highly informative sparse occupancy queries that serve as the single bridge between vision and language. These queries are aligned into the language space and reasoned over by the LLM for unified scene understanding and future occupancy forecasting. Furthermore, we introduce an LLM-guided Anchor-Diffusion Planner featuring decoupled anchor scoring and denoising, as well as cross-model trajectory-condition fusion. SparseOccVLA achieves a 7% relative improvement in CIDEr over the state-of-the-art on OmniDrive-nuScenes, a 0.5 increase in mIoU score on Occ3D-nuScenes, and achieves state-of-the-art open-loop planning performance on the nuScenes benchmark, demonstrating its strong holistic capability.
|
https://arxiv.org/abs/2601.06474
|
Academic Papers
|
svg
|
1116d4daf0fbf73ebe601c8bd2cc72557ca3f568dafc027c3d6adddeec5e74d5
|
2026-01-13T00:00:00-05:00
|
VVTRec: Radio Interferometric Reconstruction through Visual and Textual Modality Enrichment
|
arXiv:2601.06475v1 Announce Type: new Abstract: Radio astronomy is an indispensable discipline for observing distant celestial objects. Measurements of wave signals from radio telescopes, called visibility, need to be transformed into images for astronomical observations. These dirty images blend information from real sources and artifacts. Therefore, astronomers usually perform reconstruction before imaging to obtain cleaner images. Existing methods consider only a single modality of sparse visibility data, resulting in images with remaining artifacts and insufficient modeling of correlation. To enhance the extraction of visibility information and emphasize output quality in the image domain, we propose VVTRec, a multimodal radio interferometric data reconstruction method with visibility-guided visual and textual modality enrichment. In our VVTRec, sparse visibility is transformed into image-form and text-form features to obtain enhancements in terms of spatial and semantic information, improving the structural integrity and accuracy of images. Also, we leverage Vision-Language Models (VLMs) to achieve additional training-free performance improvements. VVTRec enables sparse visibility, as a foreign modality unseen by VLMs, to accurately extract pre-trained knowledge as a supplement. Our experiments demonstrate that VVTRec effectively enhances imaging results by exploiting multimodal information without introducing excessive computational overhead.
|
https://arxiv.org/abs/2601.06475
|
Academic Papers
|
svg
|
d06fca3af8cab92fa12b18fd0d36469692ae9bf38ba5b3952cb2d01fbeb51fb9
|
2026-01-13T00:00:00-05:00
|
IndRegBias: A Dataset for Studying Indian Regional Biases in English and Code-Mixed Social Media Comments
|
arXiv:2601.06477v1 Announce Type: new Abstract: Warning: This paper consists of examples representing regional biases in Indian regions that might be offensive towards a particular region. While social biases corresponding to gender, race, socio-economic conditions, etc., have been extensively studied in the major applications of Natural Language Processing (NLP), biases corresponding to regions have garnered less attention. This is mainly because of (i) difficulty in the extraction of regional bias datasets, (ii) disagreements in annotation due to inherent human biases, and (iii) regional biases being studied in combination with other types of social biases and often being under-represented. This paper focuses on creating a dataset IndRegBias, consisting of regional biases in an Indian context reflected in users' comments on popular social media platforms, namely Reddit and YouTube. We carefully selected 25,000 comments appearing on various threads in Reddit and videos on YouTube discussing trending topics on regional issues in India. Furthermore, we propose a multilevel annotation strategy to annotate the comments describing the severity of regional biased statements. To detect the presence of regional bias and its severity in IndRegBias, we evaluate open-source Large Language Models (LLMs) and Indic Language Models (ILMs) using zero-shot, few-shot, and fine-tuning strategies. We observe that zero-shot and few-shot approaches show lower accuracy in detecting regional biases and severity in the majority of the LLMs and ILMs. However, the fine-tuning approach significantly enhances the performance of the LLM in detecting Indian regional bias along with its severity.
|
https://arxiv.org/abs/2601.06477
|
Academic Papers
|
svg
|
0358f3c8a3bbaf446302d7e3e739b2ad566099ddbeb69305cf115eb8325e025d
|
2026-01-13T00:00:00-05:00
|
Deriving Decoder-Free Sparse Autoencoders from First Principles
|
arXiv:2601.06478v1 Announce Type: new Abstract: Gradient descent on log-sum-exp (LSE) objectives performs implicit expectation--maximization (EM): the gradient with respect to each component output equals its responsibility. The same theory predicts collapse without volume control analogous to the log-determinant in Gaussian mixture models. We instantiate the theory in a single-layer encoder with an LSE objective and InfoMax regularization for volume control. Experiments confirm the theory's predictions. The gradient--responsibility identity holds exactly; LSE alone collapses; variance prevents dead components; decorrelation prevents redundancy. The model exhibits EM-like optimization dynamics in which lower loss does not correspond to better features and adaptive optimizers offer no advantage. The resulting decoder-free model learns interpretable mixture components, confirming that implicit EM theory can prescribe architectures.
|
https://arxiv.org/abs/2601.06478
|
Academic Papers
|
svg
|
e6a5c145f57ca025c0712c9e9800abbfefe5011576844ba333c412a9d68084b5
|
2026-01-13T00:00:00-05:00
|
SRFlow: A Dataset and Regularization Model for High-Resolution Facial Optical Flow via Splatting Rasterization
|
arXiv:2601.06479v1 Announce Type: new Abstract: Facial optical flow supports a wide range of tasks in facial motion analysis. However, the lack of high-resolution facial optical flow datasets has hindered progress in this area. In this paper, we introduce Splatting Rasterization Flow (SRFlow), a high-resolution facial optical flow dataset, and Splatting Rasterization Guided FlowNet (SRFlowNet), a facial optical flow model with tailored regularization losses. These losses constrain flow predictions using masks and gradients computed via difference or Sobel operator. This effectively suppresses high-frequency noise and large-scale errors in texture-less or repetitive-pattern regions, enabling SRFlowNet to be the first model explicitly capable of capturing high-resolution skin motion guided by Gaussian splatting rasterization. Experiments show that training with the SRFlow dataset improves facial optical flow estimation across various optical flow models, reducing end-point error (EPE) by up to 42% (from 0.5081 to 0.2953). Furthermore, when coupled with the SRFlow dataset, SRFlowNet achieves up to a 48% improvement in F1-score (from 0.4733 to 0.6947) on a composite of three micro-expression datasets. These results demonstrate the value of advancing both facial optical flow estimation and micro-expression recognition.
|
https://arxiv.org/abs/2601.06479
|
Academic Papers
|
svg
|
52e3d728575461552e5e2d20d12197e679fe734917be6329cccdc1da4f08dc70
|
2026-01-13T00:00:00-05:00
|
Learning Domain Agnostic Latent Embeddings of 3D Faces for Zero-shot Animal Expression Transfer
|
arXiv:2601.06484v1 Announce Type: new Abstract: We present a zero-shot framework for transferring human facial expressions to 3D animal face meshes. Our method combines intrinsic geometric descriptors (HKS/WKS) with a mesh-agnostic latent embedding that disentangles facial identity and expression. The ID latent space captures species-independent facial structure, while the expression latent space encodes deformation patterns that generalize across humans and animals. Trained only with human expression pairs, the model learns the embeddings, decoupling, and recoupling of cross-identity expressions, enabling expression transfer without requiring animal expression data. To enforce geometric consistency, we employ Jacobian loss together with vertex-position and Laplacian losses. Experiments show that our approach achieves plausible cross-species expression transfer, effectively narrowing the geometric gap between human and animal facial shapes.
|
https://arxiv.org/abs/2601.06484
|
Academic Papers
|
svg
|
7e3f5e41593a93ac58c26cd963cf712ce6d7c6ecd6f5fd14aa3d64f35853e8b4
|
2026-01-13T00:00:00-05:00
|
Coupling Smoothed Particle Hydrodynamics with Multi-Agent Deep Reinforcement Learning for Cooperative Control of Point Absorbers
|
arXiv:2601.06485v1 Announce Type: new Abstract: Wave Energy Converters, particularly point absorbers, have emerged as one of the most promising technologies for harvesting ocean wave energy. Nevertheless, achieving high conversion efficiency remains challenging due to the inherently complex and nonlinear interactions between incident waves and device motion dynamics. This study develops an optimal adaptive damping control model for the power take-off (PTO) system by coupling Smoothed Particle Hydrodynamics (SPH) with multi-agent deep reinforcement learning. The proposed framework enables real-time communication between high-fidelity SPH simulations and intelligent control agents that learn coordinated policies to maximise energy capture. In each training episode, the SPH-based environment provides instantaneous hydrodynamic states to the agents, which output continuous damping actions and receive rewards reflecting power absorption. The Multi-Agent Soft Actor Critic algorithm is employed within a centralised-training and decentralised-execution scheme to ensure stable learning in continuous, multi-body systems. The entire platform is implemented in a unified GPU-accelerated C++ environment, allowing long-horizon training and large-scale three-dimensional simulations. The approach is validated through a series of two-dimensional and three-dimensional benchmark cases under regular and irregular wave conditions. Compared with constant PTO damping, the learned control policy increases overall energy capture by 23.8% and 21.5%, respectively, demonstrating the strong potential of intelligent control for improving the performance of wave energy converter arrays. The developed three-dimensional GPU-accelerated multi-agent platform in computational hydrodynamics is extendable to other fluid-structure interaction engineering problems that require real-time, multi-body coordinated control.
|
https://arxiv.org/abs/2601.06485
|
Academic Papers
|
svg
|
229f3ad755ae0a90dcb92579da88bee47a7fd4042564f681013706102b672c6a
|
2026-01-13T00:00:00-05:00
|
ArenaRL: Scaling RL for Open-Ended Agents via Tournament-based Relative Ranking
|
arXiv:2601.06487v1 Announce Type: new Abstract: Reinforcement learning has substantially improved the performance of LLM agents on tasks with verifiable outcomes, but it still struggles on open-ended agent tasks with vast solution spaces (e.g., complex travel planning). Due to the absence of objective ground-truth for these tasks, current RL algorithms largely rely on reward models that assign scalar scores to individual responses. We contend that such pointwise scoring suffers from an inherent discrimination collapse: the reward model struggles to distinguish subtle advantages among different trajectories, resulting in scores within a group being compressed into a narrow range. Consequently, the effective reward signal becomes dominated by noise from the reward model, leading to optimization stagnation. To address this, we propose ArenaRL, a reinforcement learning paradigm that shifts from pointwise scalar scoring to intra-group relative ranking. ArenaRL introduces a process-aware pairwise evaluation mechanism, employing multi-level rubrics to assign fine-grained relative scores to trajectories. Additionally, we construct an intra-group adversarial arena and devise a tournament-based ranking scheme to obtain stable advantage signals. Empirical results confirm that the built seeded single-elimination scheme achieves nearly equivalent advantage estimation accuracy to full pairwise comparisons with O(N^2) complexity, while operating with only O(N) complexity, striking an optimal balance between efficiency and precision. Furthermore, to address the lack of full-cycle benchmarks for open-ended agents, we build Open-Travel and Open-DeepResearch, two high-quality benchmarks featuring a comprehensive pipeline covering SFT, RL training, and multi-dimensional evaluation. Extensive experiments show that ArenaRL substantially outperforms standard RL baselines, enabling LLM agents to generate more robust solutions for complex real-world tasks.
|
https://arxiv.org/abs/2601.06487
|
Academic Papers
|
svg
|
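The seeded single-elimination scheme described in the ArenaRL abstract can be sketched as follows. This is a hypothetical toy: the scalar `scores` judge stands in for the paper's rubric-based pairwise LLM evaluator, and ties within an elimination round are broken coarsely by bracket order.

```python
def single_elimination_rank(items, beats):
    """Seeded single-elimination bracket: N - 1 pairwise matches (O(N))
    instead of the N(N-1)/2 of a full round-robin.
    beats(a, b) -> True iff a wins the match against b.
    Returns items ranked best-first (coarse within a round)."""
    eliminated_round = {}
    contenders = list(items)
    rnd = 0
    while len(contenders) > 1:
        survivors = []
        for i in range(0, len(contenders) - 1, 2):
            a, b = contenders[i], contenders[i + 1]
            winner, loser = (a, b) if beats(a, b) else (b, a)
            eliminated_round[loser] = rnd
            survivors.append(winner)
        if len(contenders) % 2 == 1:  # odd one out receives a bye
            survivors.append(contenders[-1])
        contenders = survivors
        rnd += 1
    eliminated_round[contenders[0]] = rnd  # champion survives every round
    # Later elimination => better rank
    return sorted(items, key=lambda x: -eliminated_round[x])

# Toy judge: higher scalar "quality" wins every match
scores = {"t1": 0.2, "t2": 0.9, "t3": 0.5, "t4": 0.7}
ranking = single_elimination_rank(list(scores),
                                  lambda a, b: scores[a] >= scores[b])
assert ranking[0] == "t2"  # the best trajectory wins the bracket
```

For N = 4 trajectories this uses 3 comparisons instead of 6, matching the O(N) versus O(N^2) trade-off the abstract highlights.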
ca23fea03bdab78995a14f8745519c3723fe6b23721aed1b8a5f2c5dbdd07fd8
|
2026-01-13T00:00:00-05:00
|
Bi-Mem: Bidirectional Construction of Hierarchical Memory for Personalized LLMs via Inductive-Reflective Agents
|
arXiv:2601.06490v1 Announce Type: new Abstract: Constructing memory from users' long-term conversations overcomes LLMs' contextual limitations and enables personalized interactions. Recent studies focus on hierarchical memory to model users' multi-granular behavioral patterns via clustering and aggregating historical conversations. However, conversational noise and memory hallucinations can be amplified during clustering, causing locally aggregated memories to misalign with the user's global persona. To mitigate this issue, we propose Bi-Mem, an agentic framework ensuring hierarchical memory fidelity through bidirectional construction. Specifically, we deploy an inductive agent to form the hierarchical memory: it extracts factual information from raw conversations to form fact-level memory, aggregates them into thematic scenes (i.e., local scene-level memory) using graph clustering, and infers users' profiles as global persona-level memory. Simultaneously, a reflective agent is designed to calibrate local scene-level memories using global constraints derived from the persona-level memory, thereby enforcing global-local alignment. For coherent memory recall, we propose an associative retrieval mechanism: beyond initial hierarchical search, a spreading activation process allows facts to evoke contextual scenes, while scene-level matches retrieve salient supporting factual information. Empirical evaluations demonstrate that Bi-Mem achieves significant improvements in question answering performance on long-term personalized conversational tasks.
|
https://arxiv.org/abs/2601.06490
|
Academic Papers
|
svg
|
51c28a1c142baee3d5cd934159496c106a15f7946c1839776b6f5a1a86895576
|
2026-01-13T00:00:00-05:00
|
Algorithms for Computing the Petz-Augustin Capacity
|
arXiv:2601.06492v1 Announce Type: new Abstract: We propose the first algorithms with non-asymptotic convergence guarantees for computing the Petz-Augustin capacity, which generalizes the channel capacity and characterizes the optimal error exponent in classical-quantum channel coding. This capacity can be equivalently expressed as the maximization of two generalizations of mutual information: the Petz-R\'{e}nyi information and the Petz-Augustin information. To maximize the Petz-R\'{e}nyi information, we show that it corresponds to a convex H\"{o}lder-smooth optimization problem, and hence the universal fast gradient method of Nesterov (2015), along with its convergence guarantees, readily applies. Regarding the maximization of the Petz-Augustin information, we adopt a two-layered approach: we show that the objective function is smooth relative to the negative Shannon entropy and can be efficiently optimized by entropic mirror descent; each iteration of entropic mirror descent requires computing the Petz-Augustin information, for which we propose a novel fixed-point algorithm and establish its contractivity with respect to the Thompson metric. Notably, this two-layered approach can be viewed as a generalization of the mirror-descent interpretation of the Blahut-Arimoto algorithm due to He et al. (2024).
|
https://arxiv.org/abs/2601.06492
|
Academic Papers
|
svg
|
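The abstract above frames its two-layered approach as a generalization of the mirror-descent view of the Blahut-Arimoto algorithm. The classical (commutative) Blahut-Arimoto iteration it generalizes can be sketched in NumPy; the quantum Petz-Augustin setting itself is beyond this illustration.

```python
import numpy as np

def blahut_arimoto(W, iters=100):
    """Classical Blahut-Arimoto for channel capacity max_p I(p; W).
    W[x, y] = W(y | x), rows sum to 1. Returns (capacity in bits, p)."""
    n_x = W.shape[0]
    p = np.full(n_x, 1.0 / n_x)
    for _ in range(iters):
        q = p[:, None] * W                     # joint p(x) W(y|x)
        q = q / q.sum(axis=0, keepdims=True)   # posterior q(x|y)
        logq = np.where(W > 0, np.log(np.where(q > 0, q, 1.0)), 0.0)
        r = np.exp((W * logq).sum(axis=1))     # multiplicative update
        p = r / r.sum()
    # Mutual information at the final input distribution, in bits
    q_y = p @ W
    ratio = np.where(W > 0, W / q_y[None, :], 1.0)
    cap = float((p[:, None] * W * np.log2(ratio)).sum())
    return cap, p

# Binary symmetric channel, crossover 0.1: capacity = 1 - H2(0.1)
eps = 0.1
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
C, p = blahut_arimoto(W)
h2 = -eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps)
assert abs(C - (1 - h2)) < 1e-6
assert np.allclose(p, [0.5, 0.5], atol=1e-6)
```

For the binary symmetric channel the uniform input is optimal by symmetry, so the iteration settles immediately on the known capacity 1 - H2(eps); the multiplicative update of p is exactly the entropic mirror-descent step the abstract refers to.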
146eb287027464dc60276095bdb453221d2a641262523f3f2cd23c04359af534
|
2026-01-13T00:00:00-05:00
|
On the Number of Subsequences in the Nonbinary Deletion Channel
|
arXiv:2601.06493v1 Announce Type: new Abstract: In the deletion channel, an important problem is to determine the number of subsequences derived from a string $U$ of length $n$ when subjected to $t$ deletions. It is well-known that the number of subsequences in this setting exhibits a strong dependence on the number of runs in the string $U$, where a run is defined as a maximal substring of identical characters. In this paper, we study the number of subsequences of a non-binary string in this scenario and propose improved bounds on the number of subsequences of $r$-run non-binary strings. Specifically, we characterize a family of $r$-run non-binary strings with the maximum number of subsequences under any $t$ deletions, and show that this number can be computed in polynomial time.
|
https://arxiv.org/abs/2601.06493
|
Academic Papers
|
svg
|
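Counting the distinct subsequences of a string under exactly t deletions — the quantity the abstract above bounds — can be done with a standard dynamic program. This is a generic textbook DP, not the paper's construction; it is cross-checked here against brute-force enumeration.

```python
from itertools import combinations

def count_subseq(s, t):
    """Number of distinct subsequences of s obtainable by exactly t
    deletions, i.e. distinct subsequences of length len(s) - t."""
    n, m = len(s), len(s) - t
    if m < 0:
        return 0
    # f[i][j]: distinct length-j subsequences of the prefix s[:i]
    f = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        f[i][0] = 1
    last = {}  # last occurrence (1-indexed) of each character
    for i in range(1, n + 1):
        c = s[i - 1]
        for j in range(1, m + 1):
            f[i][j] = f[i - 1][j] + f[i - 1][j - 1]
            if c in last:  # remove subsequences counted twice for repeated c
                f[i][j] -= f[last[c] - 1][j - 1]
        last[c] = i
    return f[n][m]

def brute(s, t):
    m = len(s) - t
    return len({"".join(c) for c in combinations(s, m)})

# Strings with few runs (e.g. "aaabbb") have markedly fewer subsequences
for s in ["abcabc", "aaabbb", "abab", "xyzzy"]:
    for t in range(len(s) + 1):
        assert count_subseq(s, t) == brute(s, t)
```

The subtraction for repeated characters is where the run structure enters: a string with fewer runs triggers it more often, which is the dependence on r that the abstract's bounds make precise.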
0b59aa4fb12c24ab98b8c3dfba234fecddfa1a64ed949e80f0ff9072b32a4134
|
2026-01-13T00:00:00-05:00
|
3D CoCa v2: Contrastive Learners with Test-Time Search for Generalizable Spatial Intelligence
|
arXiv:2601.06496v1 Announce Type: new Abstract: Spatial intelligence refers to the ability to perceive, reason about, and describe objects and their relationships within three-dimensional environments, forming a foundation for embodied perception and scene understanding. 3D captioning aims to describe 3D scenes in natural language; however, it remains challenging due to the sparsity and irregularity of point clouds and, more critically, the weak grounding and limited out-of-distribution (OOD) generalization of existing captioners across drastically different environments, including indoor and outdoor 3D scenes. To address this challenge, we propose 3D CoCa v2, a generalizable 3D captioning framework that unifies contrastive vision-language learning with 3D caption generation and further improves robustness via test-time search (TTS) without updating the captioner parameters. 3D CoCa v2 builds on a frozen CLIP-based semantic prior, a spatially-aware 3D scene encoder for geometry, and a multimodal decoder jointly optimized with contrastive and captioning objectives, avoiding external detectors or handcrafted proposals. At inference, TTS produces diverse caption candidates and performs reward-guided selection using a compact scene summary. Experiments show improvements over 3D CoCa of +1.50 CIDEr@0.5IoU on ScanRefer and +1.61 CIDEr@0.5IoU on Nr3D, and +3.8 CIDEr@0.25 in zero-shot OOD evaluation on TOD3Cap. Code will be released at https://github.com/AIGeeksGroup/3DCoCav2.
|
https://arxiv.org/abs/2601.06496
|
Academic Papers
|
svg
|
669da0dfab078d92f41468c796e522b42bbf1dc8798ad49114e594b392ad94ff
|
2026-01-13T00:00:00-05:00
|
Coding in a Bubble? Evaluating LLMs in Resolving Context Adaptation Bugs During Code Adaptation
|
arXiv:2601.06497v1 Announce Type: new Abstract: Code adaptation is a fundamental but challenging task in software development, requiring developers to modify existing code for new contexts. A key challenge is to resolve Context Adaptation Bugs (CtxBugs), which occur when code correct in its original context violates constraints in the target environment. Unlike isolated bugs, CtxBugs cannot be resolved through local fixes and require cross-context reasoning to identify semantic mismatches. Overlooking them may lead to critical failures in adaptation. Although Large Language Models (LLMs) show great potential in automating code-related tasks, their ability to resolve CtxBugs remains a significant and unexplored obstacle to their practical use in code adaptation. To bridge this gap, we propose CtxBugGen, a novel framework for generating CtxBugs to evaluate LLMs. Its core idea is to leverage LLMs' tendency to generate plausible but context-free code when contextual constraints are absent. The framework generates CtxBugs through a four-step process to ensure their relevance and validity: (1) Adaptation Task Selection, (2) Task-specific Perturbation, (3) LLM-based Variant Generation, and (4) CtxBugs Identification. Based on the benchmark constructed by CtxBugGen, we conduct an empirical study with four state-of-the-art LLMs. Our results reveal their unsatisfactory performance in CtxBug resolution. The best-performing LLM, Kimi-K2, achieves 55.93% on Pass@1 and resolves just 52.47% of CtxBugs. The presence of CtxBugs degrades LLMs' adaptation performance by up to 30%. Failure analysis indicates that LLMs often overlook CtxBugs and replicate them in their outputs. Our study highlights a critical weakness in LLMs' cross-context reasoning and emphasizes the need for new methods to enhance their context awareness for reliable code adaptation.
|
https://arxiv.org/abs/2601.06497
|
Academic Papers
|
svg
|
34ca377093e9742f8e9d133d904372d77f02fd3cd0edb19b2d8f7641db69d2b6
|
2026-01-13T00:00:00-05:00
|
Spec-o3: A Tool-Augmented Vision-Language Agent for Rare Celestial Object Candidate Vetting via Automated Spectral Inspection
|
arXiv:2601.06498v1 Announce Type: new Abstract: Due to the limited generalization and interpretability of deep learning classifiers, the final vetting of rare celestial object candidates still relies on expert visual inspection, a labor-intensive process. In this process, astronomers leverage specialized tools to analyze spectra and construct reliable catalogs. However, this practice has become the primary bottleneck, as it is fundamentally incapable of scaling with the data deluge from modern spectroscopic surveys. To bridge this gap, we propose Spec-o3, a tool-augmented vision-language agent that performs astronomer-aligned spectral inspection via interleaved multimodal chain-of-thought reasoning. Spec-o3 is trained with a two-stage post-training recipe: cold-start supervised fine-tuning on expert inspection trajectories followed by outcome-based reinforcement learning on rare-type verification tasks. Evaluated on five rare-object identification tasks from LAMOST, Spec-o3 establishes a new State-of-the-Art, boosting the macro-F1 score from 28.3 to 76.5 with a 7B parameter base model and outperforming both proprietary VLMs and specialized deep models. Crucially, the agent demonstrates strong generalization to unseen inspection tasks across survey shifts (from LAMOST to SDSS/DESI). Expert evaluations confirm that its reasoning traces are coherent and physically consistent, supporting transparent and trustworthy decision-making. Code, data, and models are available at \href{https://github.com/Maxwell-Jia/spec-o3}{Project HomePage}.
|
https://arxiv.org/abs/2601.06498
|
Academic Papers
|
svg
|
46a0d82b8b02331d5d0f5d9b765e0c4c2f7ed06b2e273237fccb4346c1bedd0e
|
2026-01-13T00:00:00-05:00
|
The AI Pyramid A Conceptual Framework for Workforce Capability in the Age of AI
|
arXiv:2601.06500v1 Announce Type: new Abstract: Artificial intelligence (AI) represents a qualitative shift in technological change by extending cognitive labor itself rather than merely automating routine tasks. Recent evidence shows that generative AI disproportionately affects highly educated, white collar work, challenging existing assumptions about workforce vulnerability and rendering traditional approaches to digital or AI literacy insufficient. This paper introduces the concept of AI Nativity, the capacity to integrate AI fluidly into everyday reasoning, problem solving, and decision making, and proposes the AI Pyramid, a conceptual framework for organizing human capability in an AI mediated economy. The framework distinguishes three interdependent capability layers: AI Native capability as a universal baseline for participation in AI augmented environments; AI Foundation capability for building, integrating, and sustaining AI enabled systems; and AI Deep capability for advancing frontier AI knowledge and applications. Crucially, the pyramid is not a career ladder but a system level distribution of capabilities required at scale. Building on this structure, the paper argues that effective AI workforce development requires treating capability formation as infrastructure rather than episodic training, centered on problem based learning embedded in work contexts and supported by dynamic skill ontologies and competency based measurement. The framework has implications for organizations, education systems, and governments seeking to align learning, measurement, and policy with the evolving demands of AI mediated work, while addressing productivity, resilience, and inequality at societal scale.
|
https://arxiv.org/abs/2601.06500
|
Academic Papers
|
svg
|
9edd211a68bcc33a8850823736bc48ecd4fe632046f9013813d18a18fcfc5e4c
|
2026-01-13T00:00:00-05:00
|
Coding for Fading Channels with Imperfect CSI at the Transmitter and Quantized Feedback
|
arXiv:2601.06501v1 Announce Type: new Abstract: The classical Schalkwijk-Kailath (SK) scheme for the additive Gaussian noise channel with noiseless feedback is highly efficient since its coding complexity is extremely low and the decoding error doubly exponentially decays as the coding blocklength tends to infinity. However, how to extend the SK scheme to channel models with memory has yet to be solved. In this paper, we first investigate how to design an SK-type scheme for the 2-path quasi-static fading channel with noiseless feedback. By viewing the signal of the second path as a relay and adopting an amplify-and-forward (AF) relay strategy, we show that the interference path signal can help to enhance the transmission rate. Beyond this, we also present an SK-type scheme for arbitrary multi-path fading channels with feedback, which transforms the time-domain channel into a frequency-domain MIMO channel.
|
https://arxiv.org/abs/2601.06501
|
Academic Papers
|
svg
|
c9aa2d9e2705bc878288ee571bb9c27c19f04a3551b02a4d71c1eabbb63bfeab
|
2026-01-13T00:00:00-05:00
|
DRAGON: LLM-Driven Decomposition and Reconstruction Agents for Large-Scale Combinatorial Optimization
|
arXiv:2601.06502v1 Announce Type: new Abstract: Large Language Models (LLMs) have recently shown promise in addressing combinatorial optimization problems (COPs) through prompt-based strategies. However, their scalability and generalization remain limited, and their effectiveness diminishes as problem size increases, particularly in routing problems involving more than 30 nodes. We propose DRAGON, which stands for Decomposition and Reconstruction Agents Guided OptimizatioN, a novel framework that combines the strengths of metaheuristic design and LLM reasoning. Starting from an initial global solution, DRAGON autonomously identifies regions with high optimization potential and strategically decomposes large-scale COPs into manageable subproblems. Each subproblem is then reformulated as a concise, localized optimization task and solved through targeted LLM prompting guided by accumulated experiences. Finally, the locally optimized solutions are systematically reintegrated into the original global context to yield a significantly improved overall outcome. By continuously interacting with the optimization environment and leveraging an adaptive experience memory, the agents iteratively learn from feedback, effectively coupling symbolic reasoning with heuristic search. Empirical results show that, unlike existing LLM-based solvers limited to small-scale instances, DRAGON consistently produces feasible solutions on TSPLIB, CVRPLIB, and Weibull-5k bin packing benchmarks, and achieves near-optimal results (0.16% gap) on knapsack problems with over 3M variables. This work shows the potential of feedback-driven language agents as a new paradigm for generalizable and interpretable large-scale optimization.
|
https://arxiv.org/abs/2601.06502
|
Academic Papers
|
svg
|
8cf1727222a955c8ad53e245c47f0fbecd730a66aca66f9584d9f7a025886ff8
|
2026-01-13T00:00:00-05:00
|
Some New Results on Sequence Reconstruction Problem for Deletion Channels
|
arXiv:2601.06503v1 Announce Type: new Abstract: Levenshtein first introduced the sequence reconstruction problem in 2001. In the realm of combinatorics, the sequence reconstruction problem is equivalent to determining the value of $N(n,d,t)$, which represents the maximum size of the intersection of two metric balls of radius $t$, given that the distance between their centers is at least $d$ and the sequence length is $n$. In this paper, we present a lower bound on $N(n,3,t)$ for $n\geq 13$ and $t \geq 4$. For $t=4$, we prove that this lower bound is tight. This settles an open question posed by Pham, Goyal, and Kiah, confirming that $N(n,3,4)=20n-166$ for all $n \geq 13$.
|
https://arxiv.org/abs/2601.06503
|
Academic Papers
|
svg
|
cdcc06daee324b9f68525d647ffbf1aa411c2955d39147fc3ec8ca7b35812008
|
2026-01-13T00:00:00-05:00
|
Neural Nonmyopic Bayesian Optimization in Dynamic Cost Settings
|
arXiv:2601.06505v1 Announce Type: new Abstract: Bayesian optimization (BO) is a common framework for optimizing black-box functions, yet most existing methods assume static query costs and rely on myopic acquisition strategies. We introduce LookaHES, a nonmyopic BO framework designed for dynamic, history-dependent cost environments, where evaluation costs vary with prior actions, such as travel distance in spatial tasks or edit distance in sequence design. LookaHES combines a multi-step variant of $H$-Entropy Search with pathwise sampling and neural policy optimization, enabling long-horizon planning beyond twenty steps without the exponential complexity of existing nonmyopic methods. The key innovation is the integration of neural policies, including large language models, to effectively navigate structured, combinatorial action spaces such as protein sequences. These policies amortize lookahead planning and can be integrated with domain-specific constraints during rollout. Empirically, LookaHES outperforms strong myopic and nonmyopic baselines across nine synthetic benchmarks from two to eight dimensions and two real-world tasks: geospatial optimization using NASA night-light imagery and protein sequence design with constrained token-level edits. In short, LookaHES provides a general, scalable, and cost-aware solution for robust long-horizon optimization in complex decision spaces, which makes it a useful tool for researchers in machine learning, statistics, and applied domains. Our implementation is available at https://github.com/sangttruong/nonmyopia.
|
https://arxiv.org/abs/2601.06505
|
Academic Papers
|
svg
|
9f9448af963e363284e492c4fc648573fe1bc953b5b0288201d33400bb601434
|
2026-01-13T00:00:00-05:00
|
Precision Meets Art: Autonomous Multi-UAV System for Large Scale Mural Drawing
|
arXiv:2601.06508v1 Announce Type: new Abstract: The integration of autonomous unmanned aerial vehicles (UAVs) into large-scale artistic projects has emerged as a new application in robotics. This paper presents the design, deployment, and testing of a novel multi-drone system for automated mural painting in outdoor settings. This technology makes use of new software that coordinates multiple drones simultaneously, utilizing state-machine algorithms for task execution. Key advancements are a complex positioning system that combines 2D localization using a single motion-tracking camera with onboard LiDAR for precise positioning, and a novel flight control algorithm that works differently along the trajectory and normal to it, ensuring both smoothness and high precision of the drawings. A 100-square-meter mural was created using the developed multi-drone system, validating the system's efficacy. Compared to single-drone approaches, our multi-UAV solution significantly improves scalability and operational speed while maintaining high stability even in harsh weather conditions. The findings highlight the potential of autonomous robotic swarms in creative applications, paving the way for further advancements in large-scale robotic art.
|
https://arxiv.org/abs/2601.06508
|
Academic Papers
|
svg
|
77b372e7a57ec8593b15ccf5d7cb43bbbe916784157e9b3e47d33a967a15a83a
|
2026-01-13T00:00:00-05:00
|
A novel RF-enabled Non-Destructive Inspection Method through Machine Learning and Programmable Wireless Environments
|
arXiv:2601.06512v1 Announce Type: new Abstract: Contemporary industrial Non-Destructive Inspection (NDI) methods require sensing capabilities that operate in occluded, hazardous, or access restricted environments. Yet, current visual inspection based on optical cameras offers limited quality of service in that respect. In that sense, novel workpiece inspection methods suitable for smart manufacturing are needed. Programmable Wireless Environments (PWE) could help towards that direction, by redefining the wireless Radio Frequency (RF) wave propagation as a controllable inspector entity. In this work, we propose a novel approach to Non-Destructive Inspection, leveraging an RF sensing pipeline based on RF wavefront encoding for retrieving workpiece-image entries from a designated database. This approach combines PWE-enabled RF wave manipulation with machine learning (ML) tools trained to produce visual outputs for quality inspection. Specifically, we establish correlation relationships between RF wavefronts and target industrial assets, hence yielding a dataset which links wavefronts to their corresponding images in a structured manner. Subsequently, a Generative Adversarial Network (GAN) derives visual representations closely matching the database entries. Our results indicate that the proposed method achieves an SSIM matching score of 99.5% in visual outputs, paving the way for next-generation quality control workflows in industry.
|
https://arxiv.org/abs/2601.06512
|
Academic Papers
|
svg
|
0f98c833d92b7bb236f0a26e782e8051d65ce6d34f06d6bb518cdb2aefb8564d
|
2026-01-13T00:00:00-05:00
|
Convergence Analysis of Weighted Median Opinion Dynamics with Higher-Order Effects
|
arXiv:2601.06515v1 Announce Type: new Abstract: The weighted median mechanism provides a robust alternative to weighted averaging in opinion dynamics. Existing models, however, are predominantly formulated on pairwise interaction graphs, which limits their ability to represent higher-order environmental effects. In this work, a generalized weighted median opinion dynamics model is proposed by incorporating high-order interactions through a simplicial complex representation. The resulting dynamics are formulated as a nonlinear discrete-time system with synchronous opinion updates, in which intrinsic agent interactions and external environmental influences are jointly modeled. Sufficient conditions for asymptotic consensus are established for heterogeneous systems composed of opinionated and unopinionated agents. For homogeneous opinionated systems, convergence and convergence rates are rigorously analyzed using the Banach fixed-point theorem. Theoretical results demonstrate the stability of the proposed dynamics under mild conditions, and numerical simulations are provided to corroborate the analysis. This work extends median-based opinion dynamics to high-order interaction settings and provides a system-level framework for stability and consensus analysis.
|
https://arxiv.org/abs/2601.06515
|
Academic Papers
|
svg
|
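The basic synchronous weighted-median update underlying the model above can be sketched in plain Python. This shows the pairwise part only; the paper's higher-order simplicial terms and its opinionated/unopinionated distinction are omitted.

```python
def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half the total."""
    total = sum(weights)
    acc = 0.0
    for v, w in sorted(zip(values, weights)):
        acc += w
        if acc >= total / 2:
            return v

def step(opinions, W):
    """Synchronous update: agent i adopts the weighted median of all
    opinions under its own weight row W[i]."""
    return [weighted_median(opinions, W[i]) for i in range(len(opinions))]

# Complete interaction graph with uniform weights
n = 5
W = [[1.0] * n for _ in range(n)]
x = [0.0, 1.0, 2.0, 3.0, 10.0]
x1 = step(x, W)
assert x1 == [2.0] * n      # all agents adopt the median, ignoring the outlier 10.0
assert step(x1, W) == x1    # the consensus state is a fixed point
```

The example also illustrates why the median is the robust alternative to averaging that the abstract mentions: the extreme opinion 10.0 does not drag the consensus value upward, whereas a weighted average would land at 3.2.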
66709d3630701030ee73f93e3569d6861952638de29de62ec46b1af65ba68228
|
2026-01-13T00:00:00-05:00
|
Pareto-Optimal Model Selection for Low-Cost, Single-Lead EMG Control in Embedded Systems
|
arXiv:2601.06516v1 Announce Type: new Abstract: Consumer-grade biosensors offer a cost-effective alternative to medical-grade electromyography (EMG) systems, reducing hardware costs from thousands of dollars to approximately $13. However, these low-cost sensors introduce significant signal instability and motion artifacts. Deploying machine learning models on resource-constrained edge devices like the ESP32 presents a challenge: balancing classification accuracy with strict latency (<100ms) and memory (<320KB) constraints. Using a single-subject dataset comprising 1,540 seconds of raw data (1.54M data points, segmented into ~1,300 one-second windows), I evaluate 18 model architectures, ranging from statistical heuristics to deep transfer learning (ResNet50) and custom hybrid networks (MaxCRNN). While my custom "MaxCRNN" (Inception + Bi-LSTM + Attention) achieved the highest safety (99% Precision) and robustness, I identify Random Forest (74% accuracy) as the Pareto-optimal solution for embedded control on legacy microcontrollers. I demonstrate that reliable, low-latency EMG control is feasible on commodity hardware, with Deep Learning offering a path to near-perfect reliability on modern Edge AI accelerators.
|
https://arxiv.org/abs/2601.06516
|
Academic Papers
|
svg
|
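Pareto-optimality over accuracy, latency, and memory — the selection criterion the abstract above applies — can be computed directly. The numbers below are hypothetical placeholders, except the Random Forest accuracy of 0.74 quoted in the abstract.

```python
def pareto_front(models):
    """models: list of (name, accuracy, latency_ms, memory_kb).
    A model is Pareto-optimal if no other model is at least as good on
    every axis (higher accuracy, lower latency, lower memory) and
    strictly better on at least one."""
    def dominates(a, b):
        ge = a[1] >= b[1] and a[2] <= b[2] and a[3] <= b[3]
        gt = a[1] > b[1] or a[2] < b[2] or a[3] < b[3]
        return ge and gt
    return [m for m in models if not any(dominates(o, m) for o in models)]

# Illustrative trade-off table (latency/memory figures are made up)
models = [
    ("RandomForest", 0.74,  20,  150),
    ("MaxCRNN",      0.95, 400, 2000),
    ("ResNet50",     0.90, 600, 4000),  # dominated by MaxCRNN on all axes
    ("Heuristic",    0.55,   5,   10),
]
front = pareto_front(models)
names = {m[0] for m in front}
assert "ResNet50" not in names
assert {"RandomForest", "MaxCRNN", "Heuristic"} <= names
```

Under a hard latency budget such as the 100 ms constraint in the abstract, the choice then reduces to the most accurate member of the front that meets the budget, which is how a Random Forest-style model can win despite lower raw accuracy.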
2f44fffa9bbb21dc6bef937231e4d55416d09529eb71dc10751a9d0d8de0f480
|
2026-01-13T00:00:00-05:00
|
Bridging Robustness and Efficiency: Real-Time Low-Light Enhancement via Attention U-Net GAN
|
arXiv:2601.06518v1 Announce Type: new Abstract: Recent advancements in Low-Light Image Enhancement (LLIE) have focused heavily on Diffusion Probabilistic Models, which achieve high perceptual quality but suffer from significant computational latency (often exceeding 2-4 seconds per image). Conversely, traditional CNN-based baselines offer real-time inference but struggle with "over-smoothing," failing to recover fine structural details in extreme low-light conditions. This creates a practical gap in the literature: the lack of a model that provides generative-level texture recovery at edge-deployable speeds. In this paper, we address this trade-off by proposing a hybrid Attention U-Net GAN. We demonstrate that the heavy iterative sampling of diffusion models is not strictly necessary for texture recovery. Instead, by integrating Attention Gates into a lightweight U-Net backbone and training within a conditional adversarial framework, we can approximate the high-frequency fidelity of generative models in a single forward pass. Extensive experiments on the SID dataset show that our method achieves a best-in-class LPIPS score of 0.112 among efficient models, significantly outperforming efficient baselines (SID, EnlightenGAN) while maintaining an inference latency of 0.06s. This represents a 40x speedup over latent diffusion models, making our approach suitable for near real-time applications.
|
https://arxiv.org/abs/2601.06518
|
Academic Papers
|
svg
|
78896be96972779ff1f71eb87c3532e253e7f01597cb9c3e02550893971bc851
|
2026-01-13T00:00:00-05:00
|
MedRAGChecker: Claim-Level Verification for Biomedical Retrieval-Augmented Generation
|
arXiv:2601.06519v1 Announce Type: new Abstract: Biomedical retrieval-augmented generation (RAG) can ground LLM answers in medical literature, yet long-form outputs often contain isolated unsupported or contradictory claims with safety implications. We introduce MedRAGChecker, a claim-level verification and diagnostic framework for biomedical RAG. Given a question, retrieved evidence, and a generated answer, MedRAGChecker decomposes the answer into atomic claims and estimates claim support by combining evidence-grounded natural language inference (NLI) with biomedical knowledge-graph (KG) consistency signals. Aggregating claim decisions yields answer-level diagnostics that help disentangle retrieval and generation failures, including faithfulness, under-evidence, contradiction, and safety-critical error rates. To enable scalable evaluation, we distill the pipeline into compact biomedical models and use an ensemble verifier with class-specific reliability weighting. Experiments on four biomedical QA benchmarks show that MedRAGChecker reliably flags unsupported and contradicted claims and reveals distinct risk profiles across generators, particularly on safety-critical biomedical relations.
|
https://arxiv.org/abs/2601.06519
|
Academic Papers
|
svg
|