| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 35be2a1badc9840da7bc92183b761b450414004a77d60667ea5e11a37b4192a6 | 2026-01-21T00:00:00-05:00 | AgenticRed: Optimizing Agentic Systems for Automated Red-teaming | arXiv:2601.13518v1 Announce Type: new Abstract: While recent automated red-teaming methods show promise for systematically exposing model vulnerabilities, most existing approaches rely on human-specified workflows. This dependence on manually designed workflows suffers from human biases and makes exploring the broader design space expensive. We introduce AgenticRed, an automated pipeline that leverages LLMs' in-context learning to iteratively design and refine red-teaming systems without human intervention. Rather than optimizing attacker policies within predefined structures, AgenticRed treats red-teaming as a system design problem. Inspired by methods like Meta Agent Search, we develop a novel procedure for evolving agentic systems using evolutionary selection, and apply it to the problem of automatic red-teaming. Red-teaming systems designed by AgenticRed consistently outperform state-of-the-art approaches, achieving 96% attack success rate (ASR) on Llama-2-7B (36% improvement) and 98% on Llama-3-8B on HarmBench. Our approach exhibits strong transferability to proprietary models, achieving 100% ASR on GPT-3.5-Turbo and GPT-4o-mini, and 60% on Claude-Sonnet-3.5 (24% improvement). This work highlights automated system design as a powerful paradigm for AI safety evaluation that can keep pace with rapidly evolving models. | https://arxiv.org/abs/2601.13518 | Academic Papers | svg |
| 1f5e312d61e5640ed4d06f6c10a456cb39ec49d6d16b54a8e8113e75b207033d | 2026-01-21T00:00:00-05:00 | Sticky Help, Bounded Effects: Session-by-Session Analytics of Teacher Interventions in K-12 Classrooms | arXiv:2601.13520v1 Announce Type: new Abstract: Teachers' in-the-moment support is a limited resource in technology-supported classrooms, and teachers must decide whom to help and when during ongoing student work. However, less is known about how students' prior help history (whether they were helped earlier) and their engagement states (e.g., idle, struggle) shape teachers' decisions, and whether observed learning benefits associated with teacher help extend beyond the current class session. To address these questions, we first conducted interviews with nine K-12 mathematics teachers to identify candidate decision factors for teacher help. We then analyzed 1.4 million student-system interactions from 339 students across 14 classes in the MATHia intelligent tutoring system by linking teacher-logged help events with fine-grained engagement states. Mixed-effects models show that students who received help earlier were more likely to receive additional help later, even after accounting for current engagement state. Cross-lagged panel analyses further show that teacher help recurred across sessions, whereas idle behavior did not receive sustained attention over time. Finally, help coincided with immediate learning within sessions, but did not predict skill acquisition in later sessions, as estimated by additive factor modeling. These findings suggest that teacher help is "sticky" in that it recurs for previously supported students, while its measurable learning benefits in our data are largely session-bound. We discuss implications for designing real-time analytics that track attention coverage and highlight under-visited students to support a more equitable and effective allocation of teacher attention. | https://arxiv.org/abs/2601.13520 | Academic Papers | svg |
| 7265e322d003ca2e761251b234eae1c683b4837e6e20188bca97e2649063abcf | 2026-01-21T00:00:00-05:00 | StoTAM: Stochastic Alternating Minimization for Tucker-Structured Tensor Sensing | arXiv:2601.13522v1 Announce Type: new Abstract: Low-rank tensor sensing is a fundamental problem with broad applications in signal processing and machine learning. Among various tensor models, low-Tucker-rank tensors are particularly attractive for capturing multi-mode subspace structures in high-dimensional data. Existing recovery methods either operate on the full tensor variable with expensive tensor projections, or adopt factorized formulations that still rely on full-gradient computations, while most stochastic factorized approaches are restricted to tensor decomposition settings. In this work, we propose a stochastic alternating minimization algorithm that operates directly on the core tensor and factor matrices under a Tucker factorization. The proposed method avoids repeated tensor projections and enables efficient mini-batch updates on low-dimensional tensor factors. Numerical experiments on synthetic tensor sensing demonstrate that the proposed algorithm exhibits favorable convergence behavior in wall-clock time compared with representative stochastic tensor recovery baselines. | https://arxiv.org/abs/2601.13522 | Academic Papers | svg |
| 40bf3d19438daac4e15f49af02ab1c6709589aac9d5db1adef804818791c29df | 2026-01-21T00:00:00-05:00 | GO-MLVTON: Garment Occlusion-Aware Multi-Layer Virtual Try-On with Diffusion Models | arXiv:2601.13524v1 Announce Type: new Abstract: Existing image-based virtual try-on (VTON) methods primarily focus on single-layer or multi-garment VTON, neglecting multi-layer VTON (ML-VTON), which involves dressing multiple layers of garments onto the human body with realistic deformation and layering to generate visually plausible outcomes. The main challenge lies in accurately modeling occlusion relationships between inner and outer garments to reduce interference from redundant inner garment features. To address this, we propose GO-MLVTON, the first multi-layer VTON method, introducing the Garment Occlusion Learning module to learn occlusion relationships and the StableDiffusion-based Garment Morphing & Fitting module to deform and fit garments onto the human body, producing high-quality multi-layer try-on results. Additionally, we present the MLG dataset for this task and propose a new metric named Layered Appearance Coherence Difference (LACD) for evaluation. Extensive experiments demonstrate the state-of-the-art performance of GO-MLVTON. Project page: https://upyuyang.github.io/go-mlvton/. | https://arxiv.org/abs/2601.13524 | Academic Papers | svg |
| d32ecdb95ef81fad418862e539f0c0ee3effdf5ed6dea6624986e3f950273009 | 2026-01-21T00:00:00-05:00 | More Than Efficiency: Embedding Compression Improves Domain Adaptation in Dense Retrieval | arXiv:2601.13525v1 Announce Type: new Abstract: Dense retrievers powered by pretrained embeddings are widely used for document retrieval but struggle in specialized domains due to the mismatches between the training and target domain distributions. Domain adaptation typically requires costly annotation and retraining of query-document pairs. In this work, we revisit an overlooked alternative: applying PCA to domain embeddings to derive lower-dimensional representations that preserve domain-relevant features while discarding non-discriminative components. Though traditionally used for efficiency, we demonstrate that this simple embedding compression can effectively improve retrieval performance. Evaluated across 9 retrievers and 14 MTEB datasets, PCA applied solely to query embeddings improves NDCG@10 in 75.4% of model-dataset pairs, offering a simple and lightweight method for domain adaptation. | https://arxiv.org/abs/2601.13525 | Academic Papers | svg |
| ee13db51416415435411ec21fddc163816e1899f3da245b5058253e94d540f56 | 2026-01-21T00:00:00-05:00 | Eliciting Harmful Capabilities by Fine-Tuning On Safeguarded Outputs | arXiv:2601.13528v1 Announce Type: new Abstract: Model developers implement safeguards in frontier models to prevent misuse, for example, by employing classifiers to filter dangerous outputs. In this work, we demonstrate that even robustly safeguarded models can be used to elicit harmful capabilities in open-source models through elicitation attacks. Our elicitation attacks consist of three stages: (i) constructing prompts in adjacent domains to a target harmful task that do not request dangerous information; (ii) obtaining responses to these prompts from safeguarded frontier models; (iii) fine-tuning open-source models on these prompt-output pairs. Since the requested prompts cannot be used to directly cause harm, they are not refused by frontier model safeguards. We evaluate these elicitation attacks within the domain of hazardous chemical synthesis and processing, and demonstrate that our attacks recover approximately 40% of the capability gap between the base open-source model and an unrestricted frontier model. We then show that the efficacy of elicitation attacks scales with the capability of the frontier model and the amount of generated fine-tuning data. Our work demonstrates the challenge of mitigating ecosystem level risks with output-level safeguards. | https://arxiv.org/abs/2601.13528 | Academic Papers | svg |
| 780099bd91cfd58022e3754192e4df970df525169679a550b60eed8c0a2f37f6 | 2026-01-21T00:00:00-05:00 | The OncoReach Stylet for Brachytherapy: Design Evaluation and Pilot Study | arXiv:2601.13529v1 Announce Type: new Abstract: Cervical cancer accounts for a significant portion of the global cancer burden among women. Interstitial brachytherapy (ISBT) is a standard procedure for treating cervical cancer; it involves placing a radioactive source through a straight hollow needle within or in close proximity to the tumor and surrounding tissue. However, the use of straight needles limits surgical planning to a linear needle path. We present the OncoReach stylet, a handheld, tendon-driven steerable stylet designed for compatibility with standard ISBT 15- and 13-gauge needles. Building upon our prior work, we evaluated design parameters like needle gauge, spherical joint count and spherical joint placement, including an asymmetric disk design, to identify a configuration that maximizes bending compliance while retaining axial stiffness. Free space experiments quantified tip deflection across configurations, and a two-tube Cosserat rod model accurately predicted the centerline shape of the needle for most trials. The best performing configuration was integrated into a reusable handheld prototype that enables manual actuation. A patient-derived, multi-composite phantom model of the uterus and pelvis was developed to conduct a pilot study of the OncoReach steerable stylet with one expert user. Results showed the ability to steer from less-invasive, medial entry points to reach the lateral-most targets, underscoring the significance of steerable stylets. | https://arxiv.org/abs/2601.13529 | Academic Papers | svg |
| b73c5533d1e8ae17f1f8a0be94c99c35886e5ecb22bd2c59208a97c0a7c14340 | 2026-01-21T00:00:00-05:00 | Reasoning While Recommending: Entropy-Guided Latent Reasoning in Generative Re-ranking Models | arXiv:2601.13533v1 Announce Type: new Abstract: Reinforcement learning plays a crucial role in generative re-ranking scenarios due to its exploration-exploitation capabilities, but existing generative methods mostly fail to adapt to the dynamic entropy changes in model difficulty during list generation, making it challenging to accurately capture complex preferences. Given that language models have achieved remarkable breakthroughs by integrating reasoning capabilities, we draw on this approach to introduce a latent reasoning mechanism, and experimental validation demonstrates that this mechanism effectively reduces entropy in the model's decision-making process. Based on these findings, we introduce the Entropy-Guided Latent Reasoning (EGLR) recommendation model, which has three core advantages. First, it abandons the "reason first, recommend later" paradigm to achieve "reasoning while recommending", specifically designed for the high-difficulty nature of list generation by enabling real-time reasoning during generation. Second, it implements entropy-guided variable-length reasoning using context-aware reasoning tokens alongside dynamic temperature adjustment, expanding exploration breadth in reasoning and boosting exploitation precision in recommending to achieve a more precisely adapted exploration-exploitation trade-off. Third, the model adopts a lightweight integration design with no complex independent modules or post-processing, enabling easy adaptation to existing models. Experimental results on two real-world datasets validate the model's effectiveness, and its notable advantage lies in being compatible with existing generative re-ranking models to enhance their performance. Further analyses also demonstrate its practical deployment value and research potential. | https://arxiv.org/abs/2601.13533 | Academic Papers | svg |
| f75a274da7167dc3e78e5749e7ed53d9433847f7d0dd842b01f78a20539ebc76 | 2026-01-21T00:00:00-05:00 | MN-TSG: Continuous Time Series Generation with Irregular Observations | arXiv:2601.13534v1 Announce Type: new Abstract: Time series generation (TSG) plays a critical role in a wide range of domains, such as healthcare. However, most existing methods assume regularly sampled observations and fixed output resolutions, which are often misaligned with real-world scenarios where data are irregularly sampled and sparsely observed. This mismatch is particularly problematic in applications such as clinical monitoring, where irregular measurements must support downstream tasks requiring continuous and high-resolution time series. Neural Controlled Differential Equations (NCDEs) have shown strong potential for modeling irregular time series, yet they still face challenges in capturing complex dynamic temporal patterns and supporting continuous TSG. To address these limitations, we propose MN-TSG, a novel framework that explores Mixture-of-Experts (MoE)-based NCDEs and integrates them with existing TSG models for irregular and continuous generation tasks. The core of MN-TSG lies in a MoE-NCDE architecture with dynamically parameterized expert functions and a decoupled design that facilitates more effective optimization of MoE dynamics. Furthermore, we leverage existing TSG models to learn the joint distribution over the mixture of experts and the generated time series. This enables the framework not only to generate new samples, but also to produce appropriate expert configurations tailored to each sample, thereby supporting refined continuous TSG. Extensive experiments on ten public and synthetic datasets demonstrate the effectiveness of MN-TSG, consistently outperforming strong TSG baselines on both irregular-to-regular and irregular-to-continuous generation tasks. | https://arxiv.org/abs/2601.13534 | Academic Papers | svg |
| 221b0b2172c2cc3de95247eca48afc78928990cd247c885146dce5dfe6843888 | 2026-01-21T00:00:00-05:00 | Sparse Identification of Nonlinear Distributed-Delay Dynamics via the Linear Chain Trick | arXiv:2601.13536v1 Announce Type: new Abstract: The Sparse Identification of Nonlinear Dynamics (SINDy) framework has been frequently used to discover parsimonious differential equations governing natural and physical systems. This includes recent extensions to SINDy that enable the recovery of discrete delay differential equations, where delay terms are represented explicitly in the candidate library. However, such formulations cannot capture the distributed delays that naturally arise in biological, physical, and engineering systems. In the present work, we extend SINDy to identify distributed-delay differential equations by incorporating the Linear Chain Trick (LCT), which provides a finite-dimensional ordinary differential equation representing the distributed memory effects. Hence, SINDy can operate in an augmented state space using conventional sparse regression while preserving a clear interpretation of delayed influences via the chain trick. From time-series data, the proposed method jointly infers the governing equations, the mean delay, and the dispersion of the underlying delay distribution. We numerically verify the method on several models with distributed delay, including the logistic growth model and a Hes1--mRNA gene regulatory network model. We show that the proposed method accurately reconstructs distributed delay dynamics, remains robust under noise and sparse sampling, and provides a transparent, data-driven approach for discovering nonlinear systems with distributed delays. | https://arxiv.org/abs/2601.13536 | Academic Papers | svg |
| 3c7cb3751b557a1b9c4a6629f5b96006413e749c232b454fa1b9ee2dc201e28f | 2026-01-21T00:00:00-05:00 | When Wording Steers the Evaluation: Framing Bias in LLM judges | arXiv:2601.13537v1 Announce Type: new Abstract: Large language models (LLMs) are known to produce varying responses depending on prompt phrasing, indicating that subtle guidance in phrasing can steer their answers. However, the impact of this framing bias on LLM-based evaluation, where models are expected to make stable and impartial judgments, remains largely underexplored. Drawing inspiration from the framing effect in psychology, we systematically investigate how deliberate prompt framing skews model judgments across four high-stakes evaluation tasks. We design symmetric prompts using predicate-positive and predicate-negative constructions and demonstrate that such framing induces significant discrepancies in model outputs. Across 14 LLM judges, we observe clear susceptibility to framing, with model families showing distinct tendencies toward agreement or rejection. These findings suggest that framing bias is a structural property of current LLM-based evaluation systems, underscoring the need for framing-aware protocols. | https://arxiv.org/abs/2601.13537 | Academic Papers | svg |
| 7a6d3516a1834fb9e92ce830cbc3626e1c35d2381d52378fb45ef2d5822295ef | 2026-01-21T00:00:00-05:00 | LongSpeech: A Scalable Benchmark for Transcription, Translation and Understanding in Long Speech | arXiv:2601.13539v1 Announce Type: new Abstract: Recent advances in audio-language models have demonstrated remarkable success on short, segment-level speech tasks. However, real-world applications such as meeting transcription, spoken document understanding, and conversational analysis require robust models capable of processing and reasoning over long-form audio. In this work, we present LongSpeech, a large-scale and scalable benchmark specifically designed to evaluate and advance the capabilities of speech models on long-duration audio. LongSpeech comprises over 100,000 speech segments, each approximately 10 minutes long, with rich annotations for ASR, speech translation, summarization, language detection, speaker counting, content separation, and question answering. We introduce a reproducible pipeline for constructing long-form speech benchmarks from diverse sources, enabling future extensions. Our initial experiments with state-of-the-art models reveal significant performance gaps, with models often specializing in one task at the expense of others and struggling with higher-level reasoning. These findings underscore the challenging nature of our benchmark. Our benchmark will be made publicly available to the research community. | https://arxiv.org/abs/2601.13539 | Academic Papers | svg |
| d3924bfd788c9ac9a128864c7ce833fef88034339823fe6f1f314adcb26985cb | 2026-01-21T00:00:00-05:00 | A hybrid numerical method for a microscopic and macroscopic traffic flow model | arXiv:2601.13541v1 Announce Type: new Abstract: In this paper, we introduce a traffic flow model based on a microscopic follow-the-leader model, while enforcing maximal constraints on the density and velocity of the flow. The related macroscopic model can be represented in conservative formulation. By introducing an advected variable up with the flow, where p is the velocity offset and u is the relative velocity, we reformulate the classical Aw-Rascle-Zhang (ARZ) model and the modified Aw-Rascle model to describe realistic fundamental diagrams. The elementary waves are derived, and the Riemann problem is solved to validate the model's theoretical consistency. We further extend to a two-dimensional model. Numerical simulations are given for both the one- and two-dimensional cases by using the hybrid Godunov-Glimm scheme to verify the model's performance. | https://arxiv.org/abs/2601.13541 | Academic Papers | svg |
| 471da7ecb4d27ec985222ab277b4986822b7664ffc1a876663054507fc475432 | 2026-01-21T00:00:00-05:00 | TruthTensor: Evaluating LLMs Human Imitation through Prediction Market Drift and Holistic Reasoning | arXiv:2601.13545v1 Announce Type: new Abstract: Evaluating language models and AI agents remains fundamentally challenging because static benchmarks fail to capture real-world uncertainty, distribution shift, and the gap between isolated task accuracy and human-aligned decision-making under evolving conditions. This paper introduces TruthTensor, a novel, reproducible evaluation paradigm that measures Large Language Models (LLMs) not only as prediction engines but as human-imitation systems operating in socially-grounded, high-entropy environments. Building on forward-looking, contamination-free tasks, our framework anchors evaluation to live prediction markets and combines probabilistic scoring to provide a holistic view of model behavior. TruthTensor complements traditional correctness metrics with drift-centric diagnostics and explicit robustness checks for reproducibility. It specifies human vs. automated evaluation roles, annotation protocols, and statistical testing procedures to ensure interpretability and replicability of results. In experiments across 500+ real markets (political, economic, cultural, technological), TruthTensor demonstrates that models with similar forecast accuracy can diverge markedly in calibration, drift, and risk-sensitivity, underscoring the need to evaluate models along multiple axes (accuracy, calibration, narrative stability, cost, and resource efficiency). TruthTensor therefore operationalizes modern evaluation best practices (clear hypothesis framing, careful metric selection, transparent compute/cost reporting, human-in-the-loop validation, and open, versioned evaluation contracts) to produce defensible assessments of LLMs in real-world decision contexts. We publicly release TruthTensor at https://truthtensor.com | https://arxiv.org/abs/2601.13545 | Academic Papers | svg |
| 29f8efdc75dbe0f488039188660e25ff530df3ffc3508a63e6fe02a84efdcdbc | 2026-01-21T00:00:00-05:00 | ChatAD: Reasoning-Enhanced Time-Series Anomaly Detection with Multi-Turn Instruction Evolution | arXiv:2601.13546v1 Announce Type: new Abstract: LLM-driven Anomaly Detection (AD) helps enhance the understanding and explanatory abilities of anomalous behaviors in Time Series (TS). Existing methods face challenges of inadequate reasoning ability, deficient multi-turn dialogue capability, and narrow generalization. To this end, we 1) propose a multi-agent-based TS Evolution algorithm named TSEvol. On top of it, we 2) introduce the AD reasoning and multi-turn dialogue Dataset TSEData-20K and contribute the Chatbot family for AD, including ChatAD-Llama3-8B, Qwen2.5-7B, and Mistral-7B. Furthermore, 3) we propose the TS Kahneman-Tversky Optimization (TKTO) to enhance ChatAD's cross-task generalization capability. Lastly, 4) we propose a LLM-driven Learning-based AD Benchmark LLADBench to evaluate the performance of ChatAD and nine baselines across seven datasets and tasks. Our three ChatAD models achieve substantial gains, up to 34.50% in accuracy, 34.71% in F1, and a 37.42% reduction in false positives. Besides, via TKTO, our optimized ChatAD achieves competitive performance in reasoning and cross-task generalization on classification, forecasting, and imputation. | https://arxiv.org/abs/2601.13546 | Academic Papers | svg |
| 282372761012b7ee6e46a2c5adb0284d68f16aa457e3d65591eeb594b9b143be | 2026-01-21T00:00:00-05:00 | HateXScore: A Metric Suite for Evaluating Reasoning Quality in Hate Speech Explanations | arXiv:2601.13547v1 Announce Type: new Abstract: Hateful speech detection is a key component of content moderation, yet current evaluation frameworks rarely assess why a text is deemed hateful. We introduce HateXScore, a four-component metric suite designed to evaluate the reasoning quality of model explanations. It assesses (i) conclusion explicitness, (ii) faithfulness and causal grounding of quoted spans, (iii) protected group identification (policy-configurable), and (iv) logical consistency among these elements. Evaluated on six diverse hate speech datasets, HateXScore is intended as a diagnostic complement to reveal interpretability failures and annotation inconsistencies that are invisible to standard metrics like Accuracy or F1. Moreover, human evaluation shows strong agreement with HateXScore, validating it as a practical tool for trustworthy and transparent moderation. Disclaimer: This paper contains sensitive content that may be disturbing to some readers. | https://arxiv.org/abs/2601.13547 | Academic Papers | svg |
| 4ddfb6d93ee7a95841fd55c4c4e3e05a2f8ce24c5a3bc5d974ced87a5f0f8755 | 2026-01-21T00:00:00-05:00 | Patterning: The Dual of Interpretability | arXiv:2601.13548v1 Announce Type: new Abstract: Mechanistic interpretability aims to understand how neural networks generalize beyond their training data by reverse-engineering their internal structures. We introduce patterning as the dual problem: given a desired form of generalization, determine what training data produces it. Our approach is based on susceptibilities, which measure how posterior expectation values of observables respond to infinitesimal shifts in the data distribution. Inverting this linear response relationship yields the data intervention that steers the model toward a target internal configuration. We demonstrate patterning in a small language model, showing that re-weighting training data along principal susceptibility directions can accelerate or delay the formation of structure, such as the induction circuit. In a synthetic parentheses balancing task where multiple algorithms achieve perfect training accuracy, we show that patterning can select which algorithm the model learns by targeting the local learning coefficient of each solution. These results establish that the same mathematical framework used to read internal structure can be inverted to write it. | https://arxiv.org/abs/2601.13548 | Academic Papers | svg |
| cd6acab5bf8cb079d46f05859afeefd347a1ebac286470495ecee6482a1b1063 | 2026-01-21T00:00:00-05:00 | DiffFace-Edit: A Diffusion-Based Facial Dataset for Forgery-Semantic Driven Deepfake Detection Analysis | arXiv:2601.13551v1 Announce Type: new Abstract: Generative models now produce imperceptible, fine-grained manipulated faces, posing significant privacy risks. However, existing AI-generated face datasets generally lack focus on samples with fine-grained regional manipulations. Furthermore, no researchers have yet studied the real impact of splice attacks, which occur between real and manipulated samples, on detectors. We refer to these as detector-evasive samples. Based on this, we introduce the DiffFace-Edit dataset, which has the following advantages: 1) It contains over two million AI-generated fake images. 2) It features edits across eight facial regions (e.g., eyes, nose) and includes a richer variety of editing combinations, such as single-region and multi-region edits. Additionally, we specifically analyze the impact of detector-evasive samples on detection models. We conduct a comprehensive analysis of the dataset and propose a cross-domain evaluation that combines IMDL methods. Dataset will be available at https://github.com/ywh1093/DiffFace-Edit. | https://arxiv.org/abs/2601.13551 | Academic Papers | svg |
| 64d52ee189bc1f96b1e2dd6a2de4bff016e72ea8dc26180735bfc817c98257fb | 2026-01-21T00:00:00-05:00 | LogicEnvGen: Task-Logic Driven Generation of Diverse Simulated Environments for Embodied AI | arXiv:2601.13556v1 Announce Type: new Abstract: Simulated environments play an essential role in embodied AI, functionally analogous to test cases in software engineering. However, existing environment generation methods often emphasize visual realism (e.g., object diversity and layout coherence), overlooking a crucial aspect: logical diversity from the testing perspective. This limits the comprehensive evaluation of agent adaptability and planning robustness in distinct simulated environments. To bridge this gap, we propose LogicEnvGen, a novel method driven by Large Language Models (LLMs) that adopts a top-down paradigm to generate logically diverse simulated environments as test cases for agents. Given an agent task, LogicEnvGen first analyzes its execution logic to construct decision-tree-structured behavior plans and then synthesizes a set of logical trajectories. Subsequently, it adopts a heuristic algorithm to refine the trajectory set, reducing redundant simulation. For each logical trajectory, which represents a potential task situation, LogicEnvGen correspondingly instantiates a concrete environment. Notably, it employs constraint solving for physical plausibility. Furthermore, we introduce LogicEnvEval, a novel benchmark comprising four quantitative metrics for environment evaluation. Experimental results verify the lack of logical diversity in baselines and demonstrate that LogicEnvGen achieves 1.04-2.61x greater diversity, significantly improving the performance in revealing agent faults by 4.00%-68.00%. | https://arxiv.org/abs/2601.13556 | Academic Papers | svg |
| 312b7f82a7a971abe9c493cb5ce7775c48f10892361c3db7c69fbceccb034824 | 2026-01-21T00:00:00-05:00 | Leveraging ChatGPT and Other NLP Methods for Identifying Risk and Protective Behaviors in MSM: Social Media and Dating apps Text Analysis | arXiv:2601.13558v1 Announce Type: new Abstract: Men who have sex with men (MSM) are at elevated risk for sexually transmitted infections and harmful drinking compared to heterosexual men. Text data collected from social media and dating applications may provide new opportunities for personalized public health interventions by enabling automatic identification of risk and protective behaviors. In this study, we evaluated whether text from social media and dating apps can be used to predict sexual risk behaviors, alcohol use, and pre-exposure prophylaxis (PrEP) uptake among MSM. With participant consent, we collected textual data and trained machine learning models using features derived from ChatGPT embeddings, BERT embeddings, LIWC, and a dictionary-based risk term approach. The models achieved strong performance in predicting monthly binge drinking and having more than five sexual partners, with F1 scores of 0.78, and moderate performance in predicting PrEP use and heavy drinking, with F1 scores of 0.64 and 0.63. These findings demonstrate that social media and dating app text data can provide valuable insights into risk and protective behaviors and highlight the potential of large language model-based methods to support scalable and personalized public health interventions for MSM. | https://arxiv.org/abs/2601.13558 | Academic Papers | svg |
| ab0f2fae08049d9264829b324f18be54fe5241e554062e8c0c17101034b44b50 | 2026-01-21T00:00:00-05:00 | AgentGC: Evolutionary Learning-based Lossless Compression for Genomics Data with LLM-driven Multiple Agent | arXiv:2601.13559v1 Announce Type: new Abstract: Lossless compression has made significant advancements in Genomics Data (GD) storage, sharing and management. Current learning-based methods are non-evolvable, with problems of low-level compression modeling, limited adaptability, and a user-unfriendly interface. To this end, we propose AgentGC, the first evolutionary Agent-based GD Compressor, consisting of 3 layers with multi-agent roles named Leader and Worker. Specifically, the 1) User layer provides a user-friendly interface via the Leader combined with an LLM; 2) Cognitive layer, driven by the Leader, integrates an LLM to consider joint optimization of algorithm-dataset-system, addressing the issues of low-level modeling and limited adaptability; and 3) Compression layer, headed by the Worker, performs compression & decompression via an automated multi-knowledge learning-based compression framework. On top of AgentGC, we design 3 modes to support diverse scenarios: CP for compression-ratio priority, TP for throughput priority, and BM for balanced mode. Compared with 14 baselines on 9 datasets, the average compression ratio gains are 16.66%, 16.11%, and 16.33%, and the throughput gains are 4.73x, 9.23x, and 9.15x, respectively. | https://arxiv.org/abs/2601.13559 | Academic Papers | svg |
1cbd13c82b4de45b383e6344f37ec4fe0a125e482c6c29200f288fec37e4b7cf
|
2026-01-21T00:00:00-05:00
|
Reasoning is a Modality
|
arXiv:2601.13562v1 Announce Type: new Abstract: The Abstraction and Reasoning Corpus (ARC) provides a compact laboratory for studying abstract reasoning, an ability central to human intelligence. Modern AI systems, including LLMs and ViTs, largely operate as sequence-of-behavior prediction machines: they match observable behaviors by modeling token statistics without a persistent, readable mental state. This creates a gap with human-like behavior: humans can explain an action by decoding internal state, while AI systems can produce fluent post-hoc rationalizations that are not grounded in such a state. We hypothesize that reasoning is a modality: reasoning should exist as a distinct channel separate from the low-level workspace on which rules are applied. To test this hypothesis, on solving ARC tasks as a visual reasoning problem, we designed a novel role-separated transformer block that splits global controller tokens from grid workspace tokens, enabling iterative rule execution. Trained and evaluated within the VARC vision-centric protocol, our method achieved 62.6% accuracy on ARC-1, surpassing average human performance (60.2%) and outperforming prior methods significantly. Qualitatively, our models exhibit more coherent rule-application structure than the dense ViT baseline, consistent with a shift away from plausible probability blobs toward controller-driven reasoning.
|
https://arxiv.org/abs/2601.13562
|
Academic Papers
|
svg
|
63c2973382c5fec7c79815235e8d7fec46b2d830bf25114c87c921d6dec661ad
|
2026-01-21T00:00:00-05:00
|
ButterflyMoE: Sub-Linear Ternary Experts via Structured Butterfly Orbits
|
arXiv:2601.13563v1 Announce Type: new Abstract: Standard Mixture-of-Experts layers store $N$ independent expert weight matrices, requiring $\mathcal{O}(N \cdot d^2)$ memory, which exceeds the memory budgets of edge devices. Current compression methods like quantization, pruning and low-rank factorization reduce constant factors but leave the scaling bottleneck unresolved. We introduce ButterflyMoE, a method that treats experts not as independent weight matrices but as geometric reorientations of a unified shared quantized substrate. Diversity among experts arises from viewing different angles of shared capacity, not from redundant storage. By applying learned rotations to a shared ternary prototype, the full expert set requires only $\mathcal{O}(d^2 + N \cdot d \log d)$ memory -- sub-linear in the number of experts. The key insight: training these rotations with quantization reduces activation outliers and stabilizes extreme low-bit training, where static methods collapse. Across language modeling benchmarks, ButterflyMoE achieves a 150-fold memory reduction at 256 experts with negligible accuracy loss. This allows 64 experts to fit on 4GB devices compared to standard MoE's 8 experts, showing that geometric parametrization breaks linear scaling.
|
https://arxiv.org/abs/2601.13563
|
Academic Papers
|
svg
|
93aa8977ca18850cb6ca6c7c3be1233001d75eba61e6a698d8527d3b7791e4ee
|
2026-01-21T00:00:00-05:00
|
Multi-objective fluorescent molecule design with a data-physics dual-driven generative framework
|
arXiv:2601.13564v1 Announce Type: new Abstract: Designing fluorescent small molecules with tailored optical and physicochemical properties requires navigating vast, underexplored chemical space while satisfying multiple objectives and constraints. Conventional generate-score-screen approaches become impractical under such realistic design specifications, owing to their low search efficiency, unreliable generalizability of machine-learning prediction, and the prohibitive cost of quantum chemical calculation. Here we present LUMOS, a data-and-physics driven framework for inverse design of fluorescent molecules. LUMOS couples generator and predictor within a shared latent representation, enabling direct specification-to-molecule design and efficient exploration. Moreover, LUMOS combines neural networks with a fast time-dependent density functional theory (TD-DFT) calculation workflow to build a suite of complementary predictors spanning different trade-offs in speed, accuracy, and generalizability, enabling reliable property prediction across diverse scenarios. Finally, LUMOS employs a property-guided diffusion model integrated with multi-objective evolutionary algorithms, enabling de novo design and molecular optimization under multiple objectives and constraints. Across comprehensive benchmarks, LUMOS consistently outperforms baseline models in terms of accuracy, generalizability and physical plausibility for fluorescence property prediction, and demonstrates superior performance in multi-objective scaffold- and fragment-level molecular optimization. Further validation using TD-DFT and molecular dynamics (MD) simulations demonstrates that LUMOS can generate valid fluorophores that meet various target specifications. Overall, these results establish LUMOS as a data-physics dual-driven framework for general fluorophore inverse design.
|
https://arxiv.org/abs/2601.13564
|
Academic Papers
|
svg
|
897d528ccf31065f80a59192c500304bdb03260567850efaa5a9175ebd37cc46
|
2026-01-21T00:00:00-05:00
|
Learning Fine-Grained Correspondence with Cross-Perspective Perception for Open-Vocabulary 6D Object Pose Estimation
|
arXiv:2601.13565v1 Announce Type: new Abstract: Open-vocabulary 6D object pose estimation empowers robots to manipulate arbitrary unseen objects guided solely by natural language. However, a critical limitation of existing approaches is their reliance on unconstrained global matching strategies. In open-world scenarios, trying to match anchor features against the entire query image space introduces excessive ambiguity, as target features are easily confused with background distractors. To resolve this, we propose Fine-grained Correspondence Pose Estimation (FiCoP), a framework that transitions from noise-prone global matching to spatially-constrained patch-level correspondence. Our core innovation lies in leveraging a patch-to-patch correlation matrix as a structural prior to narrow the matching scope, effectively filtering out irrelevant clutter to prevent it from degrading pose estimation. Firstly, we introduce an object-centric disentanglement preprocessing to isolate the semantic target from environmental noise. Secondly, a Cross-Perspective Global Perception (CPGP) module is proposed to fuse dual-view features, establishing structural consensus through explicit context reasoning. Finally, we design a Patch Correlation Predictor (PCP) that generates a precise block-wise association map, acting as a spatial filter to enforce fine-grained, noise-resilient matching. Experiments on the REAL275 and Toyota-Light datasets demonstrate that FiCoP improves Average Recall by 8.0% and 6.1%, respectively, compared to the state-of-the-art method, highlighting its capability to deliver robust and generalized perception for robotic agents operating in complex, unconstrained open-world environments. The source code will be made publicly available at https://github.com/zjjqinyu/FiCoP.
|
https://arxiv.org/abs/2601.13565
|
Academic Papers
|
svg
|
71224c3cd6e90dcac5c5225f78ad15c6585003aab6b82562f56720be0045dbcb
|
2026-01-21T00:00:00-05:00
|
Self-Improvement as Coherence Optimization: A Theoretical Account
|
arXiv:2601.13566v1 Announce Type: new Abstract: Can language models improve their accuracy without external supervision? Methods such as debate, bootstrap, and internal coherence maximization achieve this surprising feat, even matching golden finetuning performance. Yet why they work remains theoretically unclear. We show that they are all special cases of coherence optimization: finding a context-to-behavior mapping that's most compressible and jointly predictable. We prove that coherence optimization is equivalent to description-length regularization, and that among all such regularization schemes, it is optimal for semi-supervised learning when the regularizer is derived from a pretrained model. Our theory, supported by preliminary experiments, explains why feedback-free self-improvement works and predicts when it should succeed or fail.
|
https://arxiv.org/abs/2601.13566
|
Academic Papers
|
svg
|
ce1190f53143101a6d0cd4b372198093ecdd3d384cbc6eb9a5892f173c3b075c
|
2026-01-21T00:00:00-05:00
|
DRGW: Learning Disentangled Representations for Robust Graph Watermarking
|
arXiv:2601.13569v1 Announce Type: new Abstract: Graph-structured data is foundational to numerous web applications, and watermarking is crucial for protecting their intellectual property and ensuring data provenance. Existing watermarking methods primarily operate on graph structures or entangled graph representations, which compromise the transparency and robustness of watermarks due to the information coupling in representing graphs and uncontrollable discretization in transforming continuous numerical representations into graph structures. This motivates us to propose DRGW, the first graph watermarking framework that addresses these issues through disentangled representation learning. Specifically, we design an adversarially trained encoder that learns an invariant structural representation against diverse perturbations and derives a statistically independent watermark carrier, ensuring both robustness and transparency of watermarks. Meanwhile, we devise a graph-aware invertible neural network to provide a lossless channel for watermark embedding and extraction, guaranteeing high detectability and transparency of watermarks. Additionally, we develop a structure-aware editor that translates latent-space modifications into discrete graph edits, ensuring robustness against structural perturbations. Experiments on diverse benchmark datasets demonstrate the superior effectiveness of DRGW.
|
https://arxiv.org/abs/2601.13569
|
Academic Papers
|
svg
|
647cdae994eeceeaf8e3964134b8d3f52dbe6c7a117ba8e614d1a03c2f7f8cd6
|
2026-01-21T00:00:00-05:00
|
GeoDynamics: A Geometric State-Space Neural Network for Understanding Brain Dynamics on Riemannian Manifolds
|
arXiv:2601.13570v1 Announce Type: new Abstract: State-space models (SSMs) have become a cornerstone for unraveling brain dynamics, revealing how latent neural states evolve over time and give rise to observed signals. By combining the flexibility of deep learning with the principled dynamical structure of SSMs, recent studies have achieved powerful fits to functional neuroimaging data. However, most existing approaches still view the brain as a set of loosely connected regions or impose oversimplified network priors, falling short of a truly holistic and self-organized dynamical system perspective. Brain functional connectivity (FC) at each time point naturally forms a symmetric positive definite (SPD) matrix, which resides on a curved Riemannian manifold rather than in Euclidean space. Capturing the trajectories of these SPD matrices is key to understanding how coordinated networks support cognition and behavior. To this end, we introduce GeoDynamics, a geometric state-space neural network that tracks latent brain-state trajectories directly on the high-dimensional SPD manifold. GeoDynamics embeds each connectivity matrix into a manifold-aware recurrent framework, learning smooth and geometry-respecting transitions that reveal task-driven state changes and early markers of Alzheimer's disease, Parkinson's disease, and autism. Beyond neuroscience, we validate GeoDynamics on human action recognition benchmarks (UTKinect, Florence, HDM05), demonstrating its scalability and robustness in modeling complex spatiotemporal dynamics across diverse domains.
|
https://arxiv.org/abs/2601.13570
|
Academic Papers
|
svg
|
558fb85c29468313509329e04d7bf88cd5f748069c9a782509cc138467ede12e
|
2026-01-21T00:00:00-05:00
|
Stochastic Dynamic Pricing of Electric Vehicle Charging with Heterogeneous User Behavior: A Stackelberg Game Framework
|
arXiv:2601.13571v1 Announce Type: new Abstract: The rapid adoption of electric vehicles (EVs) introduces complex spatiotemporal demand management challenges for charging station operators (CSOs), exacerbated by demand imbalances, behavioral heterogeneity, and system uncertainty. Traditional dynamic pricing models, often relying on deterministic EV-CS pairings and network equilibrium assumptions, frequently oversimplify user behavior and lack scalability. This study proposes a stochastic, behaviorally heterogeneous dynamic pricing framework formulated as a bi-level Stackelberg game. The upper level optimizes time-varying pricing to maximize system-wide utility, while the lower level models decentralized EV users via a multinomial logit (MNL) choice model incorporating price sensitivity, battery aging, risk attitudes, and network travel costs. Crucially, the model avoids network equilibrium constraints to enhance scalability, with congestion effects represented via queuing-theoretic approximations. To efficiently solve the resulting large-scale optimization problem, a rolling-horizon approach combining the Dynamic Probabilistic Sensitivity Analysis-guided Cross-Entropy Method (PSA-CEM) with the Method of Successive Averages (MSA) is implemented. A real-world case study in Clayton, Melbourne, validates the framework using 22 charging stations. Simulation results demonstrate that the proposed mechanism substantially reduces queuing penalties and improves user utility compared to fixed and time-of-use pricing. The framework provides a robust, scalable tool for strategic EV charging management, balancing realism with computational efficiency.
|
https://arxiv.org/abs/2601.13571
|
Academic Papers
|
svg
|
0685bce678f8eb69050aa2b9b59086839018711f67e5536319484d3ffde06323
|
2026-01-21T00:00:00-05:00
|
Behavior Knowledge Merge in Reinforced Agentic Models
|
arXiv:2601.13572v1 Announce Type: new Abstract: Reinforcement learning (RL) is central to post-training, particularly for agentic models that require specialized reasoning behaviors. In this setting, model merging offers a practical mechanism for integrating multiple RL-trained agents from different tasks into a single generalist model. However, existing merging methods are designed for supervised fine-tuning (SFT), and they are suboptimal for preserving task-specific capabilities in RL-trained agentic models. The root cause is a task-vector mismatch between RL and SFT: on-policy RL induces task vectors that are highly sparse and heterogeneous, whereas SFT-style merging implicitly assumes dense and globally comparable task vectors. When standard global averaging is applied under this mismatch, RL's non-overlapping task vectors that encode critical task-specific behaviors are reduced and parameter updates are diluted. To address this issue, we propose Reinforced Agent Merging (RAM), a distribution-aware merging framework explicitly designed for RL-trained agentic models. RAM disentangles shared and task-specific unique parameter updates, averaging shared components while selectively preserving and rescaling unique ones to counteract parameter update dilution. Experiments across multiple agent domains and model architectures demonstrate that RAM not only surpasses merging baselines, but also unlocks synergistic potential among agents to achieve performance superior to that of specialized agents in their domains.
|
https://arxiv.org/abs/2601.13572
|
Academic Papers
|
svg
|
3a3d6d57a1ffbfbac241f77023d1299bd99b0b9b9d8433405fb71d82f3c6e8c7
|
2026-01-21T00:00:00-05:00
|
TRGCN: A Hybrid Framework for Social Network Rumor Detection
|
arXiv:2601.13573v1 Announce Type: new Abstract: Accurate and efficient rumor detection is critical for information governance, particularly in the context of the rapid spread of misinformation on social networks. Traditional rumor detection relied primarily on manual analysis. With the continuous advancement of technology, machine learning and deep learning approaches for rumor identification have gradually emerged and gained prominence. However, previous approaches often struggle to simultaneously capture both the sequential and the global structural relationships among topological nodes within a social network. To tackle this issue, we introduce a hybrid model for detecting rumors that integrates a Graph Convolutional Network (GCN) with a Transformer architecture, aiming to leverage the complementary strengths of structural and semantic feature extraction. Positional encoding helps preserve the sequential order of these nodes within the propagation structure. The use of multi-head attention mechanisms enables the model to capture features across diverse representational subspaces, thereby enhancing both the richness and depth of text comprehension. This integration allows the framework to concurrently identify the key propagation network of rumors, the textual content, the long-range dependencies, and the sequence among propagation nodes. Experimental evaluations on publicly available datasets, including Twitter 15 and Twitter 16, demonstrate that our proposed fusion model significantly outperforms both standalone models and existing mainstream methods in terms of accuracy. These results validate the effectiveness and superiority of our approach for the rumor detection task.
|
https://arxiv.org/abs/2601.13573
|
Academic Papers
|
svg
|
2d8273275e3b50ae0b06144da03e5a973f33b997a4c92d1dd8e30a8a5685dbc6
|
2026-01-21T00:00:00-05:00
|
Highly Deformable Proprioceptive Membrane for Real-Time 3D Shape Reconstruction
|
arXiv:2601.13574v1 Announce Type: new Abstract: Reconstructing the three-dimensional (3D) geometry of object surfaces is essential for robot perception, yet vision-based approaches are generally unreliable under low illumination or occlusion. This limitation motivates the design of a proprioceptive membrane that conforms to the surface of interest and infers 3D geometry by reconstructing its own deformation. Conventional shape-aware membranes typically rely on resistive, capacitive, or magneto-sensitive mechanisms. However, these methods often encounter challenges such as structural complexity, limited compliance during large-scale deformation, and susceptibility to electromagnetic interference. This work presents a soft, flexible, and stretchable proprioceptive silicone membrane based on optical waveguide sensing. The membrane sensor integrates edge-mounted LEDs and centrally distributed photodiodes (PDs), interconnected via liquid-metal traces embedded within a multilayer elastomeric composite. Rich deformation-dependent light intensity signals are decoded by a data-driven model to recover the membrane geometry as a 3D point cloud. On a customized 140 mm square membrane, real-time reconstruction of large-scale out-of-plane deformation is achieved at 90 Hz with an average reconstruction error of 1.3 mm, measured by Chamfer distance, while maintaining accuracy for indentations up to 25 mm. The proposed framework provides a scalable, robust, and low-profile solution for global shape perception in deformable robotic systems.
|
https://arxiv.org/abs/2601.13574
|
Academic Papers
|
svg
|
18a1c7ef1c0d5f104cee74c5318a0dac2a2e50df843da2af2334e9368a38272c
|
2026-01-21T00:00:00-05:00
|
Comparing Without Saying: A Dataset and Benchmark for Implicit Comparative Opinion Mining from Same-User Reviews
|
arXiv:2601.13575v1 Announce Type: new Abstract: Existing studies on comparative opinion mining have mainly focused on explicit comparative expressions, which are uncommon in real-world reviews. This leaves implicit comparisons - where users express preferences across separate reviews - largely underexplored. We introduce SUDO, a novel dataset for implicit comparative opinion mining from same-user reviews, allowing reliable inference of user preferences even without explicit comparative cues. SUDO comprises 4,150 annotated review pairs (15,191 sentences) with a bi-level structure capturing aspect-level mentions and review-level preferences. We benchmark this task using two baseline architectures: traditional machine learning- and language model-based baselines. Experimental results show that while the latter outperforms the former, overall performance remains moderate, revealing the inherent difficulty of the task and establishing SUDO as a challenging and valuable benchmark for future research.
|
https://arxiv.org/abs/2601.13575
|
Academic Papers
|
svg
|
fe89e113c309503b20744a5dab99d117bd954d9f4f7a34ba88551f857435519a
|
2026-01-21T00:00:00-05:00
|
FG-OrIU: Towards Better Forgetting via Feature-Gradient Orthogonality for Incremental Unlearning
|
arXiv:2601.13578v1 Announce Type: new Abstract: Incremental unlearning (IU) is critical for pre-trained models to comply with sequential data deletion requests, yet existing methods primarily suppress parameters or confuse knowledge without explicit constraints on both the feature and gradient level, resulting in superficial forgetting where residual information remains recoverable. This incomplete forgetting risks security breaches and disrupts retention balance, especially in IU scenarios. We propose FG-OrIU (Feature-Gradient Orthogonality for Incremental Unlearning), the first framework unifying orthogonal constraints on both the feature and gradient level to achieve deep forgetting, where the forgetting effect is irreversible. FG-OrIU decomposes feature spaces via Singular Value Decomposition (SVD), separating forgetting and remaining class features into distinct subspaces. It then enforces dual constraints: feature orthogonal projection on both forgetting and remaining classes, while gradient orthogonal projection prevents the reintroduction of forgotten knowledge and disruption to remaining classes during updates. Additionally, dynamic subspace adaptation merges newly forgetting subspaces and contracts remaining subspaces, ensuring a stable balance between removal and retention across sequential unlearning tasks. Extensive experiments demonstrate the effectiveness of our method.
|
https://arxiv.org/abs/2601.13578
|
Academic Papers
|
svg
|
4fb23107d20c9d163abd0ef80c7baa57ea27b344a3860977edccc3e05fb12ff7
|
2026-01-21T00:00:00-05:00
|
A Kubernetes custom scheduler based on reinforcement learning for compute-intensive pods
|
arXiv:2601.13579v1 Announce Type: new Abstract: With the rise of cloud computing and lightweight containers, Docker has emerged as a leading technology for rapid service deployment, with Kubernetes responsible for pod orchestration. However, for compute-intensive workloads, particularly web services executing containerized machine-learning training, the default Kubernetes scheduler does not always achieve optimal placement. To address this, we propose two custom, reinforcement-learning-based schedulers, SDQN and SDQN-n, both built on the Deep Q-Network (DQN) framework. In compute-intensive scenarios, these models outperform the default Kubernetes scheduler as well as Transformer- and LSTM-based alternatives, reducing average CPU utilization per cluster node by 10%, and by over 20% when using SDQN-n. Moreover, our results show that SDQN-n's approach of consolidating pods onto fewer nodes further amplifies resource savings and helps advance greener, more energy-efficient data centers. Therefore, pod scheduling must employ different strategies tailored to each scenario in order to achieve better performance. Since the reinforcement-learning components of the SDQN and SDQN-n architectures proposed in this paper can be easily tuned by adjusting their parameters, they can accommodate the requirements of various future scenarios.
|
https://arxiv.org/abs/2601.13579
|
Academic Papers
|
svg
|
004a462c5e48f6c875e43be625bee94a9331db54487cf089eb94aa30a35056cc
|
2026-01-21T00:00:00-05:00
|
Neural Organ Transplantation (NOT): Checkpoint-Based Modular Adaptation for Transformer Models
|
arXiv:2601.13580v1 Announce Type: new Abstract: We introduce Neural Organ Transplantation (NOT), a modular adaptation framework that enables trained transformer layers to function as reusable transferable checkpoints for domain adaptation. Unlike conventional fine-tuning approaches that tightly couple trained parameters to specific model instances and training data, NOT extracts contiguous layer subsets ("donor organs") from pre-trained models, trains them independently on domain-specific data, and saves them as standalone checkpoint files that can be transplanted into compatible recipient models without access to the original training data. Through experiments on three decoder-only transformer architectures spanning 124M to 20B parameters (GPT-2, TinyLlama, and GPT-OSS), we demonstrate that donor transplantation substantially outperforms existing adaptation methods, achieving an order-of-magnitude improvement in perplexity over LoRA while training significantly faster. The method exhibits position dependence, with early insertion positions yielding optimal results. Cross-domain transfer at billion-parameter scale reveals unexpected regularization benefits. These findings demonstrate that transformer middle layers can support efficient modular transfer for decoder-only architectures, enabling privacy-preserving expertise sharing through checkpoint distribution. We note that this approach is currently limited to decoder-only models; preliminary experiments on encoder-based architectures show reduced effectiveness.
|
https://arxiv.org/abs/2601.13580
|
Academic Papers
|
svg
|
7d67fb4386cf5f6373e4e6d3da1605966b2ca409a92a5950d29cc601a17a8563
|
2026-01-21T00:00:00-05:00
|
SCRIPTMIND: Crime Script Inference and Cognitive Evaluation for LLM-based Social Engineering Scam Detection System
|
arXiv:2601.13581v1 Announce Type: new Abstract: Social engineering scams increasingly employ personalized, multi-turn deception, exposing the limits of traditional detection methods. While Large Language Models (LLMs) show promise in identifying deception, their cognitive assistance potential remains underexplored. We propose ScriptMind, an integrated framework for LLM-based scam detection that bridges automated reasoning and human cognition. It comprises three components: the Crime Script Inference Task (CSIT) for scam reasoning, the Crime Script-Aware Inference Dataset (CSID) for fine-tuning small LLMs, and the Cognitive Simulation-based Evaluation of Social Engineering Defense (CSED) for assessing real-time cognitive impact. Using 571 Korean phone scam cases, we built 22,712 structured scammer-sequence training instances. Experimental results show that the 11B small LLM fine-tuned with ScriptMind outperformed GPT-4o by 13%, achieving superior performance over commercial models in detection accuracy, false-positive reduction, scammer utterance prediction, and rationale quality. Moreover, in phone scam simulation experiments, it significantly enhanced and sustained users' suspicion levels, improving their cognitive awareness of scams. ScriptMind represents a step toward human-centered, cognitively adaptive LLMs for scam defense.
|
https://arxiv.org/abs/2601.13581
|
Academic Papers
|
svg
|
6a2dbda471725dbebd5ec9e86c45baaba5c0db2cdcab194eaeb7907764246047
|
2026-01-21T00:00:00-05:00
|
Nonlinear fractional-periodic boundary value problems with Hilfer fractional derivative: existence and numerical approximations of solutions
|
arXiv:2601.13584v1 Announce Type: new Abstract: We prove conditions for existence of analytical solutions for boundary value problems with the Hilfer fractional derivative, generalizing the commonly used Riemann-Liouville and Caputo operators. The boundary values, referred to in this paper as fractional-periodic, are fractional integral conditions generalizing recurrent solution values for the non-Caputo case of the Hilfer fractional derivative. Analytical solutions to the studied problem are obtained using a perturbation of the corresponding initial value problem with enforced boundary conditions. In general, solutions to the boundary value problem are singular for $t\downarrow 0$. To overcome this singularity, we construct a sequence of converging solutions in a weighted continuous function space. We present a Bernstein splines-based implementation to numerically approximate solutions. We prove convergence of the numerical method, providing convergence criteria and asymptotic convergence rates. Numerical examples show empirical convergence results corresponding with the theoretical bounds. Moreover, the method is able to approximate the singular behavior of solutions and is demonstrated to converge for nonlinear problems. Finally, we apply a grid search to obtain correspondence to the original, non-perturbed system.
|
https://arxiv.org/abs/2601.13584
|
Academic Papers
|
svg
|
d63aa81468911fd345e64e1ebd92a78d34e0c5e106ad8d172f11c6c4a938e3f0
|
2026-01-21T00:00:00-05:00
|
TREX: Tokenizer Regression for Optimal Data Mixture
|
arXiv:2601.13588v1 Announce Type: new Abstract: Building effective tokenizers for multilingual Large Language Models (LLMs) requires careful control over language-specific data mixtures. While a tokenizer's compression performance critically affects the efficiency of LLM training and inference, existing approaches rely on heuristics or costly large-scale searches to determine optimal language ratios. We introduce Tokenizer Regression for Optimal Data MiXture (TREX), a regression-based framework that efficiently predicts the optimal data mixture for tokenizer training. TREX trains small-scale proxy tokenizers on random mixtures, gathers their compression statistics, and learns to predict compression performance from data mixtures. This learned model enables scalable mixture search before large-scale tokenizer training, mitigating the accuracy-cost trade-off in multilingual tokenizer design. Tokenizers trained with TREX's predicted mixtures outperform mixtures based on LLaMA3 and uniform distributions by up to 12% in both in- and out-of-distribution compression efficiency, demonstrating strong scalability, robustness, and practical effectiveness.
|
https://arxiv.org/abs/2601.13588
|
Academic Papers
|
svg
|
c209bb63cb09408dd893d1f50cee77acafe6db700160b6629d987de941beb71f
|
2026-01-21T00:00:00-05:00
|
Motion-to-Response Content Generation via Multi-Agent AI System with Real-Time Safety Verification
|
arXiv:2601.13589v1 Announce Type: new Abstract: This paper proposes a multi-agent artificial intelligence system that generates response-oriented media content in real time based on audio-derived emotional signals. Unlike conventional speech emotion recognition studies that focus primarily on classification accuracy, our approach emphasizes the transformation of inferred emotional states into safe, age-appropriate, and controllable response content through a structured pipeline of specialized AI agents. The proposed system comprises four cooperative agents: (1) an Emotion Recognition Agent with CNN-based acoustic feature extraction, (2) a Response Policy Decision Agent for mapping emotions to response modes, (3) a Content Parameter Generation Agent for producing media control parameters, and (4) a Safety Verification Agent enforcing age-appropriateness and stimulation constraints. We introduce an explicit safety verification loop that filters generated content before output, ensuring compliance with predefined rules. Experimental results on public datasets demonstrate that the system achieves 73.2% emotion recognition accuracy, 89.4% response mode consistency, and 100% safety compliance while maintaining sub-100ms inference latency suitable for on-device deployment. The modular architecture enables interpretability and extensibility, making it applicable to child-adjacent media, therapeutic applications, and emotionally responsive smart devices.
|
https://arxiv.org/abs/2601.13589
|
Academic Papers
|
svg
|
09584cdea2c218acfe65081321fb94118116fd2aefe80e4388829e60f175e141
|
2026-01-21T00:00:00-05:00
|
Vulnerability of LLMs' Belief Systems? LLMs Belief Resistance Check Through Strategic Persuasive Conversation Interventions
|
arXiv:2601.13590v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly employed in various question-answering tasks. However, recent studies showcase that LLMs are susceptible to persuasion and could adopt counterfactual beliefs. We present a systematic evaluation of LLM susceptibility to persuasion under the Source--Message--Channel--Receiver (SMCR) communication framework. Across five mainstream Large Language Models (LLMs) and three domains (factual knowledge, medical QA, and social bias), we analyze how different persuasive strategies influence belief stability over multiple interaction turns. We further examine whether meta-cognition prompting (i.e., eliciting self-reported confidence) affects resistance to persuasion. Results show that smaller models exhibit extreme compliance, with over 80% of belief changes occurring at the first persuasive turn (average end turn of 1.1--1.4). Contrary to expectations, meta-cognition prompting increases vulnerability by accelerating belief erosion rather than enhancing robustness. Finally, we evaluate adversarial fine-tuning as a defense. While GPT-4o-mini achieves near-complete robustness (98.6%) and Mistral~7B improves substantially (35.7% $\rightarrow$ 79.3%), Llama models remain highly susceptible (<14%) even when fine-tuned on their own failure cases. Together, these findings highlight substantial model-dependent limits of current robustness interventions and offer guidance for developing more trustworthy LLMs.
|
https://arxiv.org/abs/2601.13590
|
Academic Papers
|
svg
|
edece1ad031b3d580681bcc9b6f677bcfd424c557abb0e6b246022563ec195fc
|
2026-01-21T00:00:00-05:00
|
DSAEval: Evaluating Data Science Agents on a Wide Range of Real-World Data Science Problems
|
arXiv:2601.13591v1 Announce Type: new Abstract: Recent LLM-based data agents aim to automate data science tasks ranging from data analysis to deep learning. However, the open-ended nature of real-world data science problems, which often span multiple taxonomies and lack standard answers, poses a significant challenge for evaluation. To address this, we introduce DSAEval, a benchmark comprising 641 real-world data science problems grounded in 285 diverse datasets, covering both structured and unstructured data (e.g., vision and text). DSAEval incorporates three distinctive features: (1) Multimodal Environment Perception, which enables agents to interpret observations from multiple modalities including text and vision; (2) Multi-Query Interactions, which mirror the iterative and cumulative nature of real-world data science projects; and (3) Multi-Dimensional Evaluation, which provides a holistic assessment across reasoning, code, and results. We systematically evaluate 11 advanced agentic LLMs using DSAEval. Our results show that Claude-Sonnet-4.5 achieves the strongest overall performance, GPT-5.2 is the most efficient, and MiMo-V2-Flash is the most cost-effective. We further demonstrate that multimodal perception consistently improves performance on vision-related tasks, with gains ranging from 2.04% to 11.30%. Overall, while current data science agents perform well on structured data and routine data analysis workflows, substantial challenges remain in unstructured domains. Finally, we offer critical insights and outline future research directions to advance the development of data science agents.
|
https://arxiv.org/abs/2601.13591
|
Academic Papers
|
svg
|
579d9ff5f794ebaf789fb4b293a2cab8c41892b3cf16ce25b488e7628320ca0d
|
2026-01-21T00:00:00-05:00
|
Machine learning based radiative parameterization scheme and its performance in operational reforecast experiments
|
arXiv:2601.13592v1 Announce Type: new Abstract: Radiation is typically the most time-consuming physical process in numerical models. One solution is to use machine learning methods to simulate the radiation process to improve computational efficiency. From an operational standpoint, this study investigates critical limitations inherent to hybrid forecasting frameworks that embed deep neural networks into numerical prediction models, with a specific focus on two fundamental bottlenecks: coupling compatibility and long-term integration stability. A residual convolutional neural network is employed to approximate the Rapid Radiative Transfer Model for General Circulation Models (RRTMG) within the global operational system of China Meteorological Administration. We adopted an offline training and online coupling approach. First, a comprehensive dataset is generated through model simulations, encompassing all atmospheric columns both with and without cloud cover. To ensure the stability of the hybrid model, the dataset is enhanced via experience replay, and additional output constraints based on physical significance are imposed. Meanwhile, a LibTorch-based coupling method is utilized, which is more suitable for real-time operational computations. The hybrid model is capable of performing ten-day integrated forecasts as required. A two-month operational reforecast experiment demonstrates that the machine learning emulator achieves accuracy comparable to that of the traditional physical scheme, while accelerating the computation speed by approximately eightfold.
|
https://arxiv.org/abs/2601.13592
|
Academic Papers
|
svg
|
f9d0fcda86bfd6ff73a3da7d1ae9933e575e3a8f4fa8b2aa2a538376e0373de7
|
2026-01-21T00:00:00-05:00
|
AI IDEs or Autonomous Agents? Measuring the Impact of Coding Agents on Software Development
|
arXiv:2601.13597v1 Announce Type: new Abstract: Large language model (LLM)-based coding agents increasingly act as autonomous contributors that generate and merge pull requests, yet their real-world effects on software projects are unclear, especially relative to widely adopted IDE-based AI assistants. We present a longitudinal causal study of agent adoption in open-source repositories using staggered difference-in-differences with matched controls. Using the AIDev dataset, we define adoption as the first agent-generated pull request and analyze monthly repository-level outcomes spanning development velocity (commits, lines added) and software quality (static-analysis warnings, cognitive complexity, duplication, and comment density). Results show large, front-loaded velocity gains only when agents are the first observable AI tool in a project; repositories with prior AI IDE usage experience minimal or short-lived throughput benefits. In contrast, quality risks are persistent across settings, with static-analysis warnings and cognitive complexity rising by roughly 18% and 35% respectively, indicating sustained agent-induced complexity debt even when velocity advantages fade. These heterogeneous effects suggest diminishing returns to AI assistance and highlight the need for quality safeguards, provenance tracking, and selective deployment of autonomous agents. Our findings establish an empirical basis for understanding how agentic and IDE-based tools interact, and motivate research on balancing acceleration with maintainability in AI-integrated development workflows.
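For a single adopting repository and its matched control, the staggered difference-in-differences design reduces to the classic two-by-two comparison. A toy computation with made-up commit counts (not the paper's data):

```python
# Two-by-two difference-in-differences on monthly commit counts:
# effect = (treated post - treated pre) - (control post - control pre).
pre_t, post_t = 40.0, 70.0    # treated repo: mean monthly commits, pre/post adoption
pre_c, post_c = 38.0, 45.0    # matched control repo over the same windows
did = (post_t - pre_t) - (post_c - pre_c)
print(did)  # -> 23.0
```

Subtracting the control's change removes the common time trend, so the remaining 23 commits/month is attributed to adoption; the staggered design averages many such comparisons across adoption dates.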
|
https://arxiv.org/abs/2601.13597
|
Academic Papers
|
svg
|
b25b97733a33d97b431a1c12c14b8b53bf82ae3f92e54ebadad7a6c1ecf9a2d3
|
2026-01-21T00:00:00-05:00
|
Diffusion In Diffusion: Breaking the Autoregressive Bottleneck in Block Diffusion Models
|
arXiv:2601.13599v1 Announce Type: new Abstract: Block diffusion language models, operating as semi-autoregressive paradigms, combine the strengths of both autoregressive and diffusion paradigms. However, their strict unidirectional block dependencies introduce irreversibility and sacrifice the global planning capabilities for which diffusion models are renowned. In order to address these issues, we propose Diffusion in Diffusion, a draft-then-refine framework designed to overcome the irreversibility and myopia problems inherent in block diffusion models. Our approach first employs block diffusion to generate rapid drafts using small blocks, then refines these drafts through global bidirectional diffusion with a larger bidirectional receptive field. We utilise snapshot confidence remasking to identify the most critical tokens that require modification, and apply mix-scale training to expand the block diffusion model's global capabilities. Empirical results demonstrate that our approach sets a new benchmark for discrete diffusion models on the OpenWebText dataset. Using just 26% of the fine-tuning budget of baseline models, we reduce generative perplexity from 25.7 to 21.9, significantly narrowing the performance gap with autoregressive models.
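Snapshot confidence remasking, as described, selects the least-confident drafted tokens for the global bidirectional refiner to regenerate. A minimal sketch, with an assumed per-token confidence vector and remask fraction:

```python
import numpy as np

def confidence_remask(token_conf, frac=0.25):
    """Mark the least-confident fraction of drafted tokens for the
    global refiner to regenerate."""
    k = max(1, int(len(token_conf) * frac))
    mask = np.zeros(len(token_conf), dtype=bool)
    mask[np.argsort(token_conf)[:k]] = True   # lowest-confidence positions
    return mask

conf = np.array([0.9, 0.2, 0.8, 0.4, 0.95, 0.3, 0.7, 0.6])  # assumed confidences
mask = confidence_remask(conf)
print(mask)
```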
|
https://arxiv.org/abs/2601.13599
|
Academic Papers
|
svg
|
a3194684b950f9892290e4ddd75bb70e342aa467f11e0624bbe271775d91c7eb
|
2026-01-21T00:00:00-05:00
|
Foundations of Global Consistency Checking with Noisy LLM Oracles
|
arXiv:2601.13600v1 Announce Type: new Abstract: Ensuring that collections of natural-language facts are globally consistent is essential for tasks such as fact-checking, summarization, and knowledge base construction. While Large Language Models (LLMs) can assess the consistency of small subsets of facts, their judgments are noisy, and pairwise checks are insufficient to guarantee global coherence. We formalize this problem and show that verifying global consistency requires exponentially many oracle queries in the worst case. To make the task practical, we propose an adaptive divide-and-conquer algorithm that identifies minimal inconsistent subsets (MUSes) of facts and optionally computes minimal repairs through hitting-sets. Our approach has low-degree polynomial query complexity. Experiments with both synthetic and real LLM oracles show that our method efficiently detects and localizes inconsistencies, offering a scalable framework for linguistic consistency verification with LLM-based evaluators.
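A minimal inconsistent subset can be extracted with a simple deletion-based shrinking loop over the consistency oracle; the paper's adaptive divide-and-conquer algorithm is more query-efficient, but the oracle interface is the same. The toy oracle below is an illustrative assumption and is noiseless:

```python
def minimal_inconsistent_subset(facts, consistent):
    """Shrink an inconsistent fact set to a MUS: try dropping each fact;
    if the remainder is still inconsistent, the fact was redundant."""
    core = list(facts)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not consistent(trial):   # still inconsistent without core[i]
            core = trial            # -> core[i] is redundant, drop it
        else:
            i += 1                  # core[i] is necessary, keep it
    return core

# Toy noiseless oracle: a set is inconsistent iff it contains both claims.
facts = ["sky is blue", "x=1", "grass is green", "x=2"]
consistent = lambda s: not ({"x=1", "x=2"} <= set(s))
mus = minimal_inconsistent_subset(facts, consistent)
print(mus)  # -> ['x=1', 'x=2']
```

The loop uses a linear number of oracle calls per MUS; handling noisy LLM oracles (e.g., by repeated queries) is where the paper's machinery comes in.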
|
https://arxiv.org/abs/2601.13600
|
Academic Papers
|
svg
|
b40cced59f3bbcd3738b3fca5a217447df2183bae240daabb3e57e2edca1e8b9
|
2026-01-21T00:00:00-05:00
|
An Elementary Approach to Scheduling in Generative Diffusion Models
|
arXiv:2601.13602v1 Announce Type: new Abstract: An elementary approach to characterizing the impact of noise scheduling and time discretization in generative diffusion models is developed. Considering a simplified model where the source distribution is multivariate Gaussian with a given covariance matrix, the explicit closed-form evolution trajectory of the distributions across reverse sampling steps is derived, and consequently, the Kullback-Leibler (KL) divergence between the source distribution and the reverse sampling output is obtained. The effect of the number of time discretization steps on the convergence of this KL divergence is studied via the Euler-Maclaurin expansion. An optimization problem is formulated, and its solution noise schedule is obtained via calculus of variations, shown to follow a tangent law whose coefficient is determined by the eigenvalues of the source covariance matrix. For an alternative scenario, more realistic in practice, where pretrained models have been obtained for some given noise schedules, the KL divergence also provides a measure to compare different time discretization strategies in reverse sampling. Experiments across different datasets and pretrained models demonstrate that the time discretization strategy selected by our approach consistently outperforms baseline and search-based strategies, particularly when the budget on the number of function evaluations is very tight.
|
https://arxiv.org/abs/2601.13602
|
Academic Papers
|
svg
|
87fdf16dd45995fec6f8ea96ff27be5daff4d66315ab07aa751909f59f2fae3a
|
2026-01-21T00:00:00-05:00
|
DCCVT: Differentiable Clipped Centroidal Voronoi Tessellation
|
arXiv:2601.13603v1 Announce Type: new Abstract: While Marching Cubes (MC) and Marching Tetrahedra (MTet) are widely adopted in 3D reconstruction pipelines due to their simplicity and efficiency, their differentiable variants remain suboptimal for mesh extraction. This often limits the quality of 3D meshes reconstructed from point clouds or images in learning-based frameworks. In contrast, clipped CVTs offer stronger theoretical guarantees and yield higher-quality meshes. However, the lack of a differentiable formulation has prevented their integration into modern machine learning pipelines. To bridge this gap, we propose DCCVT, a differentiable algorithm that extracts high-quality 3D meshes from noisy signed distance fields (SDFs) using clipped CVTs. We derive a fully differentiable formulation for computing clipped CVTs and demonstrate its integration with deep learning-based SDF estimation to reconstruct accurate 3D meshes from input point clouds. Our experiments with synthetic data demonstrate the superior ability of DCCVT against state-of-the-art methods in mesh quality and reconstruction fidelity. https://wylliamcantincharawi.dev/DCCVT.github.io/
|
https://arxiv.org/abs/2601.13603
|
Academic Papers
|
svg
|
88dbe43aeae8e99036d9b5f04f458a797ddbcb1964a96dbd919174f82ec29fac
|
2026-01-21T00:00:00-05:00
|
Optimizing Parallel Schemes with Lyapunov Exponents and kNN-LLE Estimation
|
arXiv:2601.13604v1 Announce Type: new Abstract: Inverse parallel schemes remain indispensable tools for computing the roots of nonlinear systems, yet their dynamical behavior can be unexpectedly rich, ranging from strong contraction to oscillatory or chaotic transients depending on the choice of algorithmic parameters and initial states. A unified analytical-data-driven methodology for identifying, measuring, and reducing such instabilities in a family of uni-parametric inverse parallel solvers is presented in this study. On the theoretical side, we derive stability and bifurcation characterizations of the underlying iterative maps, identifying parameter regions associated with periodic or chaotic behavior. On the computational side, we introduce a micro-series pipeline based on kNN-driven estimation of the local largest Lyapunov exponent (LLE), applied to scalar time series derived from solver trajectories. The resulting sliding-window Lyapunov profiles provide fine-grained, real-time diagnostics of contractive or unstable phases and reveal transient behaviors not captured by coarse linearized analysis. Leveraging this correspondence, we introduce a Lyapunov-informed parameter selection strategy that identifies solver settings associated with stable behavior, particularly when the estimated LLE indicates persistent instability. Comprehensive experiments on ensembles of perturbed initial guesses demonstrate close agreement between the theoretical stability diagrams and empirical Lyapunov profiles, and show that the proposed adaptive mechanism significantly improves robustness. The study establishes micro-series Lyapunov analysis as a practical, interpretable tool for constructing self-stabilizing root-finding schemes and opens avenues for extending such diagnostics to higher-dimensional or noise-contaminated problems.
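A nearest-neighbor largest-Lyapunov-exponent estimate from a scalar series (in the spirit of Rosenstein-style methods): delay-embed, pair each point with its nearest neighbor outside a Theiler window, and fit the slope of the mean log separation over time. This is a minimal global-estimate sketch, not the paper's sliding-window micro-series pipeline; the logistic-map test case and all parameters are illustrative assumptions:

```python
import numpy as np

def lle_knn(x, m=3, tau=1, horizon=5, theiler=10):
    """kNN-style LLE estimate: delay-embed, find each point's nearest
    neighbor outside a Theiler window, then fit the growth rate of the
    mean log separation as both trajectories evolve."""
    N = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + N] for i in range(m)])
    usable = N - horizon
    d = np.linalg.norm(emb[:usable, None] - emb[None, :usable], axis=2)
    for i in range(usable):                 # exclude temporally close points
        d[i, max(0, i - theiler):i + theiler + 1] = np.inf
    nn = d.argmin(axis=1)
    idx = np.arange(usable)
    mean_logd = [np.log(np.linalg.norm(emb[idx + k] - emb[nn + k], axis=1)
                        + 1e-12).mean() for k in range(horizon)]
    return np.polyfit(np.arange(horizon), mean_logd, 1)[0]

# Chaotic logistic map at r=4; its true LLE is ln 2 ~= 0.693.
x, v = [], 0.3
for _ in range(1500):
    v = 4 * v * (1 - v)
    x.append(v)
lam = lle_knn(np.array(x))
print(round(lam, 2))
```

The paper's sliding-window variant would apply this estimator to short windows of a solver's residual series to flag unstable phases in real time.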
|
https://arxiv.org/abs/2601.13604
|
Academic Papers
|
svg
|
b216c6d3b7465f6c523ad626beeac82b988998bdd8d9bf5309a480a0fe34bf47
|
2026-01-21T00:00:00-05:00
|
Outage Identification from Electricity Market Data: Quickest Change Detection Approach
|
arXiv:2601.13605v1 Announce Type: new Abstract: Power system outages expose market participants to significant financial risk unless promptly detected and hedged. We develop an outage identification method from public market signals grounded in the parametric quickest change detection (QCD) theory. Parametric QCD operates on stochastic data streams, distinguishing pre- and post-change regimes using the ratio of their respective probability density functions. To derive the density functions for normal and post-outage market signals, we exploit multi-parametric programming to decompose complex market signals into parametric random variables with a known density. These densities are then used to construct a QCD-based statistic that triggers an alarm as soon as the statistic exceeds an appropriate threshold. Numerical experiments on a stylized PJM testbed demonstrate rapid line outage identification from public streams of electricity demand and price data.
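The QCD statistic described is in the spirit of Page's CUSUM: accumulate the log-likelihood ratio of the post- vs. pre-change densities, clamped at zero, and alarm at a threshold. A sketch with assumed Gaussian pre/post-change densities (the paper derives the actual densities from multi-parametric programming instead):

```python
import numpy as np

def cusum(stream, llr):
    """Page's CUSUM: running sum of the post- vs. pre-change log-likelihood
    ratio, clamped at zero; the alarm fires when it crosses a threshold."""
    s, out = 0.0, []
    for v in stream:
        s = max(0.0, s + llr(v))
        out.append(s)
    return np.array(out)

rng = np.random.default_rng(1)
# Pre-change N(0,1) for 200 steps, post-change N(1,1) afterwards.
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 100)])
# log N(1,1)/N(0,1) evaluated at v simplifies to v - 0.5.
stat = cusum(x, lambda v: v - 0.5)
alarm = int(np.argmax(stat > 10.0))   # first index over the threshold
print(alarm)
```

Before the change the LLR has negative drift so the clamped statistic hovers near zero; after the change the drift flips positive and the statistic crosses the threshold within a few dozen samples.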
|
https://arxiv.org/abs/2601.13605
|
Academic Papers
|
svg
|
d0a1719cff52b92b18f01b6b972ebcbdf2abfce7f55af0600ea9ea6344b18460
|
2026-01-21T00:00:00-05:00
|
ChartVerse: Scaling Chart Reasoning via Reliable Programmatic Synthesis from Scratch
|
arXiv:2601.13606v1 Announce Type: new Abstract: Chart reasoning is a critical capability for Vision Language Models (VLMs). However, the development of open-source models is severely hindered by the lack of high-quality training data. Existing datasets suffer from a dual challenge: synthetic charts are often simplistic and repetitive, while the associated QA pairs are prone to hallucinations and lack the reasoning depth required for complex tasks. To bridge this gap, we propose ChartVerse, a scalable framework designed to synthesize complex charts and reliable reasoning data from scratch. (1) To address the bottleneck of simple patterns, we first introduce Rollout Posterior Entropy (RPE), a novel metric that quantifies chart complexity. Guided by RPE, we develop complexity-aware chart coder to autonomously synthesize diverse, high-complexity charts via executable programs. (2) To guarantee reasoning rigor, we develop truth-anchored inverse QA synthesis. Diverging from standard generation, we adopt an answer-first paradigm: we extract deterministic answers directly from the source code, generate questions conditional on these anchors, and enforce strict consistency verification. To further elevate difficulty and reasoning depth, we filter samples based on model fail-rate and distill high-quality Chain-of-Thought (CoT) reasoning. We curate ChartVerse-SFT-600K and ChartVerse-RL-40K using Qwen3-VL-30B-A3B-Thinking as the teacher. Experimental results demonstrate that ChartVerse-8B achieves state-of-the-art performance, notably surpassing its teacher and rivaling the stronger Qwen3-VL-32B-Thinking.
|
https://arxiv.org/abs/2601.13606
|
Academic Papers
|
svg
|
dc22077c2a074ff46ef2648735d0e62a86c715c8351ed92bcd73e644ea0676a6
|
2026-01-21T00:00:00-05:00
|
When Reasoning Leaks Membership: Membership Inference Attack on Black-box Large Reasoning Models
|
arXiv:2601.13607v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) have rapidly gained prominence for their strong performance in solving complex tasks. Many modern black-box LRMs expose the intermediate reasoning traces through APIs to improve transparency (e.g., Gemini-2.5 and Claude-sonnet). Despite their benefits, we find that these traces can leak membership signals, creating a new privacy threat even without access to token logits used in prior attacks. In this work, we initiate the first systematic exploration of Membership Inference Attacks (MIAs) on black-box LRMs. Our preliminary analysis shows that LRMs produce confident, recall-like reasoning traces on familiar training member samples but more hesitant, inference-like reasoning traces on non-members. The representations of these traces are continuously distributed in the semantic latent space, spanning from familiar to unfamiliar samples. Building on this observation, we propose BlackSpectrum, the first membership inference attack framework targeting the black-box LRMs. The key idea is to construct a recall-inference axis in the semantic latent space, based on representations derived from the exposed traces. By locating where a query sample falls along this axis, the attacker can obtain a membership score and predict how likely it is to be a member of the training data. Additionally, to address the limitations of outdated datasets unsuited to modern LRMs, we provide two new datasets to support future research, arXivReasoning and BookReasoning. Empirically, exposing reasoning traces significantly increases the vulnerability of LRMs to membership inference attacks, leading to large gains in attack performance. Our findings highlight the need for LRM companies to balance transparency in intermediate reasoning traces with privacy preservation.
|
https://arxiv.org/abs/2601.13607
|
Academic Papers
|
svg
|
650fefab858915399ae5f0eb83541d7aeafd330c9d3ebeca7d5390cab0050b56
|
2026-01-21T00:00:00-05:00
|
Fisher-Informed Parameterwise Aggregation for Federated Learning with Heterogeneous Data
|
arXiv:2601.13608v1 Announce Type: new Abstract: Federated learning aggregates model updates from distributed clients, but standard first-order methods such as FedAvg apply the same scalar weight to all parameters from each client. Under non-IID data, these uniformly weighted updates can be strongly misaligned across clients, causing client drift and degrading the global model. Here we propose Fisher-Informed Parameterwise Aggregation (FIPA), a second-order aggregation method that replaces client-level scalar weights with parameter-specific Fisher Information Matrix (FIM) weights, enabling true parameter-level scaling that captures how each client's data uniquely influences different parameters. With low-rank approximation, FIPA remains communication- and computation-efficient. Across nonlinear function regression, PDE learning, and image classification, FIPA consistently improves over averaging-based aggregation, and can be effectively combined with state-of-the-art client-side optimization algorithms to further improve image classification accuracy. These results highlight the benefits of FIPA for federated learning under heterogeneous data distributions.
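The parameterwise weighting can be sketched with a diagonal Fisher approximation: each coordinate of the aggregate is a Fisher-weighted average across clients, so a client whose data is informative about a parameter dominates that coordinate. A toy version (the paper uses a low-rank FIM approximation, not a plain diagonal):

```python
import numpy as np

def fipa_aggregate(client_params, client_fisher, eps=1e-8):
    """Aggregate client updates coordinate-by-coordinate, weighting each
    client by its (diagonal) Fisher information at that parameter."""
    params = np.stack(client_params)     # (n_clients, n_params)
    fisher = np.stack(client_fisher)     # (n_clients, n_params), >= 0
    weights = fisher / (fisher.sum(axis=0, keepdims=True) + eps)
    return (weights * params).sum(axis=0)

# Client 0's data is informative about param 0, client 1's about param 1.
p = [np.array([1.0, 9.0]), np.array([5.0, 3.0])]
f = [np.array([4.0, 0.0]), np.array([1.0, 2.0])]
agg = fipa_aggregate(p, f)
print(agg)  # -> approximately [1.8, 3.0]
```

Contrast with FedAvg, which would give [3.0, 6.0] here: the Fisher weights pull each coordinate toward the client that actually observed data about it.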
|
https://arxiv.org/abs/2601.13608
|
Academic Papers
|
svg
|
a15be4ba9c5c9ee19e3256ecd676078d7c6f466e28d8eacbe837e0ebd251ca0e
|
2026-01-21T00:00:00-05:00
|
Balancing Fairness and High Match Rates in Reciprocal Recommender Systems: A Nash Social Welfare Approach
|
arXiv:2601.13609v1 Announce Type: new Abstract: Matching platforms, such as online dating services and job recommendations, have become increasingly prevalent. For the success of these platforms, it is crucial to design reciprocal recommender systems (RRSs) that not only increase the total number of matches but also avoid creating unfairness among users. In this paper, we investigate the fairness of RRSs on matching platforms. From the perspective of fair division, we define the users' opportunities to be recommended and establish the fairness concept of envy-freeness in the allocation of these opportunities. We first introduce the Social Welfare (SW) method, which approximately maximizes the number of matches, and show that it leads to significant unfairness in recommendation opportunities, illustrating the trade-off between fairness and match rates. To address this challenge, we propose the Nash Social Welfare (NSW) method, which alternately optimizes two NSW functions and achieves nearly envy-free recommendations. We further generalize the SW and NSW method to the $\alpha$-SW method, which balances the trade-off between fairness and high match rates. Additionally, we develop a computationally efficient approximation algorithm for the SW/NSW/$\alpha$-SW methods based on the Sinkhorn algorithm. Through extensive experiments on both synthetic datasets and two real-world datasets, we demonstrate the practical effectiveness of our approach.
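The Sinkhorn-based approximation can be sketched as standard entropy-regularized matrix scaling: alternately rescale rows and columns of K = exp(score/reg) until row sums match one side's recommendation budget and column sums match the other side's exposure. This is a generic Sinkhorn sketch, not the paper's exact SW/NSW objective:

```python
import numpy as np

def sinkhorn(score, a, b, reg=1.0, iters=200):
    """Entropy-regularized matching: alternate row/column rescaling of
    K = exp(score / reg) until marginals match budgets `a` and `b`."""
    K = np.exp(score / reg)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # recommendation-opportunity matrix

rng = np.random.default_rng(2)
score = rng.normal(size=(3, 4))          # assumed pairwise preference scores
a = np.full(3, 1 / 3)                    # equal recommendation budget per user
b = np.full(4, 1 / 4)                    # equal exposure per candidate
P = sinkhorn(score, a, b)
print(P.round(3))
```

Fixing both marginals is what enforces the fairness side of the trade-off: no user's total recommendation opportunity can collapse to zero, whatever the scores.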
|
https://arxiv.org/abs/2601.13609
|
Academic Papers
|
svg
|
06c69a3236967dc09023baac92ee3ea3588a398dad7511068347915effcaced7
|
2026-01-21T00:00:00-05:00
|
Secure Multi-Path Routing with All-or-Nothing Transform for Network-on-Chip Architectures
|
arXiv:2601.13610v1 Announce Type: new Abstract: Ensuring Network-on-Chip (NoC) security is crucial to design trustworthy NoC-based System-on-Chip (SoC) architectures. While there are various threats that exploit on-chip communication vulnerabilities, eavesdropping attacks via malicious nodes are among the most common and stealthy. Although encryption can secure packets for confidentiality, it may introduce unacceptable overhead for resource-constrained SoCs. In this paper, we propose a lightweight confidentiality-preserving framework that utilizes a quasi-group based All-Or-Nothing Transform (AONT) combined with secure multi-path routing in NoC-based SoCs. By applying AONT to each packet and distributing its transformed blocks across multiple non-overlapping routes, we ensure that no intermediate router can reconstruct the original data without all blocks. Extensive experimental evaluation demonstrates that our method effectively mitigates eavesdropping attacks by malicious routers with negligible area and performance overhead. Our results also reveal that AONT-based multi-path routing can provide 7.3x reduction in overhead compared to traditional encryption for securing against eavesdropping attacks.
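The all-or-nothing property combined with multi-path routing can be illustrated with the simplest possible transform, XOR secret sharing: n−1 random pads plus one parity block, each sent on a disjoint route. This toy scheme only stands in for the paper's quasi-group based AONT; it is not that construction:

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int) -> list:
    """n-1 uniformly random pads plus one XOR parity block; any proper
    subset of the n blocks is statistically independent of `data`."""
    pads = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    return pads + [reduce(xor, pads, data)]   # last block = data XOR all pads

def combine(blocks: list) -> bytes:
    return reduce(xor, blocks)                # pads cancel, data remains

blocks = split(b"flit payload", 3)   # route each block on a disjoint path
print(combine(blocks))  # -> b'flit payload'
```

A malicious router on any single path sees only a uniformly random block, which is exactly the property that makes non-overlapping multi-path routing sufficient against a single eavesdropping node.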
|
https://arxiv.org/abs/2601.13610
|
Academic Papers
|
svg
|
8b0ba169fcfa5e67eab4fed8bf9d07b45684c8587f97ddcd5a70a077fa4d0a63
|
2026-01-21T00:00:00-05:00
|
PINA: Prompt Injection Attack against Navigation Agents
|
arXiv:2601.13612v1 Announce Type: new Abstract: Navigation agents powered by large language models (LLMs) convert natural language instructions into executable plans and actions. Compared to text-based applications, their security is far more critical: a successful prompt injection attack does not just alter outputs but can directly misguide physical navigation, leading to unsafe routes, mission failure, or real-world harm. Despite this high-stakes setting, the vulnerability of navigation agents to prompt injection remains largely unexplored. In this paper, we propose PINA, an adaptive prompt optimization framework tailored to navigation agents under black-box, long-context, and action-executable constraints. Experiments on indoor and outdoor navigation agents show that PINA achieves high attack success rates with an average ASR of 87.5%, surpasses all baselines, and remains robust under ablation and adaptive-attack conditions. This work provides the first systematic investigation of prompt injection attacks in navigation and highlights their urgent security implications for embodied LLM agents.
|
https://arxiv.org/abs/2601.13612
|
Academic Papers
|
svg
|
a0a9d9ae4398503c96ddb949eac48612e2eee9c8890b86e4e6cfbc741f3df36b
|
2026-01-21T00:00:00-05:00
|
CauScientist: Teaching LLMs to Respect Data for Causal Discovery
|
arXiv:2601.13614v1 Announce Type: new Abstract: Causal discovery is fundamental to scientific understanding and reliable decision-making. Existing approaches face critical limitations: purely data-driven methods suffer from statistical indistinguishability and modeling assumptions, while recent LLM-based methods either ignore statistical evidence or incorporate unverified priors that can mislead results. To this end, we propose CauScientist, a collaborative framework that synergizes LLMs as hypothesis-generating "data scientists" with probabilistic statistics as rigorous "verifiers". CauScientist employs hybrid initialization to select superior starting graphs, iteratively refines structures through LLM-proposed modifications validated by statistical criteria, and maintains an error memory to steer the search efficiently. Experiments demonstrate that CauScientist substantially outperforms purely data-driven baselines, achieving up to 53.8% F1 score improvement and enhancing recall from 35.0% to 100.0%. Notably, while standalone LLM performance degrades with graph complexity, CauScientist reduces structural Hamming distance (SHD) by 44.0% compared to Qwen3-32B on 37-node graphs. Our project page is at https://github.com/OpenCausaLab/CauScientist.
|
https://arxiv.org/abs/2601.13614
|
Academic Papers
|
svg
|
11b8d0d28912b5275622c4ecff3089ce776f700a7eca1bd650abdf580e450ee8
|
2026-01-21T00:00:00-05:00
|
Resilient Hierarchical Power Control for Hybrid GFL/GFM Microgrids Under Mixed Cyber-Attacks and Physical Constraints
|
arXiv:2601.13615v1 Announce Type: new Abstract: Hybrid microgrids integrating Grid-Following (GFL) and Grid-Forming (GFM) inverters present complex control challenges arising from the decoupling between long-term economic dispatch and real-time dynamic regulation, as well as the distinct physical limitations of heterogeneous inverters under cyber uncertainties. This paper proposes a Resilient Hierarchical Power Control (RHPC) strategy to unify these conflicting requirements within a cohesive framework. A standardized power increment mechanism is developed to bridge the tertiary and secondary layers, ensuring that real-time load fluctuations are compensated strictly according to the optimal economic ratios derived from the tertiary layer. To address the strict active power saturation constraints of GFL units, a dynamic activation scheme coupled with projection operators is introduced, which actively isolates saturated nodes from the consensus loop to prevent integrator wind-up and preserve the stability of the GFM backbone. Furthermore, the proposed framework incorporates a multi-scale attention mechanism and LSTM-based predictors into the secondary control protocol, endowing the system with robustness against unbounded False Data Injection (FDI) attacks and packet losses. Rigorous theoretical analysis confirms that the system achieves Uniformly Ultimately Bounded (UUB) convergence, and simulations on a modified IEEE 33-bus system demonstrate that the proposed strategy significantly improves power sharing accuracy and operational resilience in both grid-connected and islanded modes compared to conventional methods.
|
https://arxiv.org/abs/2601.13615
|
Academic Papers
|
svg
|
cd0839b96325b8231fcc23084cca21a1089ce0a8ecd4c5d3e384658eb554c9ae
|
2026-01-21T00:00:00-05:00
|
Reflections over the Sea: Reconfigurable Intelligent Surface for Maritime Self-Powered Communications
|
arXiv:2601.13618v1 Announce Type: new Abstract: Maritime communication is becoming a vital component of 6G networks, driven by the rapid expansion of the maritime economy. However, existing technologies face critical challenges in signal coverage, availability, and robustness, especially under harsh sea conditions. This paper proposes a novel framework for maritime Internet-of-Things (IoT) communications that leverages the reconfigurable intelligent surface (RIS) mounted on offshore infrastructures, such as wind turbines, to enhance coverage and reliability. To capture the dynamic maritime environment, a near-ocean-surface channel model is developed considering the impact of sea waves. In addition, a wave energy harvesting (EH) system is designed to self-power IoT sensors for data acquisition, processing, and transmission. To support real-time adaptation, channel state information is continuously measured to optimize RIS reflection parameters and maximize multi-user communication rates. Simulation results show that the proposed system significantly improves IoT communication performance by over 20% under harsh sea conditions.
|
https://arxiv.org/abs/2601.13618
|
Academic Papers
|
svg
|
526ff3eae24b97c67c8a3f83a4682e5c0c87aefaf004f2e3dc8c693876e8b396
|
2026-01-21T00:00:00-05:00
|
CARPE: Context-Aware Image Representation Prioritization via Ensemble for Large Vision-Language Models
|
arXiv:2601.13622v1 Announce Type: new Abstract: Recent advancements in Large Vision-Language Models (LVLMs) have pushed them closer to becoming general-purpose assistants. Despite their strong performance, LVLMs still struggle with vision-centric tasks such as image classification, underperforming compared to their base vision encoders, which are often CLIP-based models. To address this limitation, we propose Context-Aware Image Representation Prioritization via Ensemble (CARPE), a novel, model-agnostic framework which introduces vision-integration layers and a context-aware ensemble strategy to identify when to prioritize image representations or rely on the reasoning capabilities of the language model. This design enhances the model's ability to adaptively weight visual and textual modalities and enables the model to capture various aspects of image representations, leading to consistent improvements in generalization across classification and vision-language benchmarks. Extensive experiments demonstrate that CARPE not only improves performance on image classification benchmarks but also enhances results across various vision-language benchmarks. Finally, CARPE is designed to be effectively integrated with most open-source LVLMs that consist of a vision encoder and a language model, ensuring its adaptability across diverse architectures.
|
https://arxiv.org/abs/2601.13622
|
Academic Papers
|
svg
|
26ee5add32c8e849d7d16a9762f4dca5cf75fbc201fa32f352d518736cf21400
|
2026-01-21T00:00:00-05:00
|
PRIMAL: Processing-In-Memory Based Low-Rank Adaptation for LLM Inference Accelerator
|
arXiv:2601.13628v1 Announce Type: new Abstract: This paper presents PRIMAL, a processing-in-memory (PIM) based large language model (LLM) inference accelerator with low-rank adaptation (LoRA). PRIMAL integrates heterogeneous PIM processing elements (PEs), interconnected by 2D-mesh inter-PE computational network (IPCN). A novel SRAM reprogramming and power gating (SRPG) scheme enables pipelined LoRA updates and sub-linear power scaling by overlapping reconfiguration with computation and gating idle resources. PRIMAL employs optimized spatial mapping and dataflow orchestration to minimize communication overhead, and achieves $1.5\times$ throughput and $25\times$ energy efficiency over NVIDIA H100 with LoRA rank 8 (Q,V) on Llama-13B.
|
https://arxiv.org/abs/2601.13628
|
Academic Papers
|
svg
|
3ee8d9ea5f6281a8a820293509462ecf85c8fa028da35c1653f937bf427e50e7
|
2026-01-21T00:00:00-05:00
|
Activation-Space Anchored Access Control for Multi-Class Permission Reasoning in Large Language Models
|
arXiv:2601.13630v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed over knowledge bases for efficient knowledge retrieval and question answering. However, LLMs can inadvertently answer beyond a user's permission scope, leaking sensitive content, thus making it difficult to deploy knowledge-base QA under fine-grained access control requirements. In this work, we identify a geometric regularity in intermediate activations: for the same query, representations induced by different permission scopes cluster distinctly and are readily separable. Building on this separability, we propose Activation-space Anchored Access Control (AAAC), a training-free framework for multi-class permission control. AAAC constructs an anchor bank, with one permission anchor per class, from a small offline sample set and requires no fine-tuning. At inference time, a multi-anchor steering mechanism redirects each query's activations toward the anchor-defined authorized region associated with the current user, thereby suppressing over-privileged generations by design. Finally, extensive experiments across three LLM families demonstrate that AAAC reduces permission violation rates by up to 86.5% and prompt-based attack success rates by 90.7%, while improving response usability with minor inference overhead compared to baselines.
|
https://arxiv.org/abs/2601.13630
|
Academic Papers
|
svg
|
7ff9d50b580cb47d456362ac5e0e9aa1fb10cdd25f5bde9412fdf58d06a1ed1e
|
2026-01-21T00:00:00-05:00
|
ContiguousKV: Accelerating LLM Prefill with Granularity-Aligned KV Cache Management
|
arXiv:2601.13631v1 Announce Type: new Abstract: Efficiently serving Large Language Models (LLMs) with persistent Prefix Key-Value (KV) Cache is critical for applications like conversational search and multi-turn dialogue. Serving a request requires loading the pre-computed prefix KV cache and generating the first token, defined as the Re-Prefill Phase. Offloading this shared prefix cache to secondary storage is essential for memory scalability. Re-Prefill with offloading suffers from severe I/O bottlenecks in two aspects. First, semantic-aware KV cache pruning algorithms select important tokens in fine granularity, while systems manage I/O in coarse, fixed-size blocks, causing severe read amplification. Second, the sequential dependency between identifying important tokens and loading KV cache creates idle I/O and compute bubbles, under-utilizing system resources. This paper proposes \textit{ContiguousKV}, a high-performance prefix KV cache offloading system that bridges algorithmic semantics with I/O efficiency to accelerate the Re-Prefill phase. We first introduce \textit{ContiguousChunk}, a unified data management granularity that aligns KV cache pruning with I/O operations. All the mechanisms critical for I/O performance are performed at the granularity of ContiguousChunk, thereby eliminating read amplification. By exploiting the high similarity in important ContiguousChunk indices across layers, we propose intra- and inter-period asynchronous prefetching to break the sequential dependency between I/O and compute, effectively eliminating idle bubbles. Finally, we propose attention-guided cache management to retain semantically critical prefix data in memory. Evaluations on Qwen2.5 series models show that ContiguousKV achieves a 3.85x speedup in the Re-Prefill phase over the state-of-the-art offloading system IMPRESS, while maintaining high output quality.
|
https://arxiv.org/abs/2601.13631
|
Academic Papers
|
svg
|
99dc864be260eade8e2f0ebc85d1ac90a132720a8f4cdf3d759571b01afc84d4
|
2026-01-21T00:00:00-05:00
|
Resilient Routing: Risk-Aware Dynamic Routing in Smart Logistics via Spatiotemporal Graph Learning
|
arXiv:2601.13632v1 Announce Type: new Abstract: With the rapid development of the e-commerce industry, logistics networks are experiencing unprecedented pressure. Traditional static routing strategies often cannot cope with traffic congestion and fluctuating retail demand. In this paper, we propose a Risk-Aware Dynamic Routing (RADR) framework which integrates Spatiotemporal Graph Neural Networks (ST-GNN) with combinatorial optimization. We first construct a logistics topology graph from discrete GPS data using spatial clustering methods. Subsequently, a hybrid deep learning model combining a Graph Convolutional Network (GCN) and a Gated Recurrent Unit (GRU) is adopted to extract spatial correlations and temporal dependencies for predicting future congestion risks. These predictions are then integrated into a dynamic edge weight mechanism to perform path planning. We evaluated the framework on the Smart Logistics Dataset 2024, which contains real-world Internet of Things (IoT) sensor data. The experimental results show that the RADR algorithm significantly enhances the resilience of the supply chain. In particular, in a case study of high-congestion scenarios, our method reduces potential congestion risk exposure by 19.3% while increasing transportation distance by only 2.1%. This empirical evidence confirms that the proposed data-driven approach can effectively balance delivery efficiency and operational safety.
|
https://arxiv.org/abs/2601.13632
|
Academic Papers
|
svg
|
cc7aaa344c03c287fec0998338ab8fb0b1d6d34eba2e35ea0aae2d4844d51a9e
|
2026-01-21T00:00:00-05:00
|
Scaling Test-time Inference for Visual Grounding
|
arXiv:2601.13633v1 Announce Type: new Abstract: Visual grounding is an essential capability of Visual Language Models (VLMs) to understand the real physical world. Previous state-of-the-art grounding visual language models usually have large model sizes, making them heavy for deployment and slow for inference. However, we notice that the sizes of visual encoders are nearly the same for small and large VLMs and the major difference is the sizes of the language models. Small VLMs fall behind larger VLMs in grounding because of the difference in language understanding capability rather than visual information handling. To mitigate the gap, we introduce 'Efficient visual Grounding language Models' (EGM): a method to scale the test-time computation (#generated tokens). Scaling the test-time computation of a small model is deployment-friendly, and yields better end-to-end latency as the cost of each token is much cheaper compared to directly running a large model. On the RefCOCO benchmark, our EGM-Qwen3-VL-8B demonstrates 91.4 IoU with an average latency of 737ms (5.9x faster), while Qwen3-VL-235B demands 4,320ms to achieve 90.5 IoU. To validate our approach's generality, we further set up a new amodal grounding setting that requires the model to predict both the visible and occluded parts of the objects. Experiments show our method can consistently and significantly improve the vanilla grounding and amodal grounding capabilities of small models to be on par with or outperform the larger models, thereby improving the efficiency of visual grounding.
|
https://arxiv.org/abs/2601.13633
|
Academic Papers
|
svg
|
8f6ee034a192dd2b558089813d51cb26411de942760f45c7b256e474eeb44437
|
2026-01-21T00:00:00-05:00
|
Direct Finite-Time Contraction (Step-Log) Profiling--Driven Optimization of Parallel Schemes for Nonlinear Problems on Multicore Architectures
|
arXiv:2601.13637v1 Announce Type: new Abstract: Efficient computation of all distinct solutions of nonlinear problems is essential in many scientific and engineering applications. Although high-order parallel iterative schemes offer fast convergence, their practical performance is often limited by sensitivity to internal parameters and the lack of reproducible tuning procedures. Classical parameter selection tools based on analytical conditions and dynamical-system diagnostics can be problem-dependent and computationally demanding, which motivates lightweight data-driven alternatives. In this study, we propose a parameterized single-step bi-parametric parallel Weierstrass-type scheme with third-order convergence together with a training-free tuning framework based on Direct finite-time contraction (step-log) profiling. The approach extracts Lyapunov-like finite-time contraction information directly from solver trajectories via step norms and step-log ratios, aggregates the resulting profiles over micro-launch ensembles, and ranks parameter candidates using two compact scores: the stability minimum S_min and the stability moment S_mom. Numerical results demonstrate consistent improvements in convergence rate, stability, and robustness across diverse nonlinear test problems, establishing the proposed profiling-based strategy as an efficient and reproducible alternative to classical parameter tuning methods.
|
https://arxiv.org/abs/2601.13637
|
Academic Papers
|
svg
|
59e22356ae6d3bd9c3b1f12ca83cc99a2dcf9b453422e025729fa552bd8e7076
|
2026-01-21T00:00:00-05:00
|
A General One-Shot Multimodal Active Perception Framework for Robotic Manipulation: Learning to Predict Optimal Viewpoint
|
arXiv:2601.13639v1 Announce Type: new Abstract: Active perception in vision-based robotic manipulation aims to move the camera toward more informative observation viewpoints, thereby providing high-quality perceptual inputs for downstream tasks. Most existing active perception methods rely on iterative optimization, leading to high time and motion costs, and are tightly coupled with task-specific objectives, which limits their transferability. In this paper, we propose a general one-shot multimodal active perception framework for robotic manipulation. The framework enables direct inference of optimal viewpoints and comprises a data collection pipeline and an optimal viewpoint prediction network. Specifically, the framework decouples viewpoint quality evaluation from the overall architecture, supporting heterogeneous task requirements. Optimal viewpoints are defined through systematic sampling and evaluation of candidate viewpoints, after which large-scale training datasets are constructed via domain randomization. Moreover, a multimodal optimal viewpoint prediction network is developed, leveraging cross-attention to align and fuse multimodal features and directly predict camera pose adjustments. The proposed framework is instantiated in robotic grasping under viewpoint-constrained environments. Experimental results demonstrate that active perception guided by the framework significantly improves grasp success rates. Notably, real-world evaluations achieve nearly double the grasp success rate and enable seamless sim-to-real transfer without additional fine-tuning, demonstrating the effectiveness of the proposed framework.
|
https://arxiv.org/abs/2601.13639
|
Academic Papers
|
svg
|
3f65e8512b2ac2b54fec40309e18bec7a5528cd93825d2a67682f7996234a5e3
|
2026-01-21T00:00:00-05:00
|
Towards Token-Level Text Anomaly Detection
|
arXiv:2601.13644v1 Announce Type: new Abstract: Despite significant progress in text anomaly detection for web applications such as spam filtering and fake news detection, existing methods are fundamentally limited to document-level analysis, unable to identify which specific parts of a text are anomalous. We introduce token-level anomaly detection, a novel paradigm that enables fine-grained localization of anomalies within text. We formally define text anomalies at both document and token levels, and propose a unified detection framework that operates across multiple levels. To facilitate research in this direction, we collect and annotate three benchmark datasets spanning spam, reviews, and grammar errors with token-level labels. Experimental results demonstrate that our framework outperforms six baselines, opening new possibilities for precise anomaly localization in text. All the code and data are publicly available at https://github.com/charles-cao/TokenCore.
|
https://arxiv.org/abs/2601.13644
|
Academic Papers
|
svg
|
69417aee5951b8462b856ba50c327ce44389b878cc87383ea2f5327f5d4315f1
|
2026-01-21T00:00:00-05:00
|
Quadratic Upper Bound for Boosting Robustness
|
arXiv:2601.13645v1 Announce Type: new Abstract: Fast adversarial training (FAT) aims to enhance the robustness of models against adversarial attacks with reduced training time; however, FAT often suffers from compromised robustness due to insufficient exploration of the adversarial space. In this paper, we develop a loss function to mitigate the problem of degraded robustness under FAT. Specifically, we derive a quadratic upper bound (QUB) on the adversarial training (AT) loss function and propose to utilize the bound with existing FAT methods. Our experimental results show that applying the QUB loss to existing methods yields significant improvements in robustness. Furthermore, using various metrics, we demonstrate that this improvement likely results from the smoothened loss landscape of the resulting model.
|
https://arxiv.org/abs/2601.13645
|
Academic Papers
|
svg
|
064e534a6717ce79bdbaeb6b2a1a07dc0af2ed9f0ca9fe33a989ef4f21371f92
|
2026-01-21T00:00:00-05:00
|
Fusion Segment Transformer: Bi-Directional Attention Guided Fusion Network for AI-Generated Music Detection
|
arXiv:2601.13647v1 Announce Type: new Abstract: With the rise of generative AI technology, anyone can now easily create and deploy AI-generated music, which has heightened the need for technical solutions to address copyright and ownership issues. While existing work has mainly focused on short audio, the challenge of full-audio detection, which requires modeling long-term structure and context, remains insufficiently explored. To address this, we propose an improved version of the Segment Transformer, termed the Fusion Segment Transformer. As in our previous work, we extract content embeddings from short music segments using diverse feature extractors. Furthermore, we enhance the architecture for full-audio AI-generated music detection by introducing a Gated Fusion Layer that effectively integrates content and structural information, enabling the capture of long-term context. Experiments on the SONICS and AIME datasets show that our approach outperforms the previous model and recent baselines, achieving state-of-the-art results in AI-generated music detection.
|
https://arxiv.org/abs/2601.13647
|
Academic Papers
|
svg
|
6cdff121bba36986d34ce41ae5d7e12d9b47a2b0f1a4d0b038893bfb32d6b9a1
|
2026-01-21T00:00:00-05:00
|
Fairness or Fluency? An Investigation into Language Bias of Pairwise LLM-as-a-Judge
|
arXiv:2601.13649v1 Announce Type: new Abstract: Recent advances in Large Language Models (LLMs) have incentivized the development of LLM-as-a-judge, an application of LLMs where they are used as judges to decide the quality of a certain piece of text given a certain context. However, previous studies have demonstrated that LLM-as-a-judge can be biased towards different aspects of the judged texts, which often do not align with human preference. One of the identified biases is language bias, which indicates that the decision of LLM-as-a-judge can differ based on the language of the judged texts. In this paper, we study two types of language bias in pairwise LLM-as-a-judge: (1) performance disparity between languages when the judge is prompted to compare options from the same language, and (2) bias towards options written in major languages when the judge is prompted to compare options of two different languages. We find that for same-language judging, there exist significant performance disparities across language families, with European languages consistently outperforming African languages, and this bias is more pronounced in culturally-related subjects. For inter-language judging, we observe that most models favor English answers, and that this preference is influenced more by answer language than question language. Finally, we investigate whether language bias is in fact caused by low-perplexity bias, a previously identified bias of LLM-as-a-judge, and we find that while perplexity is slightly correlated with language bias, language bias cannot be fully explained by perplexity only.
|
https://arxiv.org/abs/2601.13649
|
Academic Papers
|
svg
|
7c250b7c5427dee3167706d2c1e8a6fe3e30ece594aebc1327c312ba6f30deb8
|
2026-01-21T00:00:00-05:00
|
Face-Voice Association with Inductive Bias for Maximum Class Separation
|
arXiv:2601.13651v1 Announce Type: new Abstract: Face-voice association is widely studied in multimodal learning and is typically approached by representing faces and voices with embeddings that are close for the same person and well separated from those of others. Previous work achieved this with loss functions. Recent advancements in classification have shown that the discriminative ability of embeddings can be strengthened by imposing maximum class separation as an inductive bias. This technique has never been used in the domain of face-voice association, and this work aims at filling this gap. More specifically, we develop a method for face-voice association that imposes maximum class separation among multimodal representations of different speakers as an inductive bias. Through quantitative experiments we demonstrate the effectiveness of our approach, showing that it achieves SOTA performance on two task formulations of face-voice association. Furthermore, we carry out an ablation study to show that imposing the inductive bias is most effective when combined with losses for inter-class orthogonality. To the best of our knowledge, this work is the first that applies and demonstrates the effectiveness of maximum class separation as an inductive bias in multimodal learning; it hence paves the way to establishing a new paradigm.
|
https://arxiv.org/abs/2601.13651
|
Academic Papers
|
svg
|
bb8392305e031acedf9adc44d8868413dbd7971b2897e534a6ecf7e883d3f743
|
2026-01-21T00:00:00-05:00
|
TimeART: Towards Agentic Time Series Reasoning via Tool-Augmentation
|
arXiv:2601.13653v1 Announce Type: new Abstract: Time series data widely exist in real-world cyber-physical systems. Though analyzing and interpreting them yields significant value, e.g., disaster prediction and financial risk control, current workflows mainly rely on human data scientists, which requires significant labor costs and lacks automation. To tackle this, we introduce TimeART, a framework fusing the analytical capability of strong out-of-the-box tools and the reasoning capability of Large Language Models (LLMs), which serves as a fully agentic data scientist for Time Series Question Answering (TSQA). To teach LLM-based Time Series Reasoning Models (TSRMs) strategic tool use, we also collect a 100k expert trajectory corpus called TimeToolBench. To enhance TSRMs' generalization capability, we then devise a four-stage training strategy, which boosts TSRMs through learning from their own early experiences and self-reflections. Experimentally, we train an 8B TSRM on TimeToolBench and equip it with the TimeART framework, and it achieves consistent state-of-the-art performance on multiple TSQA tasks, pioneering a novel approach towards agentic time series reasoning.
|
https://arxiv.org/abs/2601.13653
|
Academic Papers
|
svg
|
04fd3b6373fc793858b3952f4041c5d0a0ec72232a81c624c217fb6612c803c3
|
2026-01-21T00:00:00-05:00
|
Why Does the LLM Stop Computing: An Empirical Study of User-Reported Failures in Open-Source LLMs
|
arXiv:2601.13655v1 Announce Type: new Abstract: The democratization of open-source Large Language Models (LLMs) allows users to fine-tune and deploy models on local infrastructure but exposes them to a First Mile deployment landscape. Unlike black-box API consumption, the reliability of user-managed orchestration remains a critical blind spot. To bridge this gap, we conduct the first large-scale empirical study of 705 real-world failures from the open-source DeepSeek, Llama, and Qwen ecosystems. Our analysis reveals a paradigm shift: white-box orchestration relocates the reliability bottleneck from model algorithmic defects to the systemic fragility of the deployment stack. We identify three key phenomena: (1) Diagnostic Divergence: runtime crashes distinctively signal infrastructure friction, whereas incorrect functionality serves as a signature for internal tokenizer defects. (2) Systemic Homogeneity: Root causes converge across divergent series, confirming reliability barriers are inherent to the shared ecosystem rather than specific architectures. (3) Lifecycle Escalation: Barriers escalate from intrinsic configuration struggles during fine-tuning to compounded environmental incompatibilities during inference. Supported by our publicly available dataset, these insights provide actionable guidance for enhancing the reliability of the LLM landscape.
|
https://arxiv.org/abs/2601.13655
|
Academic Papers
|
svg
|
4f14980e5fa4e8acecb9885a2cc56ca9e95c2f5ed46eff58e1d81f69a283d52e
|
2026-01-21T00:00:00-05:00
|
Communication-Free Collective Navigation for a Swarm of UAVs via LiDAR-Based Deep Reinforcement Learning
|
arXiv:2601.13657v1 Announce Type: new Abstract: This paper presents a deep reinforcement learning (DRL) based controller for collective navigation of unmanned aerial vehicle (UAV) swarms in communication-denied environments, enabling robust operation in complex, obstacle-rich environments. Inspired by biological swarms where informed individuals guide groups without explicit communication, we employ an implicit leader-follower framework. In this paradigm, only the leader possesses goal information, while follower UAVs learn robust policies using only onboard LiDAR sensing, without requiring any inter-agent communication or leader identification. Our system utilizes LiDAR point clustering and an extended Kalman filter for stable neighbor tracking, providing reliable perception independent of external positioning systems. The core of our approach is a DRL controller, trained in GPU-accelerated Nvidia Isaac Sim, that enables followers to learn complex emergent behaviors - balancing flocking and obstacle avoidance - using only local perception. This allows the swarm to implicitly follow the leader while robustly addressing perceptual challenges such as occlusion and limited field-of-view. The robustness and sim-to-real transfer of our approach are confirmed through extensive simulations and challenging real-world experiments with a swarm of five UAVs, which successfully demonstrated collective navigation across diverse indoor and outdoor environments without any communication or external localization.
|
https://arxiv.org/abs/2601.13657
|
Academic Papers
|
svg
|
7a85dcb5e6867faac267a9dfac062515f30a771748b8f3a35872901293d91b41
|
2026-01-21T00:00:00-05:00
|
Beyond Known Facts: Generating Unseen Temporal Knowledge to Address Data Contamination in LLM Evaluation
|
arXiv:2601.13658v1 Announce Type: new Abstract: The automatic extraction of information is important for populating large web knowledge bases such as Wikidata. The temporal version of that task, temporal knowledge graph extraction (TKGE), involves extracting temporally grounded facts from text, represented as semantic quadruples (subject, relation, object, timestamp). Many recent systems take advantage of large language models (LLMs), which are becoming a new cornerstone of the web due to their performance on many tasks across the natural language processing (NLP) field. Despite the importance of TKGE, existing datasets for training and evaluation remain scarce, and contamination of evaluation data is an unaddressed issue, potentially inflating LLMs' perceived performance due to overlaps between training and evaluation sets. To mitigate these challenges, we propose a novel synthetic evaluation dataset constructed from predicted future, previously unseen temporal facts, thereby eliminating contamination and enabling robust and unbiased benchmarking. Our dataset creation involves a two-step approach: (1) Temporal Knowledge Graph Forecasting (TKGF) generates plausible future quadruples, which are subsequently filtered to adhere to the original knowledge base schema; (2) LLMs perform quadruple-to-text generation, creating semantically aligned textual descriptions. We benchmark Extract, Define and Canonicalize (EDC), a state-of-the-art LLM-based extraction framework, demonstrating that LLM performance decreases when evaluated on our dataset compared to a dataset of known facts. We publicly release our dataset consisting of 4.2K future quadruples and corresponding textual descriptions, along with the generation methodology, enabling continuous creation of unlimited future temporal datasets to serve as long-term, contamination-free benchmarks for TKGE.
|
https://arxiv.org/abs/2601.13658
|
Academic Papers
|
svg
|
c2c86e068a61c108dff7902d1d2dce225626e9b16b4b994a381d3aa1d05c6dc2
|
2026-01-21T00:00:00-05:00
|
Temporal-Spatial Decouple before Act: Disentangled Representation Learning for Multimodal Sentiment Analysis
|
arXiv:2601.13659v1 Announce Type: new Abstract: Multimodal Sentiment Analysis integrates linguistic, visual, and acoustic modalities. Mainstream approaches based on modality-invariant and modality-specific factorization, or on complex fusion, still rely on spatiotemporally mixed modeling. This ignores spatiotemporal heterogeneity, leading to spatiotemporal information asymmetry and thus limited performance. Hence, we propose TSDA (Temporal-Spatial Decouple before Act), which explicitly decouples each modality into temporal dynamics and spatial structural context before any interaction. For every modality, a temporal encoder and a spatial encoder project signals into separate temporal and spatial spaces. Factor-Consistent Cross-Modal Alignment then aligns temporal features only with their temporal counterparts across modalities, and spatial features only with their spatial counterparts. Factor-specific supervision and decorrelation regularization reduce cross-factor leakage while preserving complementarity. A Gated Recouple module subsequently recouples the aligned streams for the downstream task. Extensive experiments show that TSDA outperforms baselines. Ablation studies confirm the necessity and interpretability of the design.
|
https://arxiv.org/abs/2601.13659
|
Academic Papers
|
svg
|
a1a639a71a47f724d05c957eddd0a3159de0a991f80fd8981fac22a3c72fa8ae
|
2026-01-21T00:00:00-05:00
|
Reinforcement Learning for Opportunistic Routing in Software-Defined LEO-Terrestrial Systems
|
arXiv:2601.13662v1 Announce Type: new Abstract: The proliferation of large-scale low Earth orbit (LEO) satellite constellations is driving the need for intelligent routing strategies that can effectively deliver data to terrestrial networks under rapidly time-varying topologies and intermittent gateway visibility. Leveraging the global control capabilities of a geostationary (GEO)-resident software-defined networking (SDN) controller, we introduce opportunistic routing, which aims to minimize delivery delay by forwarding packets to any currently available ground gateways rather than fixed destinations. This makes it a promising approach for achieving low-latency and robust data delivery in highly dynamic LEO networks. Specifically, we formulate a constrained stochastic optimization problem and employ a residual reinforcement learning framework to optimize opportunistic routing for reducing transmission delay. Simulation results over multiple days of orbital data demonstrate that our method achieves significant improvements in queue length reduction compared to classical backpressure and other well-known queueing algorithms.
|
https://arxiv.org/abs/2601.13662
|
Academic Papers
|
svg
|
9b10853dea962c39b0469fa3372119d53329ffccf566a49b56ff56495cbd3ff6
|
2026-01-21T00:00:00-05:00
|
On the stability, complexity, and distribution of similarity classes of the longest edge bisection process for triangles
|
arXiv:2601.13663v1 Announce Type: new Abstract: The Longest Edge Bisection (LEB) of a triangle is performed by joining the midpoint of its longest edge to the opposite vertex. Applying this procedure iteratively produces an infinite family of triangles. Surprisingly, a classical result of Adler (1983) shows that for any initial triangle, this infinite family falls into finitely many similarity classes. While the set of classes is finite, we show that a far smaller, stable subset of ``fat'' triangles, called {\bf terminal quadruples}, effectively dominates the final mesh structure. We prove the following asymptotic area distribution result: for every initial triangle, the portion of area occupied by terminal quadruples tends to one, with the convergence occurring at an exponential rate. In fact, we provide the precise distribution of triangles in every step. We introduce the {\bf bisection graph} and use spectral methods to establish this result. Given this dominance, we provide a complete characterization of triangles possessing a single terminal quadruple, while conversely exhibiting a sequence of triangles with an unbounded number of terminal quadruples. Furthermore, we reveal several fundamental geometric properties of the points of a terminal quadruple, laying the groundwork for studying the geometric distribution of the entire orbit. Our analysis leverages the hyperbolic geometry framework of Perdomo and Plaza (2014) and refines their techniques.
|
https://arxiv.org/abs/2601.13663
|
Academic Papers
|
svg
|
b99002644d5e7f2854558e9f43a69d54769912f03fe918f3f671ac288cdb391c
|
2026-01-21T00:00:00-05:00
|
VIAFormer: Voxel-Image Alignment Transformer for High-Fidelity Voxel Refinement
|
arXiv:2601.13664v1 Announce Type: new Abstract: We propose VIAFormer, a Voxel-Image Alignment Transformer model designed for Multi-view Conditioned Voxel Refinement--the task of repairing incomplete noisy voxels using calibrated multi-view images as guidance. Its effectiveness stems from a synergistic design: an Image Index that provides explicit 3D spatial grounding for 2D image tokens, a Correctional Flow objective that learns a direct voxel-refinement trajectory, and a Hybrid Stream Transformer that enables robust cross-modal fusion. Experiments show that VIAFormer establishes a new state of the art in correcting both severe synthetic corruptions and realistic artifacts in voxel shapes obtained from powerful Vision Foundation Models. Beyond benchmarking, we demonstrate VIAFormer as a practical and reliable bridge in real-world 3D creation pipelines, paving the way for voxel-based methods to thrive in the large-model, big-data era.
|
https://arxiv.org/abs/2601.13664
|
Academic Papers
|
svg
|
1992e357b34ae1e2486e321cb80609afadb2188edb20d21d5b6364a35e38fe19
|
2026-01-21T00:00:00-05:00
|
Transformer based Multi-task Fusion Network for Food Spoilage Detection and Shelf life Forecasting
|
arXiv:2601.13665v1 Announce Type: new Abstract: Food wastage is one of the critical challenges in the agricultural supply chain, and accurate, effective spoilage detection can help to reduce it. Forecasting spoilage information is also highly important, as it supports the longevity of supply chain management in agriculture. This motivated us to propose fusion-based architectures combining a CNN with LSTM and DeiT transformer models for the following tasks simultaneously: (i) vegetable classification, (ii) food spoilage detection, and (iii) shelf life forecasting. We developed a dataset by capturing images of vegetables from their fresh state until they were completely spoiled. From the experimental analysis it is concluded that the proposed fusion architectures CNN+CNN-LSTM and CNN+DeiT Transformer outperformed several deep learning models such as CNN, VGG16, ResNet50, Capsule Networks, and DeiT Transformers. Overall, CNN+DeiT Transformer yielded F1-scores of 0.98 and 0.61 in vegetable classification and spoilage detection, respectively, and a mean squared error (MSE) of 3.58 and a symmetric mean absolute percentage error (SMAPE) of 41.66% in spoilage forecasting. Further, the reliability of the fusion models was validated on noisy images and integrated with LIME to visualize the model decisions.
|
https://arxiv.org/abs/2601.13665
|
Academic Papers
|
svg
|
51e718b95b340bf31a44ede5e960d8c94cf392a1b868810f20a3cba521a8cbca
|
2026-01-21T00:00:00-05:00
|
CommunityBench: Benchmarking Community-Level Alignment across Diverse Groups and Tasks
|
arXiv:2601.13669v1 Announce Type: new Abstract: Alignment of large language models (LLMs) ensures that model behavior reflects human values. Existing alignment strategies primarily follow two paths: one assumes a universal value set for a unified goal (i.e., one-size-fits-all), while the other treats every individual as unique to customize models (i.e., individual-level). However, assuming a monolithic value space marginalizes minority norms, while tailoring individual models is prohibitively expensive. Recognizing that human society is organized into social clusters with high intra-group value alignment, we propose community-level alignment as a "middle ground". Practically, we introduce CommunityBench, the first large-scale benchmark for community-level alignment evaluation, featuring four tasks grounded in Common Identity and Common Bond theory. With CommunityBench, we conduct a comprehensive evaluation of various foundation models, revealing that current LLMs exhibit limited capacity to model community-specific preferences. Furthermore, we investigate the potential of community-level alignment in facilitating individual modeling, providing a promising direction for scalable and pluralistic alignment.
|
https://arxiv.org/abs/2601.13669
|
Academic Papers
|
svg
|
4a9355177df80dc1d6d457549ca86f19f5308df0443bbe80452640477e93c106
|
2026-01-21T00:00:00-05:00
|
The Orchestration of Multi-Agent Systems: Architectures, Protocols, and Enterprise Adoption
|
arXiv:2601.13671v1 Announce Type: new Abstract: Orchestrated multi-agent systems represent the next stage in the evolution of artificial intelligence, where autonomous agents collaborate through structured coordination and communication to achieve complex, shared objectives. This paper consolidates and formalizes the technical composition of such systems, presenting a unified architectural framework that integrates planning, policy enforcement, state management, and quality operations into a coherent orchestration layer. Another primary contribution of this work is the in-depth technical delineation of two complementary communication protocols - the Model Context Protocol, which standardizes how agents access external tools and contextual data, and the Agent2Agent protocol, which governs peer coordination, negotiation, and delegation. Together, these protocols establish an interoperable communication substrate that enables scalable, auditable, and policy-compliant reasoning across distributed agent collectives. Beyond protocol design, the paper details how orchestration logic, governance frameworks, and observability mechanisms collectively sustain system coherence, transparency, and accountability. By synthesizing these elements into a cohesive technical blueprint, this paper provides comprehensive treatments of orchestrated multi-agent systems - bridging conceptual architectures with implementation-ready design principles for enterprise-scale AI ecosystems.
|
https://arxiv.org/abs/2601.13671
|
Academic Papers
|
svg
|
812e4725986c0e628c722aff3f55a2766f8ef79e66a96311e03c3fa5acd48c2c
|
2026-01-21T00:00:00-05:00
|
Autoregressive deep learning for real-time simulation of soft tissue dynamics during virtual neurosurgery
|
arXiv:2601.13676v1 Announce Type: new Abstract: Accurate simulation of brain deformation is a key component for developing realistic, interactive neurosurgical simulators, as complex nonlinear deformations must be captured to ensure realistic tool-tissue interactions. However, traditional numerical solvers often fall short in meeting real-time performance requirements. To overcome this, we introduce a deep learning-based surrogate model that efficiently simulates transient brain deformation caused by continuous interactions between surgical instruments and the virtual brain geometry. Building on Universal Physics Transformers, our approach operates directly on large-scale mesh data and is trained on an extensive dataset generated from nonlinear finite element simulations, covering a broad spectrum of temporal instrument-tissue interaction scenarios. To reduce the accumulation of errors in autoregressive inference, we propose a stochastic teacher forcing strategy applied during model training. Specifically, training consists of short stochastic rollouts in which the proportion of ground truth inputs is gradually decreased in favor of model-generated predictions. Our results show that the proposed surrogate model achieves accurate and efficient predictions across a range of transient brain deformation scenarios, scaling to meshes with up to 150,000 nodes. The introduced stochastic teacher forcing technique substantially improves long-term rollout stability, reducing the maximum prediction error from 6.7 mm to 3.5 mm. We further integrate the trained surrogate model into an interactive neurosurgical simulation environment, achieving runtimes below 10 ms per simulation step on consumer-grade inference hardware. Our proposed deep learning framework enables rapid, smooth and accurate biomechanical simulations of dynamic brain tissue deformation, laying the foundation for realistic surgical training environments.
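The stochastic teacher forcing strategy described above can be sketched as follows; the function names and the linear decay schedule are illustrative assumptions, not the authors' exact implementation:

```python
import random

def teacher_forcing_schedule(step, total_steps, start=1.0, end=0.0):
    """Linearly decay the teacher-forcing probability from start to end
    over training (the linear form is an assumed example schedule)."""
    frac = min(step / max(total_steps, 1), 1.0)
    return start + (end - start) * frac

def stochastic_rollout_inputs(ground_truth, model_preds, tf_prob):
    """Build the input sequence for one short rollout: at each step, feed
    the ground-truth state with probability tf_prob, otherwise feed the
    model's own prediction, so the proportion of ground truth shrinks
    as tf_prob decays."""
    inputs = []
    for gt, pred in zip(ground_truth, model_preds):
        inputs.append(gt if random.random() < tf_prob else pred)
    return inputs
```

Decaying `tf_prob` across rollouts exposes the model to its own predictions during training, which is what curbs error accumulation at autoregressive inference time.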
|
https://arxiv.org/abs/2601.13676
|
Academic Papers
|
svg
|
2eb70f492db27edfd691b9d2e13263005307cb9d3e703c033c26cea4d62f56f8
|
2026-01-21T00:00:00-05:00
|
Finally Outshining the Random Baseline: A Simple and Effective Solution for Active Learning in 3D Biomedical Imaging
|
arXiv:2601.13677v1 Announce Type: new Abstract: Active learning (AL) has the potential to drastically reduce annotation costs in 3D biomedical image segmentation, where expert labeling of volumetric data is both time-consuming and expensive. Yet, existing AL methods are unable to consistently outperform improved random sampling baselines adapted to 3D data, leaving the field without a reliable solution. We introduce Class-stratified Scheduled Power Predictive Entropy (ClaSP PE), a simple and effective query strategy that addresses two key limitations of standard uncertainty-based AL methods: class imbalance and redundancy in early selections. ClaSP PE combines class-stratified querying to ensure coverage of underrepresented structures and log-scale power noising with a decaying schedule to enforce query diversity in early-stage AL and encourage exploitation later. In our evaluation on 24 experimental settings using four 3D biomedical datasets within the comprehensive nnActive benchmark, ClaSP PE is the only method that generally outperforms improved random baselines, achieving statistically significant gains in segmentation quality while remaining annotation-efficient. Furthermore, we explicitly simulate real-world application by testing our method on four previously unseen datasets without manual adaptation, where all experiment parameters are set according to predefined guidelines. The results confirm that ClaSP PE robustly generalizes to novel tasks without requiring dataset-specific tuning. Within the nnActive framework, we present compelling evidence that an AL method can consistently outperform random baselines adapted to 3D segmentation, in terms of both performance and annotation efficiency, in a realistic, close-to-production scenario. Our open-source implementation and clear deployment guidelines make it readily applicable in practice. Code is at https://github.com/MIC-DKFZ/nnActive.
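A rough sketch of the two ingredients, under the assumption that "log-scale power noising" adds β-scaled log-uniform noise to log-entropy scores and that stratification is a round-robin over pseudo-labels; both are assumptions for illustration, and the authors' exact formulation is in the released nnActive code:

```python
import numpy as np

def clasp_pe_query(probs, pseudo_labels, budget, beta, rng):
    """Illustrative class-stratified, power-noised entropy querying.

    probs: (N, C) predicted class probabilities for the unlabeled pool.
    pseudo_labels: (N,) predicted class per sample, used for stratification.
    beta: noise temperature -- small early (more exploration), large later
    (exploitation), mirroring the decaying schedule."""
    pseudo_labels = np.asarray(pseudo_labels)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # log-scale power noising: perturb log-entropy with scaled log-uniform noise
    noise = np.log(rng.uniform(size=len(entropy)) + 1e-12) / beta
    scores = np.log(entropy + 1e-12) + noise
    # stratify: take top-scoring samples round-robin across predicted classes
    classes = np.unique(pseudo_labels)
    per_class = {c: np.argsort(-scores[pseudo_labels == c]) for c in classes}
    idx_by_class = {c: np.flatnonzero(pseudo_labels == c) for c in classes}
    selected, slot = [], 0
    while len(selected) < budget:
        c = classes[slot % len(classes)]
        rank = slot // len(classes)
        if rank < len(per_class[c]):
            selected.append(int(idx_by_class[c][per_class[c][rank]]))
        slot += 1
        if slot > budget * len(classes):  # safety stop if classes are exhausted
            break
    return selected[:budget]
```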
|
https://arxiv.org/abs/2601.13677
|
Academic Papers
|
svg
|
636ae61cc826d040c85895c97ef9daf42ab0e0a5634f6f6f072a74ad729ea977
|
2026-01-21T00:00:00-05:00
|
Ultra-Lightweight Network for Ship-Radiated Sound Classification on Embedded Deployment
|
arXiv:2601.13679v1 Announce Type: new Abstract: This letter presents ShuffleFAC, a lightweight acoustic model for ship-radiated sound classification in resource-constrained maritime monitoring systems. ShuffleFAC integrates Frequency-Aware convolution into an efficiency-oriented backbone using separable convolution, point-wise group convolution, and channel shuffle, enabling frequency-sensitive feature extraction with low computational cost. Experiments on the DeepShip dataset show that ShuffleFAC achieves competitive performance with substantially reduced complexity. In particular, ShuffleFAC ($\gamma=16$) attains a macro F1-score of 71.45 $\pm$ 1.18% using 39K parameters and 3.06M MACs, and achieves an inference latency of 6.05 $\pm$ 0.95ms on a Raspberry Pi. Compared with MicroNet0, it improves macro F1-score by 1.82% while reducing model size by 9.7x and latency by 2.5x. These results indicate that ShuffleFAC is suitable for real-time embedded underwater acoustic target recognition (UATR).
|
https://arxiv.org/abs/2601.13679
|
Academic Papers
|
svg
|
b6feb1ef7e6903964f41a788768aa72f6aba7ff1edfb8385ca63cf4821542516
|
2026-01-21T00:00:00-05:00
|
ORCA -- An Automated Threat Analysis Pipeline for O-RAN Continuous Development
|
arXiv:2601.13681v1 Announce Type: new Abstract: The Open-Radio Access Network (O-RAN) integrates numerous software components in a cloud-like deployment, opening the radio access network to previously unconsidered security threats. With the ever-evolving threat landscape, integrating security practices through a DevSecOps approach is essential for fast and secure releases. Current vulnerability assessment practices often rely on manual, labor-intensive, and subjective investigations, leading to inconsistencies in the threat analysis. To mitigate these issues, we establish an automated pipeline that leverages Natural Language Processing (NLP) to minimize human intervention and associated biases. By mapping real-world vulnerabilities to predefined threat lists with a standardized input format, our approach is the first to enable iterative, quantitative, and efficient assessments, generating reliable threat scores for both individual vulnerabilities and entire system components within O-RAN. We illustrate the effectiveness of our framework through an example implementation for O-RAN, showcasing how continuous security testing can integrate into automated testing pipelines to address the unique security challenges of this paradigm shift in telecommunications.
|
https://arxiv.org/abs/2601.13681
|
Academic Papers
|
svg
|
b4f48c35acb62e963cfcd9d6d8dc7c11febcefe5b4ae10a1258016be23b74c43
|
2026-01-21T00:00:00-05:00
|
CodeContests-O: Powering LLMs via Feedback-Driven Iterative Test Case Generation
|
arXiv:2601.13682v1 Announce Type: new Abstract: The rise of reasoning models necessitates large-scale verifiable data, for which programming tasks serve as an ideal source. However, while competitive programming platforms provide abundant problems and solutions, high-quality test cases for verification remain scarce. Existing approaches attempt to synthesize test cases using Large Language Models (LLMs), but rely solely on the model's intrinsic generation capabilities without external feedback, frequently resulting in insufficiently diverse cases. To address this limitation, we propose a $\textbf{Feedback-Driven Iterative Framework}$ for comprehensive test case construction. Specifically, our method leverages the LLM to generate initial test cases, executes them against known correct and incorrect solutions, and utilizes the failed results as feedback to guide the LLM in refining the test cases toward high fidelity and discriminability. We then apply this method to the CodeContests dataset to construct an optimized high-quality derivative, $\textbf{CodeContests-O}$. Evaluating against the entire pool of solutions ($1.1 \times 10^7$ in total), our dataset achieves an average True Positive Rate (TPR) of $89.37\%$ and True Negative Rate (TNR) of $90.89\%$, significantly outperforming the CodeContests and CodeContests+ by margins of $4.32\%$ and $9.37\%$, respectively. Furthermore, fine-tuning the Qwen2.5-7B model on CodeContests-O results in a $9.52\%$ improvement on LiveCodeBench (Pass@1). Experiments demonstrate the effectiveness of our framework and the quality of CodeContests-O. To support reproducibility and facilitate future research, we release the $\href{https://github.com/cai-jianfeng/CodeContests-O}{code}$ and $\href{https://huggingface.co/datasets/caijanfeng/CodeContests-O}{dataset}$.
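The feedback-driven iterative loop described above can be sketched as follows; `generate` and `execute` are hypothetical callables standing in for the LLM test-case generator and the solution-execution judge, and the report format is an assumption:

```python
def refine_test_cases(generate, execute, max_iters=5):
    """Iteratively refine LLM-generated test cases using execution feedback.

    Each round: generate tests (conditioned on prior failures), run them
    against known correct and incorrect solutions, and stop once no
    failures remain -- i.e., the tests accept correct solutions and
    reject incorrect ones."""
    feedback = None
    tests = []
    for _ in range(max_iters):
        tests = generate(feedback)
        report = execute(tests)  # assumed shape: {"failures": [...]}
        if not report["failures"]:
            break
        feedback = report["failures"]
    return tests
```

In the paper's setting, "failures" would cover both correct solutions rejected by a test (hurting TPR) and incorrect solutions accepted by it (hurting TNR).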
|
https://arxiv.org/abs/2601.13682
|
Academic Papers
|
svg
|
747f3c289fe9c8108f284642b2edea2ad7231e1296860b9c7ac9d96cae3f8089
|
2026-01-21T00:00:00-05:00
|
Dynamic Differential Linear Attention: Enhancing Linear Diffusion Transformer for High-Quality Image Generation
|
arXiv:2601.13683v1 Announce Type: new Abstract: Diffusion transformers (DiTs) have emerged as a powerful architecture for high-fidelity image generation, yet the quadratic cost of self-attention poses a major scalability bottleneck. To address this, linear attention mechanisms have been adopted to reduce computational cost; unfortunately, the resulting linear diffusion transformers (LiTs) models often come at the expense of generative performance, frequently producing over-smoothed attention weights that limit expressiveness. In this work, we introduce Dynamic Differential Linear Attention (DyDiLA), a novel linear attention formulation that enhances the effectiveness of LiTs by mitigating the oversmoothing issue and improving generation quality. Specifically, the novelty of DyDiLA lies in three key designs: (i) dynamic projection module, which facilitates the decoupling of token representations by learning with dynamically assigned knowledge; (ii) dynamic measure kernel, which provides a better similarity measurement to capture fine-grained semantic distinctions between tokens by dynamically assigning kernel functions for token processing; and (iii) token differential operator, which enables more robust query-to-key retrieval by calculating the differences between the tokens and their corresponding information redundancy produced by dynamic measure kernel. To capitalize on DyDiLA, we introduce a refined LiT, termed DyDi-LiT, that systematically incorporates our advancements. Extensive experiments show that DyDi-LiT consistently outperforms current state-of-the-art (SOTA) models across multiple metrics, underscoring its strong practical potential.
|
https://arxiv.org/abs/2601.13683
|
Academic Papers
|
svg
|
2323360c5a11d7875d3aa1736de5b26da596b17bb42f98ac7fbd65b8d2df2c66
|
2026-01-21T00:00:00-05:00
|
HeteroCache: A Dynamic Retrieval Approach to Heterogeneous KV Cache Compression for Long-Context LLM Inference
|
arXiv:2601.13684v1 Announce Type: new Abstract: The linear memory growth of the KV cache poses a significant bottleneck for LLM inference in long-context tasks. Existing static compression methods often fail to preserve globally important information, principally because they overlook the attention drift phenomenon where token significance evolves dynamically. Although recent dynamic retrieval approaches attempt to address this issue, they typically suffer from coarse-grained caching strategies and incur high I/O overhead due to frequent data transfers. To overcome these limitations, we propose HeteroCache, a training-free dynamic compression framework. Our method is built on two key insights: attention heads exhibit diverse temporal heterogeneity, and there is significant spatial redundancy among heads within the same layer. Guided by these insights, HeteroCache categorizes heads based on stability and redundancy. Consequently, we apply a fine-grained weighting strategy that allocates larger cache budgets to heads with rapidly shifting attention to capture context changes, thereby addressing the inefficiency of coarse-grained strategies. Furthermore, we employ a hierarchical storage mechanism in which a subset of representative heads monitors attention shift, and trigger an asynchronous, on-demand retrieval of contexts from the CPU, effectively hiding I/O latency. Finally, experiments demonstrate that HeteroCache achieves state-of-the-art performance on multiple long-context benchmarks and accelerates decoding by up to $3\times$ compared to the original model in the 224K context. Our code will be open-source.
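The fine-grained weighting strategy can be sketched as a proportional allocation of the KV-cache budget over per-head attention-drift scores; the proportional rule and all names here are assumptions for illustration, not the paper's exact formula:

```python
import numpy as np

def allocate_head_budgets(drift_scores, total_budget, floor=1):
    """Split a total KV-cache budget across attention heads in proportion
    to each head's attention-drift score, so heads with rapidly shifting
    attention receive larger budgets; `floor` guarantees every head keeps
    a minimal cache."""
    drift = np.asarray(drift_scores, dtype=float)
    weights = drift / drift.sum()
    budgets = np.maximum(np.floor(weights * total_budget).astype(int), floor)
    # hand any rounding remainder to the highest-drift heads
    remainder = total_budget - budgets.sum()
    for i in np.argsort(-drift)[:max(remainder, 0)]:
        budgets[i] += 1
    return budgets
```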
|
https://arxiv.org/abs/2601.13684
|
Academic Papers
|
svg
|
bb02fd702e9d6b3220fda992a3522538ee6c8f71a0d9c2e0828bd9dcbd8581ee
|
2026-01-21T00:00:00-05:00
|
Understanding Mental States to Guide Social Influence in Multi-Person Group Dialogue
|
arXiv:2601.13687v1 Announce Type: new Abstract: Existing dynamic Theory of Mind (ToM) benchmarks mostly place language models in a passive role: the model reads a sequence of connected scenarios and reports what people believe, feel, intend, and do as these states change. In real social interaction, ToM is also used for action: a speaker plans what to say in order to shift another person's mental-state trajectory toward a goal. We introduce SocialMindChange, a benchmark that moves from tracking minds to changing minds in social interaction. Each instance defines a social context with four characters and five connected scenes. The model plays one character and generates dialogue across the five scenes to reach the target while remaining consistent with the evolving states of all participants. SocialMindChange also includes selected higher-order states. Using a structured four-step framework, we construct 1,200 social contexts, covering 6,000 scenarios and over 90,000 questions, each validated for realism and quality. Evaluations on ten state-of-the-art LLMs show that their average performance is 54.2% below human performance. This gap suggests that current LLMs still struggle to maintain and change mental-state representations across long, linked interactions.
|
https://arxiv.org/abs/2601.13687
|
Academic Papers
|
svg
|
c39aa7792359e200b969b1ffedb17245a1db8398f321417353d2eef825b3459b
|
2026-01-21T00:00:00-05:00
|
Criminator: An Easy-to-Use XR "Crime Animator" for Rapid Reconstruction and Analysis of Dynamic Crime Scenes
|
arXiv:2601.13689v1 Announce Type: new Abstract: Law enforcement authorities are increasingly interested in 3D modelling for virtual crime scene reconstruction, enabling offline analysis without the cost and contamination risk of on-site investigation. Past work has demonstrated spatial relationships through static modelling but validating the sequence of events in dynamic scenarios is crucial for solving a case. Yet, animation tools are not well suited to crime scene reconstruction, and complex for non-experts in 3D modelling/animation. Through a co-design process with criminology experts, we designed "Criminator"-a methodological framework and XR tool that simplifies animation authoring. We evaluated this tool with participants trained in criminology (n=6) and untrained individuals (n=12). Both groups were able to successfully complete the character animation tasks and provided high usability ratings for observation tasks. Criminator has potential for hypothesis testing, demonstration, sense-making, and training. Challenges remain in how such a tool fits into the entire judicial process, with questions about including animations as evidence.
|
https://arxiv.org/abs/2601.13689
|
Academic Papers
|
svg
|
5d76faa53f6fd592cb223c398f603f0694cfca99eeb404221e0624d0b03fc4d2
|
2026-01-21T00:00:00-05:00
|
Dr. Assistant: Enhancing Clinical Diagnostic Inquiry via Structured Diagnostic Reasoning Data and Reinforcement Learning
|
arXiv:2601.13690v1 Announce Type: new Abstract: Clinical Decision Support Systems (CDSSs) provide reasoning and inquiry guidance for physicians, yet they face notable challenges, including high maintenance costs and low generalization capability. Recently, Large Language Models (LLMs) have been widely adopted in healthcare due to their extensive knowledge reserves, retrieval, and communication capabilities. While LLMs show promise and excel at medical benchmarks, their diagnostic reasoning and inquiry skills are constrained. To mitigate this issue, we propose (1) a Clinical Diagnostic Reasoning Data (CDRD) structure to capture abstract clinical reasoning logic, together with a pipeline for its construction, and (2) Dr. Assistant, a clinical diagnostic model equipped with clinical reasoning and inquiry skills. Its training involves a two-stage process: SFT, followed by RL with a tailored reward function. We also introduce a benchmark to evaluate both diagnostic reasoning and inquiry. Our experiments demonstrate that Dr. Assistant outperforms open-source models and achieves performance competitive with closed-source models, providing an effective solution for clinical diagnostic inquiry guidance.
|
https://arxiv.org/abs/2601.13690
|
Academic Papers
|
svg
|
95a09957c776347d0ebc1858bdafa3a97b80dd6383a58f82476f5ca1d1a06c5a
|
2026-01-21T00:00:00-05:00
|
Generative Intent Prediction Agentic AI empowered Edge Service Function Chain Orchestration
|
arXiv:2601.13694v1 Announce Type: new Abstract: With the development of artificial intelligence (AI), Agentic AI (AAI) based on large language models (LLMs) is gradually being applied to network management. However, in edge network environments, high user mobility and implicit service intents pose significant challenges to the passive and reactive management of traditional AAI. To address the limitations of existing approaches in handling dynamic demands and predicting users' implicit intents, in this paper we propose an edge service function chain (SFC) orchestration framework empowered by a Generative Intent Prediction Agent (GIPA). Our GIPA aims to shift the paradigm from passive execution to proactive prediction and orchestration. First, we construct a multidimensional intent space that includes functional preferences, QoS sensitivity, and resource requirements, enabling the mapping from unstructured natural language to quantifiable physical resource demands. Second, to cope with the complexity and randomness of intent sequences, we design an intent prediction model based on a Generative Diffusion Model (GDM), which reconstructs users' implicit intents from multidimensional context through a reverse denoising process. Finally, the predicted implicit intents are embedded as global prompts into the SFC orchestration model to guide the network in proactively and ahead-of-time optimizing SFC deployment strategies. Experiment results show that GIPA outperforms existing baseline methods in highly concurrent and highly dynamic scenarios.
|
https://arxiv.org/abs/2601.13694
|
Academic Papers
|
svg
|
2cc461cc68827f51c9c88098ad29169c96b93c23d3c072a093e2021c1fe20135
|
2026-01-21T00:00:00-05:00
|
OptiSQL: Executable SQL Generation from Optical Tokens
|
arXiv:2601.13695v1 Announce Type: new Abstract: Executable SQL generation is typically studied in text-to-SQL settings, where tables are provided as fully linearized textual schemas and contents. While effective, this formulation assumes access to structured text and incurs substantial token overhead, which is misaligned with many real-world scenarios where tables appear as visual artifacts in documents or webpages. We investigate whether compact optical representations can serve as an efficient interface for executable semantic parsing. We present OptiSQL, a vision-driven framework that generates executable SQL directly from table images and natural language questions using compact optical tokens. OptiSQL leverages an OCR-oriented visual encoder to compress table structure and content into a small set of optical tokens and fine-tunes a pretrained decoder for SQL generation while freezing the encoder to isolate representation sufficiency. Experiments on a visualized version of Spider 2.0-Snow show that OptiSQL retains strong execution accuracy while reducing table input tokens by an order of magnitude. Robustness analyses further demonstrate that optical tokens preserve essential structural information under visual perturbations.
|
https://arxiv.org/abs/2601.13695
|
Academic Papers
|
svg
|
ff527609b3785e9b8434c633837b832d41cd60b540d51e066aa7082868ded765
|
2026-01-21T00:00:00-05:00
|
Uncertainty-Aware Gradient Signal-to-Noise Data Selection for Instruction Tuning
|
arXiv:2601.13697v1 Announce Type: new Abstract: Instruction tuning is a standard paradigm for adapting large language models (LLMs), but modern instruction datasets are large, noisy, and redundant, making full-data fine-tuning costly and often unnecessary. Existing data selection methods either build expensive gradient datastores or assign static scores from a weak proxy, largely ignoring evolving uncertainty, and thus missing a key source of LLM interpretability. We propose GRADFILTERING, an objective-agnostic, uncertainty-aware data selection framework that utilizes a small GPT-2 proxy with a LoRA ensemble and aggregates per-example gradients into a Gradient Signal-to-Noise Ratio (G-SNR) utility. Our method matches or surpasses random subsets and strong baselines in most LLM-as-a-judge evaluations as well as in human assessment. Moreover, GRADFILTERING-selected subsets converge faster than competitive filters under the same compute budget, reflecting the benefit of uncertainty-aware scoring.
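A minimal sketch of a Gradient Signal-to-Noise Ratio utility, assuming it is computed from per-example gradients produced by the small proxy's LoRA ensemble; the exact aggregation used in the paper is not specified in the abstract:

```python
import numpy as np

def gsnr(grads):
    """Gradient signal-to-noise ratio over an ensemble of gradients for
    one example (one row per ensemble member). A high value means the
    ensemble members agree on the update direction (strong signal, low
    epistemic uncertainty); a value near zero means they cancel out."""
    grads = np.asarray(grads, dtype=float)
    mean = grads.mean(axis=0)   # per-coordinate signal
    var = grads.var(axis=0)     # per-coordinate noise across members
    return float(np.sum(mean ** 2) / (np.sum(var) + 1e-12))
```

Ranking examples by this utility and keeping the top fraction is one way such a score could drive uncertainty-aware subset selection.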
|
https://arxiv.org/abs/2601.13697
|
Academic Papers
|
svg
|
61e66af673dc7707961c82c8c7a2b7ebf2b9c22ffa239c5225bc5bc7ae81b68c
|
2026-01-21T00:00:00-05:00
|
Does Privacy Always Harm Fairness? Data-Dependent Trade-offs via Chernoff Information Neural Estimation
|
arXiv:2601.13698v1 Announce Type: new Abstract: Fairness and privacy are two vital pillars of trustworthy machine learning. Despite extensive research on these individual topics, the relationship between fairness and privacy has received significantly less attention. In this paper, we utilize the information-theoretic measure Chernoff Information to highlight the data-dependent nature of the relationship among the triad of fairness, privacy, and accuracy. We first define Noisy Chernoff Difference, a tool that allows us to analyze the relationship among the triad simultaneously. We then show that for synthetic data, this value behaves in 3 distinct ways (depending on the distribution of the data). We highlight the data distributions involved in these cases and explore their fairness and privacy implications. Additionally, we show that Noisy Chernoff Difference acts as a proxy for the steepness of the fairness-accuracy curves. Finally, we propose a method for estimating Chernoff Information on data from unknown distributions and utilize this framework to examine the triad dynamic on real datasets. This work builds towards a unified understanding of the fairness-privacy-accuracy relationship and highlights its data-dependent nature.
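As background (the textbook definition, not taken from the abstract): the Chernoff information between two distributions $P$ and $Q$, whose estimation the paper addresses, is

```latex
C(P, Q) \;=\; -\min_{0 \le \lambda \le 1} \,\log \sum_{x} P(x)^{\lambda}\, Q(x)^{1-\lambda}.
```

It is symmetric in $P$ and $Q$ and gives the best achievable error exponent in Bayesian binary hypothesis testing, which is what makes it a natural lens on how distinguishable protected groups are; the paper's Noisy Chernoff Difference builds on this quantity, but its exact form is not given in the abstract.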
|
https://arxiv.org/abs/2601.13698
|
Academic Papers
|
svg
|
b878e4dd96390b6ead61ddacffec77a55265cb9f106382447303c1336586c4c4
|
2026-01-21T00:00:00-05:00
|
DistilMOS: Layer-Wise Self-Distillation For Self-Supervised Learning Model-Based MOS Prediction
|
arXiv:2601.13700v1 Announce Type: new Abstract: With the advancement of self-supervised learning (SSL), fine-tuning pretrained SSL models for mean opinion score (MOS) prediction has achieved state-of-the-art performance. However, during fine-tuning, these SSL-based MOS prediction models often suffer from catastrophic forgetting of the pretrained knowledge and tend to overfit the training set, resulting in poor generalization performance. In this study, we propose DistilMOS, a novel method that learns to predict not only MOS but also token IDs obtained by clustering the hidden representations of each layer in the pretrained SSL model. These layer-wise token targets serve as self-distillation signals that enable the MOS prediction model to extract rich internal knowledge from SSL models, enhancing both prediction accuracy and generalization capability. Experimental evaluations demonstrate that our method significantly outperforms standard SSL-based MOS prediction models on both in-domain and out-of-domain evaluations, verifying the effectiveness and practicality of the proposed method.
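The layer-wise token targets can be sketched as follows, assuming the clustering is k-means over one layer's frame-level hidden states; the cluster count and the tiny hand-rolled k-means are illustrative choices that keep the sketch dependency-free:

```python
import numpy as np

def layer_token_targets(hidden_states, n_clusters=8, n_iters=10, seed=0):
    """Assign a discrete token ID to each frame of one SSL layer by
    k-means clustering its hidden representations; the resulting IDs
    serve as self-distillation targets for that layer."""
    rng = np.random.default_rng(seed)
    X = np.asarray(hidden_states, dtype=float)           # (frames, dim)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    ids = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        # assign each frame to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        ids = d.argmin(axis=1)
        # recompute centers (an empty cluster keeps its old center)
        for k in range(n_clusters):
            if (ids == k).any():
                centers[k] = X[ids == k].mean(axis=0)
    return ids
```

Repeating this per layer yields one set of token targets per SSL layer, which the MOS head then predicts alongside the score.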
|
https://arxiv.org/abs/2601.13700
|
Academic Papers
|
svg
|
66a1b5ab948eac38304cca557e34136320cd6725aa76f72f9699795b8f1892e3
|
2026-01-21T00:00:00-05:00
|
IGAA: Intent-Driven General Agentic AI for Edge Services Scheduling using Generative Meta Learning
|
arXiv:2601.13702v1 Announce Type: new Abstract: Agentic AI (AAI), which extends Large Language Models with enhanced reasoning capabilities, has emerged as a promising paradigm for autonomous edge service scheduling. However, user mobility creates highly dynamic service demands in edge networks, and existing service scheduling agents often lack generalization capabilities for new scenarios. Therefore, this paper proposes a novel Intent-Driven General Agentic AI (IGAA) framework. Leveraging a meta-learning paradigm, IGAA enables AAI to continuously learn from prior service scheduling experiences to achieve generalized scheduling capabilities. Particularly, IGAA incorporates three core mechanisms. First, we design a Network-Service-Intent matrix mapping method to allow agents to simulate novel scenarios and generate training datasets. Second, we present an easy-to-hard generalization learning scheme with two customized algorithms, namely Resource Causal Effect-aware Transfer Learning (RCETL) and Action Potential Optimality-aware Transfer Learning (APOTL). These algorithms help IGAA adapt to new scenarios. Furthermore, to prevent catastrophic forgetting during continual IGAA learning, we propose a Generative Intent Replay (GIR) mechanism that synthesizes historical service data to consolidate prior capabilities. Finally, to mitigate the effect of LLM hallucinations on scenario simulation, we incorporate a scenario evaluation and correction model to guide agents in generating rational scenarios and datasets. Extensive experiments demonstrate IGAA's strong generalization and scalability. Specifically, IGAA enables rapid adaptation by transferring learned policies to analogous new ones, such as applying latency-sensitive patterns from real-time computing to optimize novel Internet of Vehicles (IoV) services. Compared to scenario-specific methods, IGAA maintains the intent-satisfaction rate gap within 3.81%.
|
https://arxiv.org/abs/2601.13702
|
Academic Papers
|
svg
|
04fda3cdef46e5c6ef08f93082cc2143a0d9f92cc83972048a75716bd80a1dc7
|
2026-01-21T00:00:00-05:00
|
Performance and Complexity Trade-off Optimization of Speech Models During Training
|
arXiv:2601.13704v1 Announce Type: new Abstract: In speech machine learning, neural network models are typically designed by choosing an architecture with fixed layer sizes and structure. These models are then trained to maximize performance on metrics aligned with the task's objective. While the overall architecture is usually guided by prior knowledge of the task, the sizes of individual layers are often chosen heuristically. However, this approach does not guarantee an optimal trade-off between performance and computational complexity; consequently, post hoc methods such as weight quantization or model pruning are typically employed to reduce computational cost. This occurs because stochastic gradient descent (SGD) methods can only optimize differentiable functions, while factors influencing computational complexity, such as layer sizes and floating-point operations per second (FLOP/s), are non-differentiable and require modifying the model structure during training. We propose a reparameterization technique based on feature noise injection that enables joint optimization of performance and computational complexity during training using SGD-based methods. Unlike traditional pruning methods, our approach allows the model size to be dynamically optimized for a target performance-complexity trade-off, without relying on heuristic criteria to select which weights or structures to remove. We demonstrate the effectiveness of our method through three case studies, including a synthetic example and two practical real-world applications: voice activity detection and audio anti-spoofing. The code related to our work is publicly available to encourage further research.
|
https://arxiv.org/abs/2601.13704
|
Academic Papers
|
svg
|