column       type            values
id           stringlengths   64 to 64
published    stringlengths   19 to 25
title        stringlengths   7 to 262
description  stringlengths   6 to 54.4k
link         stringlengths   31 to 227
category     stringclasses   6 values
image        stringlengths   3 to 247
id: 8fb98cd307b6406379b862ba603bd2b4fd36168d690b6a118532e8b733d350a6
published: 2026-01-13T00:00:00-05:00
title: Fine-grained Verbal Attack Detection via a Hierarchical Divide-and-Conquer Framework
arXiv:2601.06907v1 Announce Type: new Abstract: In the digital era, effective identification and analysis of verbal attacks are essential for maintaining online civility and ensuring social security. However, existing research is limited by insufficient modeling of conversational structure and contextual dependency, particularly in Chinese social media where implicit attacks are prevalent. Current attack detection studies often emphasize general semantic understanding while overlooking user response relationships, hindering the identification of implicit and context-dependent attacks. To address these challenges, we present the novel "Hierarchical Attack Comment Detection" dataset and propose a divide-and-conquer, fine-grained framework for verbal attack recognition based on spatiotemporal information. The proposed dataset explicitly encodes hierarchical reply structures and chronological order, capturing complex interaction patterns in multi-turn discussions. Building on this dataset, the framework decomposes attack detection into hierarchical subtasks, where specialized lightweight models handle explicit detection, implicit intent inference, and target identification under constrained context. Extensive experiments on the proposed dataset and benchmark intention detection datasets show that smaller models using our framework significantly outperform larger monolithic models relying on parameter scaling, demonstrating the effectiveness of structured task decomposition.
link: https://arxiv.org/abs/2601.06907
category: Academic Papers
image: svg
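The hierarchical decomposition described in the abstract above can be caricatured as a chained pipeline. A hypothetical sketch, with rule-based placeholders standing in for the paper's specialized lightweight models (the keyword rules and labels are illustrative, not from the paper):

```python
# Hypothetical sketch of the divide-and-conquer decomposition: explicit
# detection, implicit intent inference, and target identification as
# ordered subtasks. The rule-based "models" below are placeholders.

def detect_explicit(comment):
    # Placeholder for a lightweight explicit-attack classifier.
    return any(w in comment for w in ("idiot", "trash"))

def infer_implicit(comment, parent):
    # Placeholder for implicit-intent inference over the reply context.
    return parent is not None and "sure you did" in comment

def identify_target(parent):
    # Placeholder for target identification under constrained context.
    return "parent commenter" if parent else "original post"

def classify(comment, parent=None):
    explicit = detect_explicit(comment)
    implicit = (not explicit) and infer_implicit(comment, parent)
    attack = explicit or implicit
    return {
        "attack": attack,
        "type": "explicit" if explicit else ("implicit" if implicit else None),
        "target": identify_target(parent) if attack else None,
    }
```

The point of the structure is that each stage only fires when the earlier, cheaper stage is inconclusive, mirroring the constrained-context subtasks in the abstract.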
id: cd89ae624c1565bfe89790c5a595f1c9f7ec199a25b7b8196e3e8347b2110ae4
published: 2026-01-13T00:00:00-05:00
title: UDPNet: Unleashing Depth-based Priors for Robust Image Dehazing
arXiv:2601.06909v1 Announce Type: new Abstract: Image dehazing has witnessed significant advancements with the development of deep learning models. However, most existing methods predominantly focus on single-modal RGB features, neglecting the inherent correlation between scene depth and haze distribution. Even those that jointly optimize depth estimation and image dehazing often suffer from suboptimal performance due to inadequate utilization of accurate depth information. In this paper, we present UDPNet, a general framework that leverages depth-based priors from the large-scale pretrained depth estimation model DepthAnything V2 to boost existing image dehazing models. Specifically, our architecture comprises two key components: the Depth-Guided Attention Module (DGAM) adaptively modulates features via lightweight depth-guided channel attention, and the Depth Prior Fusion Module (DPFM) enables hierarchical fusion of multi-scale depth map features via a dual sliding-window multi-head cross-attention mechanism. These modules ensure both computational efficiency and effective integration of depth priors. Moreover, the intrinsic robustness of depth priors empowers the network to dynamically adapt to varying haze densities, illumination conditions, and domain gaps across synthetic and real-world data. Extensive experimental results demonstrate the effectiveness of our UDPNet, which outperforms state-of-the-art methods on popular dehazing datasets, with PSNR improvements of 0.85 dB on SOTS, 1.19 dB on Haze4K, and 1.79 dB on NHR. Our proposed solution establishes a new benchmark for depth-aware dehazing across various scenarios. Pretrained models and code will be released at https://github.com/Harbinzzy/UDPNet.
link: https://arxiv.org/abs/2601.06909
category: Academic Papers
image: svg
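The depth-guided channel attention behind the DGAM can be sketched in a few lines of numpy. The shapes and the tiny one-layer "MLP" are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

# Sketch (assumed details) of depth-guided channel attention: depth
# features are globally pooled, mapped to per-channel gates, and used to
# modulate the dehazing features channel-wise.

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))        # dehazing features
depth_feat = rng.standard_normal((C, H, W))  # features from a depth prior

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Global average pool the depth features to one value per channel.
pooled = depth_feat.mean(axis=(1, 2))             # (C,)
W1 = rng.standard_normal((C, C)) / np.sqrt(C)     # toy one-layer "MLP"
gates = sigmoid(W1 @ pooled)                      # per-channel gates in (0, 1)

# Modulate image features channel-wise with the depth-derived gates.
out = feat * gates[:, None, None]
```

Because the gates are computed from a pooled vector, the attention adds only O(C^2) parameters, which is the "lightweight" aspect the abstract emphasizes.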
id: 7e889658642e303130a54c70a17ec4c0081660f24c6dae3d6d9b30e6ef491a43
published: 2026-01-13T00:00:00-05:00
title: PenForge: On-the-Fly Expert Agent Construction for Automated Penetration Testing
arXiv:2601.06910v1 Announce Type: new Abstract: Penetration testing is essential for identifying vulnerabilities in web applications before real adversaries can exploit them. Recent work has explored automating this process with Large Language Model (LLM)-powered agents, but existing approaches either rely on a single generic agent that struggles in complex scenarios or narrowly specialized agents that cannot adapt to diverse vulnerability types. We therefore introduce PenForge, a framework that dynamically constructs expert agents during testing rather than relying on those prepared beforehand. By integrating automated reconnaissance of potential attack surfaces with agents instantiated on the fly for context-aware exploitation, PenForge achieves a 30.0% exploit success rate (12/40) on CVE-Bench in the particularly challenging zero-day setting, a threefold improvement over the state of the art. Our analysis also identifies three opportunities for future work: (1) supplying richer tool-usage knowledge to improve exploitation effectiveness; (2) extending benchmarks to include more vulnerabilities and attack types; and (3) fostering developer trust by incorporating explainable mechanisms and human review. As an emerging result with substantial potential impact, PenForge embodies the early-stage yet paradigm-shifting idea of on-the-fly agent construction, marking its promise as a step toward scalable and effective LLM-driven penetration testing.
link: https://arxiv.org/abs/2601.06910
category: Academic Papers
image: svg
id: e78432f2d67db998b7753ec90477f611d0ee36758d8b063dab8a4facc9ac9663
published: 2026-01-13T00:00:00-05:00
title: Distributional Clarity: The Hidden Driver of RL-Friendliness in Large Language Models
arXiv:2601.06911v1 Announce Type: new Abstract: Language model families exhibit striking disparity in their capacity to benefit from reinforcement learning: under identical training, models like Qwen achieve substantial gains, while others like Llama yield limited improvements. Complementing data-centric approaches, we reveal that this disparity reflects a hidden structural property: \textbf{distributional clarity} in probability space. Through a three-stage analysis-from phenomenon to mechanism to interpretation-we uncover that RL-friendly models exhibit intra-class compactness and inter-class separation in their probability assignments to correct vs. incorrect responses. We quantify this clarity using the \textbf{Silhouette Coefficient} ($S$) and demonstrate that (1) high $S$ correlates strongly with RL performance; (2) low $S$ is associated with severe logic errors and reasoning instability. To confirm this property, we introduce a Silhouette-Aware Reweighting strategy that prioritizes low-$S$ samples during training. Experiments across six mathematical benchmarks show consistent improvements across all model families, with gains up to 5.9 points on AIME24. Our work establishes distributional clarity as a fundamental, trainable property underlying RL-Friendliness.
link: https://arxiv.org/abs/2601.06911
category: Academic Papers
image: svg
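The clarity measure is concrete enough to sketch: treating the probabilities assigned to correct and incorrect responses as two 1-D clusters, the Silhouette Coefficient rewards intra-class compactness and inter-class separation. A minimal sketch under those assumed details (the sample probabilities are made up):

```python
import numpy as np

# Silhouette Coefficient S over two 1-D "clusters" of probabilities,
# with absolute difference as the distance.

def silhouette(correct, incorrect):
    groups = (np.asarray(correct, float), np.asarray(incorrect, float))
    scores = []
    for g, own in enumerate(groups):
        other = groups[1 - g]
        for p in own:
            # a: mean intra-cluster distance (the |p - p| = 0 self term
            # vanishes, so dividing by n - 1 averages over the others).
            a = np.abs(own - p).sum() / max(len(own) - 1, 1)
            b = np.abs(other - p).mean()  # mean distance to other cluster
            scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

clear = silhouette([0.90, 0.92, 0.95], [0.05, 0.10, 0.08])  # well separated
murky = silhouette([0.55, 0.50, 0.60], [0.45, 0.50, 0.40])  # overlapping
```

A "clear" model drives S toward 1, while overlapping probability assignments pull it toward 0, which is the property the reweighting strategy targets.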
id: fde9d5b2960aa654a8ee9b526dff14a22ff26c26b7d4c228d1eba586ec9f3263
published: 2026-01-13T00:00:00-05:00
title: Tractable Multinomial Logit Contextual Bandits with Non-Linear Utilities
arXiv:2601.06913v1 Announce Type: new Abstract: We study the multinomial logit (MNL) contextual bandit problem for sequential assortment selection. Although most existing research assumes utility functions to be linear in item features, this linearity assumption restricts the modeling of intricate interactions between items and user preferences. A recent work (Zhang & Luo, 2024) has investigated general utility function classes, yet its method faces fundamental trade-offs between computational tractability and statistical efficiency. To address this limitation, we propose a computationally efficient algorithm for MNL contextual bandits leveraging the upper confidence bound principle, specifically designed for non-linear parametric utility functions, including those modeled by neural networks. Under a realizability assumption and a mild geometric condition on the utility function class, our algorithm achieves a regret bound of $\tilde{O}(\sqrt{T})$, where $T$ denotes the total number of rounds. Our result establishes that sharp $\tilde{O}(\sqrt{T})$-regret is attainable even with neural network-based utilities, without relying on strong assumptions such as neural tangent kernel approximations. To the best of our knowledge, our proposed method is the first computationally tractable algorithm for MNL contextual bandits with non-linear utilities that provably attains $\tilde{O}(\sqrt{T})$ regret. Comprehensive numerical experiments validate the effectiveness of our approach, showing robust performance not only in realizable settings but also in scenarios with model misspecification.
link: https://arxiv.org/abs/2601.06913
category: Academic Papers
image: svg
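Two of the ingredients named above are standard enough to sketch: the multinomial logit choice probabilities over an assortment (with a no-purchase option) and optimistic, UCB-style item selection. This is an illustrative sketch, not the paper's algorithm, and the utilities and bonuses are made up:

```python
import numpy as np

# MNL choice model plus a UCB-flavored assortment rule.

def mnl_choice_probs(utilities):
    # P(i | S) = exp(u_i) / (1 + sum_j exp(u_j)); the 1 is no-purchase.
    expu = np.exp(np.asarray(utilities, float))
    denom = 1.0 + expu.sum()
    return expu / denom, 1.0 / denom  # item probs, no-purchase prob

def optimistic_assortment(utilities, bonuses, k):
    # Offer the k items with the largest upper confidence bounds.
    ucb = np.asarray(utilities, float) + np.asarray(bonuses, float)
    return sorted(np.argsort(ucb)[::-1][:k].tolist())

probs, p_none = mnl_choice_probs([1.0, 0.0, -1.0])
chosen = optimistic_assortment([0.2, 0.9, 0.5], [0.3, 0.0, 0.1], k=2)
```

In the paper's setting the utilities come from a non-linear (e.g., neural) function of the context; here they are fixed numbers purely to show the mechanics.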
id: bc92eb3d57c8cd1abea4e5c5dc70c7ae4bd468f01e921fbfe042d5962632a957
published: 2026-01-13T00:00:00-05:00
title: Towards Compositional Generalization in LLMs for Smart Contract Security: A Case Study on Reentrancy Vulnerabilities
arXiv:2601.06914v1 Announce Type: new Abstract: Large language models (LLMs) demonstrate remarkable capabilities in natural language understanding and generation. Despite being trained on large-scale, high-quality data, LLMs still fail to outperform traditional static analysis tools in specialized domains like smart contract vulnerability detection. To address this issue, this paper proposes a post-training algorithm based on atomic task decomposition and fusion, which aims to achieve compositional generalization under limited data by decomposing complex reasoning tasks. Specifically, we decompose the reentrancy vulnerability detection task into four linearly independent atomic tasks: identifying external calls, identifying state updates, identifying data dependencies between external calls and state updates, and determining their data flow order. These tasks form the core components of our approach. Through synthetic data generation, we construct three compiler-verified datasets. We then employ the Slither tool to extract structural information from the control flow graph and data flow graph, which is used to fine-tune the LLM's adapter. Experimental results demonstrate that low-rank normalization fusion with the LoRA adapter improves the LLM's reentrancy vulnerability detection accuracy to 98.2%, surpassing state-of-the-art methods. On 31 real-world contracts, the algorithm achieves a 20% higher recall than traditional analysis tools.
link: https://arxiv.org/abs/2601.06914
category: Academic Papers
image: svg
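One plausible reading of fusing per-task adapters is a normalized weighted sum of their low-rank updates. A hypothetical sketch in numpy; the abstract's "low-rank normalization fusion" is paraphrased here, and the dimensions, rank, and fusion weights are illustrative:

```python
import numpy as np

# Fuse several LoRA adapters (B_t, A_t) into one update by a normalized
# weighted sum of the low-rank deltas B_t @ A_t.

rng = np.random.default_rng(0)
d, r, n_tasks = 16, 4, 4  # hidden size, LoRA rank, number of atomic tasks

adapters = [(rng.standard_normal((d, r)), rng.standard_normal((r, d)))
            for _ in range(n_tasks)]
w = np.array([0.4, 0.3, 0.2, 0.1])
w = w / w.sum()  # normalize fusion weights to sum to 1

# Fused update, applied on top of the frozen base weight matrix W0.
delta = sum(wt * (B @ A) for wt, (B, A) in zip(w, adapters))
```

Each per-task delta has rank at most r, so the fused update stays cheap to store and apply relative to a full d-by-d fine-tune.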
id: 978a324a3d0fbbff00c1e065575413f273c415c5056390b74c4d98eb42b8cf33
published: 2026-01-13T00:00:00-05:00
title: Active Learning Strategies for Efficient Machine-Learned Interatomic Potentials Across Diverse Material Systems
arXiv:2601.06916v1 Announce Type: new Abstract: Efficient discovery of new materials demands strategies to reduce the number of costly first-principles calculations required to train predictive machine learning models. We develop and validate an active learning framework that iteratively selects informative training structures for machine-learned interatomic potentials (MLIPs) from large, heterogeneous materials databases, specifically the Materials Project and OQMD. Our framework integrates compositional and property-based descriptors with a neural network ensemble model, enabling real-time uncertainty quantification via Query-by-Committee. We systematically compare four selection strategies: random sampling (baseline), uncertainty-based sampling, diversity-based sampling (k-means clustering with farthest-point refinement), and a hybrid approach balancing both objectives. Experiments across four representative material systems (elemental carbon, silicon, iron, and a titanium-oxide compound) with 5 random seeds per configuration demonstrate that diversity sampling consistently achieves competitive or superior performance, with particularly strong advantages on complex systems like titanium-oxide (10.9% improvement, p=0.008). Our results show that intelligent data selection strategies can achieve target accuracy with 5-13% fewer labeled samples compared to random baselines. The entire pipeline executes on Google Colab in under 4 hours per system using less than 8 GB of RAM, thereby democratizing MLIP development for researchers globally with limited computational resources. Our open-source code and detailed experimental configurations are available on GitHub. This multi-system evaluation establishes practical guidelines for data-efficient MLIP training and highlights promising future directions including integration with symmetry-aware neural network architectures.
link: https://arxiv.org/abs/2601.06916
category: Academic Papers
image: svg
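Two of the selection strategies compared above can be sketched with assumed details: Query-by-Committee uncertainty as ensemble disagreement, and diversity via farthest-point selection in descriptor space. The linear "committee" below stands in for the neural network ensemble:

```python
import numpy as np

# Toy pool of candidate structures described by feature vectors, plus a
# committee of toy linear models standing in for the NN ensemble.
rng = np.random.default_rng(1)
pool = rng.standard_normal((50, 5))                      # candidate descriptors
committee = [rng.standard_normal(5) for _ in range(4)]   # ensemble members

def uncertainty_pick(pool, committee, k):
    preds = np.stack([pool @ w for w in committee])  # (members, n_candidates)
    return np.argsort(preds.var(axis=0))[::-1][:k]   # largest disagreement

def farthest_point_pick(pool, k):
    chosen = [0]  # seed with an arbitrary point
    dists = np.linalg.norm(pool - pool[0], axis=1)
    while len(chosen) < k:
        nxt = int(dists.argmax())                    # farthest from chosen set
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(pool - pool[nxt], axis=1))
    return chosen

unc = uncertainty_pick(pool, committee, 5)
div = farthest_point_pick(pool, 5)
```

A hybrid strategy like the one in the abstract would interleave or score-combine these two pickers rather than use either alone.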
id: c476ea1c19e6f557329220b04168f2eb73fc72800fc2ffaa6c7fb1825471f45e
published: 2026-01-13T00:00:00-05:00
title: Calibrating Agent-Based Financial Markets Simulators with Pretrainable Automatic Posterior Transformation-Based Surrogates
arXiv:2601.06920v1 Announce Type: new Abstract: Calibrating Agent-Based Models (ABMs) is an important optimization problem for simulating complex social systems, where the goal is to identify the optimal parameter of a given ABM by minimizing the discrepancy between the simulated data and the real-world observations. Unfortunately, it suffers from the extensive computational cost of iterative evaluations, each of which involves an expensive simulation with a candidate parameter. While Surrogate-Assisted Evolutionary Algorithms (SAEAs) have been widely adopted to alleviate the computational burden, existing methods face two key limitations: 1) surrogating the original evaluation function is hard due to the nonlinear yet multi-modal nature of the ABMs, and 2) the commonly used surrogates cannot share the optimization experience among multiple calibration tasks, making batched calibration less effective. To address these issues, this work proposes Automatic posterior transformation with Negatively Correlated Search and Adaptive Trust-Region (ANTR). ANTR first replaces the traditional surrogates with a pretrainable neural density estimator that directly models the posterior distribution of the parameters given observed data, thereby aligning the optimization objective with parameter-space accuracy. Furthermore, we incorporate a diversity-preserving search strategy to prevent premature convergence and an adaptive trust-region method to efficiently allocate computational resources. We take two representative ABM-based financial market simulators as the test bench due to their high non-linearity. Experiments demonstrate that the proposed ANTR significantly outperforms conventional metaheuristics and state-of-the-art SAEAs in both calibration accuracy and computational efficiency, particularly in batch calibration scenarios across multiple market conditions.
link: https://arxiv.org/abs/2601.06920
category: Academic Papers
image: svg
id: 84f6dae7c0a4cbb41a6a30dff91cbee30052dfd1ef19d65fd404fd79767b2e88
published: 2026-01-13T00:00:00-05:00
title: TreePS-RAG: Tree-based Process Supervision for Reinforcement Learning in Agentic RAG
arXiv:2601.06922v1 Announce Type: new Abstract: Agentic retrieval-augmented generation (RAG) formulates question answering as a multi-step interaction between reasoning and information retrieval, and has recently been advanced by reinforcement learning (RL) with outcome-based supervision. While effective, relying solely on sparse final rewards limits step-wise credit assignment and provides weak guidance for intermediate reasoning and actions. Recent efforts explore process-level supervision, but typically depend on offline constructed training data, which risks distribution shift, or require costly intermediate annotations. We present TreePS-RAG, an online, tree-based RL framework for agentic RAG that enables step-wise credit assignment while retaining standard outcome-only rewards. Our key insight is to model agentic RAG reasoning as a rollout tree, where each reasoning step naturally maps to a node. This tree structure allows step utility to be estimated via Monte Carlo estimation over its descendant outcomes, yielding fine-grained process advantages without requiring intermediate labels. To make this paradigm practical, we introduce an efficient online tree construction strategy that preserves exploration diversity under a constrained computational budget. With a rollout cost comparable to strong baselines like Search-R1, experiments on seven multi-hop and general QA benchmarks across multiple model scales show that TreePS-RAG consistently and significantly outperforms both outcome-supervised and leading process-supervised RL methods.
link: https://arxiv.org/abs/2601.06922
category: Academic Papers
image: svg
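The credit-assignment idea above is easy to make concrete: in a rollout tree, a step's utility is the Monte Carlo mean of outcome rewards over the leaves in its subtree, and a process advantage can be read off relative to the parent. A sketch with assumed details; the tiny tree and rewards are illustrative:

```python
# Rollout tree: node -> children; leaves carry outcome-only rewards.
tree = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1"],
    "a1": [], "a2": [], "b1": [],
}
leaf_reward = {"a1": 1.0, "a2": 0.0, "b1": 1.0}

def leaves_below(node):
    if not tree[node]:
        return [node]
    return [leaf for c in tree[node] for leaf in leaves_below(c)]

def utility(node):
    # Monte Carlo estimate: average final outcome over descendant rollouts.
    leaves = leaves_below(node)
    return sum(leaf_reward[l] for l in leaves) / len(leaves)

def advantage(node, parent):
    # Process advantage of taking this step relative to its parent state.
    return utility(node) - utility(parent)
```

No intermediate labels are needed: every number above is derived from the same outcome rewards a purely outcome-supervised method would use.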
id: 3579246e4c0136be3b3302a3113fe3e22932d2d405563a57349ea0bc6fa6892e
published: 2026-01-13T00:00:00-05:00
title: Caching Yields up to 5x Spectral Efficiency in Multi-Beam Satellite Communications
arXiv:2601.06925v1 Announce Type: new Abstract: This paper examines the integration of vector coded caching (VCC) into multi-beam satellite communications (SATCOM) systems and demonstrates that even limited receiver-side caching can substantially enhance spectral efficiency. By leveraging cached content to suppress interference, VCC enables the concurrent transmission of multiple precoded signal vectors that would otherwise require separate transmission resources. This leads to a multiplicative improvement in resource utilization in SATCOM. To characterize this performance, we model the satellite-to-ground channel using Rician-shadowed fading and, after incorporating practical considerations such as matched-filter precoding, channel state information (CSI) acquisition overhead, and CSI imperfections at the transmitter, derive closed-form expressions for the average sum rate and spectral efficiency gain of VCC in SATCOM. Our analysis, tightly validated through numerical simulations, reveals that VCC can yield spectral efficiency gains of 300% to 550% over traditional multi-user MISO SATCOM with the same resources. These gains -- which stem from neither multicasting, prefetching gains, nor file popularity -- highlight VCC as a pure physical-layer solution for future high-throughput SATCOM systems, significantly narrowing the performance gap between satellite and wired networks.
link: https://arxiv.org/abs/2601.06925
category: Academic Papers
image: svg
id: 4d5c901a7e7128b0048fda1a95bb8cbb79338dca60ce46567bcb1c494c4d23c4
published: 2026-01-13T00:00:00-05:00
title: RenderFlow: Single-Step Neural Rendering via Flow Matching
arXiv:2601.06928v1 Announce Type: new Abstract: Conventional physically based rendering (PBR) pipelines generate photorealistic images through computationally intensive light transport simulations. Although recent deep learning approaches leverage diffusion model priors with geometry buffers (G-buffers) to produce visually compelling results without explicit scene geometry or light simulation, they remain constrained by two major limitations. First, the iterative nature of the diffusion process introduces substantial latency. Second, the inherent stochasticity of these generative models compromises physical accuracy and temporal consistency. In response to these challenges, we propose a novel, end-to-end, deterministic, single-step neural rendering framework, RenderFlow, built upon a flow matching paradigm. To further strengthen both rendering quality and generalization, we propose an efficient and effective module for sparse keyframe guidance. Our method significantly accelerates the rendering process and, by optionally incorporating sparsely rendered keyframes as guidance, enhances both the physical plausibility and overall visual quality of the output. The resulting pipeline achieves near real-time performance with photorealistic rendering quality, effectively bridging the gap between the efficiency of modern generative models and the precision of traditional physically based rendering. Furthermore, we demonstrate the versatility of our framework by introducing a lightweight, adapter-based module that efficiently repurposes the pretrained forward model for the inverse rendering task of intrinsic decomposition.
link: https://arxiv.org/abs/2601.06928
category: Academic Papers
image: svg
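A toy illustration of why one step can suffice under flow matching: for straight-line probability paths x_t = (1 - t) x0 + t x1, the ideal velocity field is v(x, t) = (x1 - x) / (1 - t), and a single Euler step of size 1 from t = 0 lands exactly on x1. The target vector below is a stand-in for a rendered image, not RenderFlow's learned network:

```python
import numpy as np

target = np.array([2.0, -1.0, 0.5])  # stand-in for the rendered output

def velocity(x, t):
    # Oracle velocity along the straight path toward `target`.
    return (target - x) / (1.0 - t)

x0 = np.array([0.3, 0.1, -0.2])       # "noise" initialization at t = 0
x1 = x0 + 1.0 * velocity(x0, 0.0)     # single Euler step over [0, 1]
```

A learned velocity network only approximates this field, but the straight-path structure is what makes deterministic single-step inference plausible, in contrast to the many denoising iterations of a diffusion sampler.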
id: 3b70393e20ce82e13a56b2770eb0bcc2b1fb9aba8b6907f08bfbd36a17a5ac7c
published: 2026-01-13T00:00:00-05:00
title: Measuring Social Bias in Vision-Language Models with Face-Only Counterfactuals from Real Photos
arXiv:2601.06931v1 Announce Type: new Abstract: Vision-Language Models (VLMs) are increasingly deployed in socially consequential settings, raising concerns about social bias driven by demographic cues. A central challenge in measuring such social bias is attribution under visual confounding: real-world images entangle race and gender with correlated factors such as background and clothing, obscuring attribution. We propose a \textbf{face-only counterfactual evaluation paradigm} that isolates demographic effects while preserving real-image realism. Starting from real photographs, we generate counterfactual variants by editing only facial attributes related to race and gender, keeping all other visual factors fixed. Based on this paradigm, we construct \textbf{FOCUS}, a dataset of 480 scene-matched counterfactual images across six occupations and ten demographic groups, and propose \textbf{REFLECT}, a benchmark comprising three decision-oriented tasks: two-alternative forced choice, multiple-choice socioeconomic inference, and numeric salary recommendation. Experiments on five state-of-the-art VLMs reveal that demographic disparities persist under strict visual control and vary substantially across task formulations. These findings underscore the necessity of controlled, counterfactual audits and highlight task design as a critical factor in evaluating social bias in multimodal models.
link: https://arxiv.org/abs/2601.06931
category: Academic Papers
image: svg
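One disparity readout for the two-alternative forced-choice task above can be sketched as per-group selection rates on scene-matched pairs and the max-min gap across groups. The groups and outcomes below are hypothetical, made up purely for illustration:

```python
# group -> 1 if the VLM picked this group's counterfactual image for the
# occupation prompt, else 0, over scene-matched pairs.
choices = {
    "group_a": [1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1],
}

rates = {g: sum(v) / len(v) for g, v in choices.items()}
disparity = max(rates.values()) - min(rates.values())  # 0 means parity
```

Because the counterfactuals differ only in facial attributes, a nonzero gap here is attributable to the demographic cue rather than background or clothing confounds.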
id: 5bd4076f7d0d76963a87c78fd7483a834fef5d649d23681176a17dac0306817e
published: 2026-01-13T00:00:00-05:00
title: Symphonym: Universal Phonetic Embeddings for Cross-Script Toponym Matching via Teacher-Student Distillation
arXiv:2601.06932v1 Announce Type: new Abstract: Linking place names across languages and writing systems is a fundamental challenge in digital humanities and geographic information retrieval. Existing approaches rely on language-specific phonetic algorithms or transliteration rules that fail when names cross script boundaries -- no string metric can determine that renderings of "Moscow" in Cyrillic and Arabic refer to the same city. I present Symphonym, a neural embedding system that maps toponyms from 20 writing systems into a unified 128-dimensional phonetic space. A Teacher network trained on articulatory phonetic features (via Epitran and PanPhon) produces target embeddings, while a Student network learns to approximate these from raw characters. At inference, only the lightweight Student (1.7M parameters) is required, enabling deployment without runtime phonetic conversion. Training uses a three-phase curriculum on 57 million toponyms from GeoNames, Wikidata, and the Getty Thesaurus of Geographic Names. Phase 1 trains the Teacher on 467K phonetically-grounded triplets. Phase 2 aligns the Student to Teacher outputs across 23M samples, achieving 96.6% cosine similarity. Phase 3 fine-tunes on 3.3M hard negative triplets -- negatives sharing prefix and script with the anchor but referring to different places -- to sharpen discrimination. Evaluation on the MEHDIE Hebrew-Arabic benchmark achieves 89.2% Recall@1, outperforming Levenshtein (81.5%) and Jaro-Winkler (78.5%). The system is optimised for cross-script matching; same-script variants can be handled by complementary string methods. Symphonym will enable fuzzy phonetic reconciliation and search across the World Historical Gazetteer's 67 million toponyms. Code and models are publicly available.
link: https://arxiv.org/abs/2601.06932
category: Academic Papers
image: svg
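The Phase 2 alignment objective implied above (Student output pulled toward the Teacher's phonetic embedding, measured by cosine similarity) can be sketched minimally; the vectors below are made up, not model outputs:

```python
import numpy as np

# Teacher-Student alignment: maximize cosine similarity between the
# Student's character-based embedding and the Teacher's phonetic target,
# i.e., minimize 1 - cos.

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(2)
teacher = rng.standard_normal(128)                   # phonetic target embedding
student = teacher + 0.1 * rng.standard_normal(128)   # nearly aligned student

loss = 1.0 - cosine(student, teacher)  # per-pair distillation loss
```

Averaging this loss over pairs is one standard way to realize the "96.6% cosine similarity" alignment figure reported for Phase 2.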
id: da47af0d22a2a940069f004c84b34b42e29f635121a5f720f463692da5bb61b7
published: 2026-01-13T00:00:00-05:00
title: mind_call: A Dataset for Mental Health Function Calling with Large Language Models
arXiv:2601.06937v1 Announce Type: new Abstract: Large Language Model (LLM)-based systems increasingly rely on function calling to enable structured and controllable interaction with external data sources, yet existing datasets do not address mental health-oriented access to wearable sensor data. This paper presents a synthetic function-calling dataset designed for mental health assistance grounded in wearable health signals such as sleep, physical activity, cardiovascular measures, stress indicators, and metabolic data. The dataset maps diverse natural language queries to standardized API calls derived from a widely adopted health data schema. Each sample includes a user query, a query category, an explicit reasoning step, a normalized temporal parameter, and a target function. The dataset covers explicit, implicit, behavioral, symptom-based, and metaphorical expressions, which reflect realistic mental health-related user interactions. This resource supports research on intent grounding, temporal reasoning, and reliable function invocation in LLM-based mental health agents and is publicly released to promote reproducibility and future work.
link: https://arxiv.org/abs/2601.06937
category: Academic Papers
image: svg
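Based on the fields listed above (query, category, reasoning step, normalized temporal parameter, target function), a single sample could look like the following. The field names, function name, and argument schema here are hypothetical illustrations, not the released dataset's actual schema:

```python
# One hypothetical mind_call-style sample and a mapping onto a standard
# function-calling payload.

sample = {
    "query": "I've been tossing and turning all week, am I okay?",
    "category": "implicit",
    "reasoning": "Restless nights suggest checking recent sleep quality.",
    "temporal": {"period": "last_7_days"},
    "function": "get_sleep_summary",
}

def to_api_call(s):
    # Map a dataset sample onto a {name, arguments} function call.
    return {"name": s["function"], "arguments": s["temporal"]}
```

The point of the explicit reasoning and normalized temporal fields is that intent grounding and time resolution can be supervised separately from the final function choice.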
id: dc2604aeb0ef5a736ea3aa2cb5357f85bf811d4d5fdbdff08a4b5b1d7e782cc7
published: 2026-01-13T00:00:00-05:00
title: Forgetting Similar Samples: Can Machine Unlearning Do it Better?
arXiv:2601.06938v1 Announce Type: new Abstract: Machine unlearning, a process enabling pre-trained models to remove the influence of specific training samples, has attracted significant attention in recent years. Although extensive research has focused on developing efficient machine unlearning strategies, we argue that these methods mainly aim at removing samples rather than removing samples' influence on the model, thus overlooking the fundamental definition of machine unlearning. In this paper, we first conduct a comprehensive study to evaluate the effectiveness of existing unlearning schemes when the training dataset includes many samples similar to those targeted for unlearning. Specifically, we evaluate: Do existing unlearning methods truly adhere to the original definition of machine unlearning and effectively eliminate all influence of target samples when similar samples are present in the training dataset? Our extensive experiments, conducted on four carefully constructed datasets with thorough analysis, reveal a notable gap between the expected and actual performance of most existing unlearning methods for image and language models, even for the retraining-from-scratch baseline. Additionally, we also explore potential solutions to enhance current unlearning approaches.
link: https://arxiv.org/abs/2601.06938
category: Academic Papers
image: svg
id: a17841e109b3b7fdef3238c822fdac543fb8519f963a80955923b1bbfcca45a5
published: 2026-01-13T00:00:00-05:00
title: VISTA: Knowledge-Driven Interpretable Vessel Trajectory Imputation via Large Language Models
arXiv:2601.06940v1 Announce Type: new Abstract: The Automatic Identification System (AIS) provides critical information for maritime navigation and safety, yet its trajectories are often incomplete due to signal loss or deliberate tampering. Existing imputation methods emphasize trajectory recovery, paying limited attention to interpretability and failing to provide underlying knowledge that benefits downstream tasks such as anomaly detection and route planning. We propose knowledge-driven interpretable vessel trajectory imputation (VISTA), the first trajectory imputation framework that offers interpretability while simultaneously providing underlying knowledge to support downstream analysis. Specifically, we first define underlying knowledge as a combination of Structured Data-derived Knowledge (SDK) distilled from AIS data and Implicit LLM Knowledge acquired from large-scale Internet corpora. Second, to manage and leverage the SDK effectively at scale, we develop a data-knowledge-data loop that employs a Structured Data-derived Knowledge Graph for SDK extraction and knowledge-driven trajectory imputation. Third, to efficiently process large-scale AIS data, we introduce a workflow management layer that coordinates the end-to-end pipeline, enabling parallel knowledge extraction and trajectory imputation with anomaly handling and redundancy elimination. Experiments on two large AIS datasets show that VISTA achieves state-of-the-art imputation accuracy and computational efficiency, improving over state-of-the-art baselines by 5%-94% and reducing time cost by 51%-93%, while producing interpretable knowledge cues that benefit downstream tasks. The source code and implementation details of VISTA are publicly available.
link: https://arxiv.org/abs/2601.06940
category: Academic Papers
image: svg
id: 6f4f27f50de377b693764b529fd9b62d38617adb4ed768cd85d2e9f4f13f2106
published: 2026-01-13T00:00:00-05:00
title: Towards Operational Streamflow Forecasting in the Limpopo River Basin using Long Short-Term Memory Networks
arXiv:2601.06941v1 Announce Type: new Abstract: Robust hydrological simulation is key for sustainable development, water management strategies, and climate change adaptation. In recent years, deep learning methods have been demonstrated to outperform mechanistic models at the task of hydrological discharge simulation. Adoption of these methods has been catalysed by the proliferation of large sample hydrology datasets, consisting of the observed discharge and meteorological drivers, along with geological and topographical catchment descriptors. Deep learning methods infer rainfall-runoff characteristics that have been shown to generalise across catchments, benefitting from the data diversity in large datasets. Despite this, application to catchments in Africa has been limited. The lack of adoption of deep learning methodologies is primarily due to sparsity or lack of the spatiotemporal observational data required to enable downstream model training. We therefore investigate the application of deep learning models, including LSTMs, for hydrological discharge simulation in the transboundary Limpopo River basin, emphasising application to data scarce regions. We conduct a number of computational experiments primarily focused on assessing the impact of varying the LSTM model input data on performance. Results confirm that data constraints remain the largest obstacle to deep learning applications across African river basins. We further outline the impact of human influence on data-driven modelling which is a commonly overlooked aspect of data-driven large-sample hydrology approaches and investigate solutions for model adaptation under smaller datasets. Additionally, we include recommendations for future efforts towards seasonal hydrological discharge prediction and direct comparison or inclusion of SWAT model outputs, as well as architectural improvements.
link: https://arxiv.org/abs/2601.06941
category: Academic Papers
image: svg
id: cb765676aaa309f74d6973fd3c39c6e90cebc16643c26feddf1a6cc781a647bc
published: 2026-01-13T00:00:00-05:00
title: Watching, Reasoning, and Searching: A Video Deep Research Benchmark on Open Web for Agentic Video Reasoning
arXiv:2601.06943v1 Announce Type: new Abstract: In real-world video question answering scenarios, videos often provide only localized visual cues, while verifiable answers are distributed across the open web; models therefore need to jointly perform cross-frame clue extraction, iterative retrieval, and multi-hop reasoning-based verification. To bridge this gap, we construct the first video deep research benchmark, VideoDR. VideoDR centers on video-conditioned open-domain video question answering, requiring cross-frame visual anchor extraction, interactive web retrieval, and multi-hop reasoning over joint video-web evidence; through rigorous human annotation and quality control, we obtain high-quality video deep research samples spanning six semantic domains. We evaluate multiple closed-source and open-source multimodal large language models under both the Workflow and Agentic paradigms, and the results show that Agentic is not consistently superior to Workflow: its gains depend on a model's ability to maintain the initial video anchors over long retrieval chains. Further analysis indicates that goal drift and long-horizon consistency are the core bottlenecks. In sum, VideoDR provides a systematic benchmark for studying video agents in open-web settings and reveals the key challenges for next-generation video deep research agents.
https://arxiv.org/abs/2601.06943
Academic Papers
svg
e720addf670c06976a0321e5392390ef063e93e38b949b7a48e509254c06ec3e
2026-01-13T00:00:00-05:00
SketchJudge: A Diagnostic Benchmark for Grading Hand-drawn Diagrams with Multimodal Large Language Models
arXiv:2601.06944v1 Announce Type: new Abstract: While Multimodal Large Language Models (MLLMs) have achieved remarkable progress in visual understanding, they often struggle when faced with the unstructured and ambiguous nature of human-generated sketches. This limitation is particularly pronounced in the underexplored task of visual grading, where models should not only solve a problem but also diagnose errors in hand-drawn diagrams. Such diagnostic capabilities depend on complex structural, semantic, and metacognitive reasoning. To bridge this gap, we introduce SketchJudge, a novel benchmark tailored for evaluating MLLMs as graders of hand-drawn STEM diagrams. SketchJudge encompasses 1,015 hand-drawn student responses across four domains: geometry, physics, charts, and flowcharts, featuring diverse stylistic variations and distinct error types. Evaluations on SketchJudge demonstrate that even advanced MLLMs lag significantly behind humans, validating the benchmark's effectiveness in exposing the fragility of current vision-language alignment in symbolic and noisy contexts. All data, code, and evaluation scripts are publicly available at https://github.com/yuhangsu82/SketchJudge.
https://arxiv.org/abs/2601.06944
Academic Papers
svg
c2c4ea55d6ce1f528dc9cf5268cc412064cce50bf15d8f031ff7ec6310ad05dd
2026-01-13T00:00:00-05:00
Optimal Extended Formulations from Optimal Dynamic Programming Algorithms
arXiv:2601.06947v1 Announce Type: new Abstract: Vertex Subset Problems (VSPs) are a class of combinatorial optimization problems on graphs where the goal is to find a subset of vertices satisfying a predefined condition. Two prominent approaches for solving VSPs are dynamic programming over tree-like structures, such as tree decompositions or clique decompositions, and linear programming. In this work, we establish a sharp connection between both approaches by showing that if a vertex-subset problem $\Pi$ admits a solution-preserving dynamic programming algorithm that produces tables of size at most $\alpha(k,n)$ when processing a tree decomposition of width at most $k$ of an $n$-vertex graph $G$, then the polytope $P_{\Pi}(G)$ defined as the convex-hull of solutions of $\Pi$ in $G$ has extension complexity at most $O(\alpha(k,n)\cdot n)$. Additionally, this upper bound is optimal under the exponential time hypothesis (ETH). On the one hand, our results imply that ETH-optimal solution-preserving dynamic programming algorithms for combinatorial problems yield optimal-size parameterized extended formulations for the solution polytopes associated with instances of these problems. On the other hand, unconditional lower bounds obtained in the realm of the theory of extended formulations yield unconditional lower bounds on the table complexity of solution-preserving dynamic programming algorithms.
https://arxiv.org/abs/2601.06947
Academic Papers
svg
594604ad39ba0e11fa768155cb0172f366c7f10b02a3f955dfff1f45ebd5e76c
2026-01-13T00:00:00-05:00
Operational Runtime Behavior Mining for Open-Source Supply Chain Security
arXiv:2601.06948v1 Announce Type: new Abstract: Open-source software (OSS) is a critical component of modern software systems, yet supply chain security remains challenging in practice due to unavailable or obfuscated source code. Consequently, security teams often rely on runtime observations collected from sandboxed executions to investigate suspicious third-party components. We present HeteroGAT-Rank, an industry-oriented runtime behavior mining system that supports analyst-in-the-loop supply chain threat investigation. The system models execution-time behaviors of OSS packages as lightweight heterogeneous graphs and applies attention-based graph learning to rank behavioral patterns that are most relevant for security analysis. Rather than aiming for fully automated detection, HeteroGAT-Rank surfaces actionable runtime signals - such as file, network, and command activities - to guide manual investigation and threat hunting. To operate at ecosystem scale, the system decouples offline behavior mining from online analysis and integrates parallel graph construction for efficient processing across multiple ecosystems. An evaluation on a large-scale OSS execution dataset shows that HeteroGAT-Rank effectively highlights meaningful and interpretable behavioral indicators aligned with real-world vulnerability and attack trends, supporting practical security workflows under realistic operational constraints.
https://arxiv.org/abs/2601.06948
Academic Papers
svg
a1ce1debb6b8454ac3aad8b529f03a24a33daa00cdc5b365276bd756effc3fd1
2026-01-13T00:00:00-05:00
X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests
arXiv:2601.06953v1 Announce Type: new Abstract: Competitive programming presents great challenges for Code LLMs due to its intensive reasoning demands and high logical complexity. However, current Code LLMs still rely heavily on real-world data, which limits their scalability. In this paper, we explore a fully synthetic approach: training Code LLMs with entirely generated tasks, solutions, and test cases, to empower code reasoning models without relying on real-world data. To support this, we leverage feature-based synthesis to propose a novel data synthesis pipeline called SynthSmith. SynthSmith shows strong potential in producing diverse and challenging tasks, along with verified solutions and tests, supporting both supervised fine-tuning and reinforcement learning. Based on the proposed synthetic SFT and RL datasets, we introduce the X-Coder model series, which achieves a notable pass rate of 62.9 avg@8 on LiveCodeBench v5 and 55.8 on v6, outperforming DeepCoder-14B-Preview and AReal-boba2-14B despite having only 7B parameters. In-depth analysis reveals that scaling laws hold on our synthetic dataset, and we explore which dimensions are more effective to scale. We further provide insights into code-centric reinforcement learning and highlight the key factors that shape performance through detailed ablations and analysis. Our findings demonstrate that scaling high-quality synthetic data and adopting staged training can greatly advance code reasoning, while mitigating reliance on real-world coding data.
https://arxiv.org/abs/2601.06953
Academic Papers
svg
338ef2cefdde2eeafb76f7969f8832f7fdacd7ee6dc6bc82b99766c66f70f8b6
2026-01-13T00:00:00-05:00
Arithmetic Complexity of Solutions of the Dirichlet Problem
arXiv:2601.06954v1 Announce Type: new Abstract: The classical Dirichlet problem on the unit disk can be solved by different numerical approaches. The two most common and popular approaches are the integration of the associated Poisson integral and, by applying Dirichlet's principle, solving a particular minimization problem. For practical use, these procedures need to be implemented on concrete computing platforms. This paper studies the realization of these procedures on Turing machines, the fundamental model for any digital computer. We show that on this computing platform both approaches to solve Dirichlet's problem yield generally a solution that is not Turing computable, even if the boundary function is computable. Then the paper provides a precise characterization of this non-computability in terms of the Zheng--Weihrauch hierarchy. For both approaches, we derive a lower and an upper bound on the degree of non-computability in the Zheng--Weihrauch hierarchy.
https://arxiv.org/abs/2601.06954
Academic Papers
svg
e936afdc157e7fbb758a0e6337b0cb99d0d8610f597898036910538fa49d3bcd
2026-01-13T00:00:00-05:00
HAS-VQ: Hessian-Adaptive Sparse Vector Quantization for High-Fidelity LLM Compression
arXiv:2601.06959v1 Announce Type: new Abstract: Post-training quantization is essential for deploying Large Language Models (LLMs) on resource-constrained devices. However, standard integer quantization (e.g., INT4) fundamentally degrades performance by imposing a uniform grid on the heavy-tailed distribution of weight parameters, particularly in smaller-scale models (e.g., <2B parameters). We introduce HAS-VQ (Hessian-Adaptive Sparse Vector Quantization), a compression framework that strictly decouples high-sensitivity outliers from the bulk weight distribution using second-order sensitivity analysis. HAS-VQ employs a Hessian-Masked Decoupling strategy to isolate sensitive parameters, followed by robust Vector Quantization (VQ) of the remaining dense body. Crucially, we introduce a residual sparse feedback mechanism that corrects quantization errors in the most sensitive dimensions, ensuring exact reconstruction of outliers. We evaluate HAS-VQ on SmolLM2-1.7B, demonstrating two distinct regimes of superiority: (1) Pareto Dominance over Integer Baselines: At 4.23 effective bits-per-parameter (BPP), we achieve a perplexity of 14.23, significantly outperforming the standard INT4 baseline (20.03 PPL at 4.71 BPP). (2) High-Fidelity Compression: Relative to the FP16 baseline, HAS-VQ achieves a 2.3x reduction in model size (7.03 BPP) while maintaining statistically indistinguishable perplexity (10.12 vs. 10.04), effectively offering a lossless compression alternative for bandwidth-constrained environments. The code is available at https://github.com/VladimerKhasia/HASVQ
https://arxiv.org/abs/2601.06959
Academic Papers
svg
6536581ae3b7e6efea0de6d1a5605536a797e05b3b18d0e629b0626b21780efe
2026-01-13T00:00:00-05:00
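The Hessian-Masked Decoupling idea in the HAS-VQ abstract can be illustrated with a toy sketch. Everything specific below is assumed for illustration only — a squared-weight stand-in for the Hessian diagonal, a uniform scalar grid in place of the paper's vector quantizer, and a 2% outlier budget — so this shows the decoupling principle, not the HAS-VQ implementation:

```python
import numpy as np

def has_vq_sketch(w, hess_diag, outlier_frac=0.02, n_levels=16):
    """Toy sketch of Hessian-masked decoupling (not the paper's exact VQ):
    keep the most Hessian-sensitive weights as exact sparse outliers and
    quantize the remaining dense body on a uniform grid."""
    k = max(1, int(outlier_frac * w.size))
    idx = np.argsort(hess_diag)[-k:]          # most sensitive coordinates
    mask = np.zeros(w.size, dtype=bool)
    mask[idx] = True
    body = np.where(mask, 0.0, w)             # dense body, outliers removed
    lo, hi = body.min(), body.max()
    scale = (hi - lo) / (n_levels - 1) or 1.0
    body_hat = np.round((body - lo) / scale) * scale + lo
    w_hat = np.where(mask, w, body_hat)       # outliers reconstructed exactly
    return w_hat, mask

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
w[:5] *= 50                                   # inject heavy-tailed outliers
h = w ** 2                                    # crude diagonal-Hessian proxy
w_hat, mask = has_vq_sketch(w, h)
```

Removing the outliers first shrinks the grid range for the dense body, which is why this beats putting one uniform grid over the whole heavy-tailed distribution.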
Hardware-in-the-loop wind-tunnel testing of wake interactions between two floating wind turbines
arXiv:2601.06964v1 Announce Type: new Abstract: Wake interactions in floating wind farms are inherently coupled to platform motion, yet most experimental studies to date neglect this two-way coupling by prescribing platform movements. This work presents a hardware-in-the-loop (HIL) wind-tunnel methodology to investigate wake interactions between two floating wind turbines with fully coupled aerodynamic loading and platform dynamics. The approach integrates physical wind-tunnel testing of two scaled rotors with a real-time numerical model that accounts for platform motion, mooring restoring forces, and hydrodynamic loads. Experiments conducted under low-turbulence inflow conditions show that a downstream turbine operating in the wake of an upstream turbine experiences reduced mean thrust and platform deflections due to the decreased inflow velocity, alongside enhanced low-frequency platform motions driven by increased turbulent energy in the wake. The proposed HIL framework provides a controlled experimental basis for studying wake-induced excitation mechanisms and supports the validation of floating wind farm models and control strategies.
https://arxiv.org/abs/2601.06964
Academic Papers
svg
7fab8c2b1d63be6217ed565f8ea135dbc3985b1bcc8674a2ca2924e52447954c
2026-01-13T00:00:00-05:00
Unified Personalized Understanding, Generating and Editing
arXiv:2601.06965v1 Announce Type: new Abstract: Unified large multimodal models (LMMs) have achieved remarkable progress in general-purpose multimodal understanding and generation. However, they still operate under a ``one-size-fits-all'' paradigm and struggle to model user-specific concepts (e.g., generate a photo of \texttt{}) in a consistent and controllable manner. Existing personalization methods typically rely on external retrieval, which is inefficient and poorly integrated into unified multimodal pipelines. Recent personalized unified models introduce learnable soft prompts to encode concept information, yet they either couple understanding and generation or depend on complex multi-stage training, leading to cross-task interference and ultimately to fuzzy or misaligned personalized knowledge. We present \textbf{OmniPersona}, an end-to-end personalization framework for unified LMMs that, for the first time, integrates personalized understanding, generation, and image editing within a single architecture. OmniPersona introduces structurally decoupled concept tokens, allocating dedicated subspaces for different tasks to minimize interference, and incorporates an explicit knowledge replay mechanism that propagates personalized attribute knowledge across tasks, enabling consistent personalized behavior. To systematically evaluate unified personalization, we propose \textbf{\texttt{OmniPBench}}, extending the public UnifyBench concept set with personalized editing tasks and cross-task evaluation protocols integrating understanding, generation, and editing. Experimental results demonstrate that OmniPersona delivers competitive and robust performance across diverse personalization tasks. We hope OmniPersona will serve as a strong baseline and spur further research on controllable, unified personalization.
https://arxiv.org/abs/2601.06965
Academic Papers
svg
3fb339ed381038a96c84a70b192a97d0f203980e1f0f340a9da8e1bf9ad26619
2026-01-13T00:00:00-05:00
RealMem: Benchmarking LLMs in Real-World Memory-Driven Interaction
arXiv:2601.06966v1 Announce Type: new Abstract: As Large Language Models (LLMs) evolve from static dialogue interfaces to autonomous general agents, effective memory is paramount to ensuring long-term consistency. However, existing benchmarks primarily focus on casual conversation or task-oriented dialogue, failing to capture **"long-term project-oriented"** interactions where agents must track evolving goals. To bridge this gap, we introduce **RealMem**, the first benchmark grounded in realistic project scenarios. RealMem comprises over 2,000 cross-session dialogues across eleven scenarios, utilizing natural user queries for evaluation. We propose a synthesis pipeline that integrates Project Foundation Construction, Multi-Agent Dialogue Generation, and Memory and Schedule Management to simulate the dynamic evolution of memory. Experiments reveal that current memory systems face significant challenges in managing the long-term project states and dynamic context dependencies inherent in real-world projects. Our code and datasets are available at [https://github.com/AvatarMemory/RealMemBench](https://github.com/AvatarMemory/RealMemBench).
https://arxiv.org/abs/2601.06966
Academic Papers
svg
c90be628c5186e9a7ead1d8a3dbe3f449f1b69bf08ecd8339562d1459905a031
2026-01-13T00:00:00-05:00
A Robust Certified Machine Unlearning Method Under Distribution Shift
arXiv:2601.06967v1 Announce Type: new Abstract: The Newton method has been widely adopted to achieve certified unlearning. A critical assumption in existing approaches is that the data requested for unlearning are selected i.i.d. (independent and identically distributed). However, the problem of certified unlearning under non-i.i.d. deletions remains largely unexplored. In practice, unlearning requests are inherently biased, leading to non-i.i.d. deletions and causing distribution shifts between the original and retained datasets. In this paper, we show that certified unlearning with the Newton method becomes inefficient and ineffective under non-i.i.d. unlearning sets. We then propose a distribution-aware certified unlearning framework based on iterative Newton updates constrained by a trust region. Our method provides a closer approximation to the retrained model and yields a tighter pre-run bound on the gradient residual, thereby ensuring efficient (epsilon, delta)-certified unlearning. To demonstrate its practical effectiveness under distribution shift, we also conduct extensive experiments across multiple evaluation metrics, providing a comprehensive assessment of our approach.
https://arxiv.org/abs/2601.06967
Academic Papers
svg
712010ca0ff6b056125945f5552bc00b9c49048d19e01153bdca73dece84ed67
2026-01-13T00:00:00-05:00
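For a quadratic objective such as ridge regression, the baseline Newton unlearning update that this line of work builds on removes a deletion set exactly in one step, even when the deletion is biased (non-i.i.d.); the paper's trust-region iterative variant and certification bounds target the general, non-quadratic case, which this sketch does not reproduce. All data and the deletion rule below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 200, 5, 1e-2
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def fit(X, y):
    # closed-form ridge regression: (X'X + lam I)^{-1} X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

theta = fit(X, y)

# Biased (non-i.i.d.) deletion request: drop the 30 largest-response rows.
drop = np.argsort(y)[-30:]
keep = np.setdiff1d(np.arange(n), drop)
Xr, yr = X[keep], y[keep]

# One Newton step on the retained objective; for this quadratic loss the
# step lands exactly on the retrained minimizer.
H = Xr.T @ Xr + lam * np.eye(d)
g = Xr.T @ (Xr @ theta - yr) + lam * theta
theta_unlearned = theta - np.linalg.solve(H, g)
```

With non-quadratic losses a single Newton step is only an approximation, which is where the abstract's trust-region constraint and tighter gradient-residual bound come in.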
Generalization Bounds for Transformer Channel Decoders
arXiv:2601.06969v1 Announce Type: new Abstract: Transformer channel decoders, such as the Error Correction Code Transformer (ECCT), have shown strong empirical performance in channel decoding, yet their generalization behavior remains theoretically unclear. This paper studies the generalization performance of ECCT from a learning-theoretic perspective. By establishing a connection between multiplicative noise estimation errors and bit-error-rate (BER), we derive an upper bound on the generalization gap via bit-wise Rademacher complexity. The resulting bound characterizes the dependence on code length, model parameters, and training set size, and applies to both single-layer and multi-layer ECCTs. We further show that parity-check-based masked attention induces sparsity that reduces the covering number, leading to a tighter generalization bound. To the best of our knowledge, this work provides the first theoretical generalization guarantees for this class of decoders.
https://arxiv.org/abs/2601.06969
Academic Papers
svg
eac833c24b71ce49d67bb6dfadb5479b5fc7292e9a781050a984e8083b31fe88
2026-01-13T00:00:00-05:00
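The abstract does not state the bound itself. For orientation, the classical Rademacher template it instantiates — written here for a generic $[0,1]$-valued loss class, not the paper's bit-wise, BER-specific analogue — reads:

```latex
% Classical Rademacher generalization template (generic [0,1]-valued
% loss class; the paper derives a bit-wise, BER-specific analogue):
\[
  R(f) \;\le\; \widehat{R}_n(f)
  \;+\; 2\,\mathfrak{R}_n(\mathcal{F})
  \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}}
  \qquad \text{for all } f \in \mathcal{F},
\]
% with probability at least $1-\delta$ over the $n$ i.i.d. training samples,
% where $\widehat{R}_n$ is the empirical risk and $\mathfrak{R}_n(\mathcal{F})$
% the Rademacher complexity of the loss class.
```

The abstract's sparsity argument enters through $\mathfrak{R}_n(\mathcal{F})$: parity-check-based masked attention shrinks the covering number, and hence this complexity term.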
Categorize Early, Integrate Late: Divergent Processing Strategies in Automatic Speech Recognition
arXiv:2601.06972v1 Announce Type: new Abstract: In speech language modeling, two architectures dominate the frontier: the Transformer and the Conformer. However, it remains unknown whether their comparable performance stems from convergent processing strategies or distinct architectural inductive biases. We introduce Architectural Fingerprinting, a probing framework that isolates the effect of architecture on representation, and apply it to a controlled suite of 24 pre-trained encoders (39M-3.3B parameters). Our analysis reveals divergent hierarchies: Conformers implement a "Categorize Early" strategy, resolving phoneme categories 29% earlier in depth and speaker gender by 16% depth. In contrast, Transformers "Integrate Late," deferring phoneme, accent, and duration encoding to deep layers (49-57%). These fingerprints suggest design heuristics: Conformers' front-loaded categorization may benefit low-latency streaming, while Transformers' deep integration may favor tasks requiring rich context and cross-utterance normalization.
https://arxiv.org/abs/2601.06972
Academic Papers
svg
60647fb175b377bb36a7e4cd5415c9e021cfd9edddb31f7f2dbd2362c3c139ea
2026-01-13T00:00:00-05:00
LLMs Can't Play Hangman: On the Necessity of a Private Working Memory for Language Agents
arXiv:2601.06973v1 Announce Type: new Abstract: As LLMs move from text completion toward autonomous agents, they remain constrained by the standard chat interface, which lacks private working memory. This raises a fundamental question: can agents reliably perform interactive tasks that depend on hidden state? We define Private State Interactive Tasks (PSITs), which require agents to generate and maintain hidden information while producing consistent public responses. We show theoretically that any agent restricted to the public conversation history cannot simultaneously preserve secrecy and consistency in PSITs, yielding an impossibility theorem. To empirically validate this limitation, we introduce a self-consistency testing protocol that evaluates whether agents can maintain a hidden secret across forked dialogue branches. Standard chat-based LLMs and retrieval-based memory baselines fail this test regardless of scale, demonstrating that semantic retrieval does not enable true state maintenance. To address this, we propose a novel architecture incorporating an explicit private working memory; we demonstrate that this mechanism restores consistency, establishing private state as a necessary component for interactive language agents.
https://arxiv.org/abs/2601.06973
Academic Papers
svg
b1d20069d951acf27141ad665b4d9e70c210f7bf5f20ec25891ce7a60952b86f
2026-01-13T00:00:00-05:00
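The architecture the abstract proposes — a private working memory alongside the public transcript — can be sketched in a few lines. The class below is a hypothetical illustration (the names and two-word vocabulary are invented): the secret word never enters the public transcript, yet forked dialogue branches stay mutually consistent because the private state is carried across the fork:

```python
import random

class HangmanAgent:
    """Sketch of the paper's point: the hidden word lives in a private
    working memory, never in the public transcript, yet answers stay
    consistent across forked dialogue branches."""
    def __init__(self, seed):
        self._private = {"word": random.Random(seed).choice(["robot", "agent"])}
        self.transcript = []                   # public channel only

    def fork(self):
        # fork a dialogue branch: copy both the public transcript and
        # the private working memory
        child = HangmanAgent.__new__(HangmanAgent)
        child._private = dict(self._private)
        child.transcript = list(self.transcript)
        return child

    def guess(self, letter):
        hit = letter in self._private["word"]
        self.transcript.append((letter, hit))  # the word itself is never emitted
        return hit
```

An agent reconstructed from the public transcript alone has no `_private` dict to consult, which is exactly the failure mode the impossibility theorem formalizes for chat-interface LLMs.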
UETQuintet at BioCreative IX - MedHopQA: Enhancing Biomedical QA with Selective Multi-hop Reasoning and Contextual Retrieval
arXiv:2601.06974v1 Announce Type: new Abstract: Biomedical Question Answering systems play a critical role in processing complex medical queries, yet they often struggle with the intricate nature of medical data and the demand for multi-hop reasoning. In this paper, we propose a model designed to effectively address both direct and sequential questions. While sequential questions are decomposed into a chain of sub-questions to perform reasoning across a chain of steps, direct questions are processed directly to ensure efficiency and minimise processing overhead. Additionally, we leverage multi-source information retrieval and in-context learning to provide rich, relevant context for generating answers. We evaluated our model on the BioCreative IX - MedHopQA Shared Task datasets. Our approach achieves an Exact Match score of 0.84, ranking second on the current leaderboard. These results highlight the model's capability to meet the challenges of Biomedical Question Answering, offering a versatile solution for advancing medical research and practice.
https://arxiv.org/abs/2601.06974
Academic Papers
svg
f2df80a9fa92785c65dadc059dbd4265c1183ab9b0bc9a214f8a615b89a240ca
2026-01-13T00:00:00-05:00
MedTutor: A Retrieval-Augmented LLM System for Case-Based Medical Education
arXiv:2601.06979v1 Announce Type: new Abstract: The learning process for medical residents presents significant challenges, demanding both the ability to interpret complex case reports and the rapid acquisition of accurate medical knowledge from reliable sources. Residents typically study case reports and engage in discussions with peers and mentors, but finding relevant educational materials and evidence to support their learning from these cases is often time-consuming and challenging. To address this, we introduce MedTutor, a novel system designed to augment resident training by automatically generating evidence-based educational content and multiple-choice questions from clinical case reports. MedTutor leverages a Retrieval-Augmented Generation (RAG) pipeline that takes clinical case reports as input and produces targeted educational materials. The system's architecture features a hybrid retrieval mechanism that synergistically queries a local knowledge base of medical textbooks and academic literature (using PubMed, Semantic Scholar APIs) for the latest related research, ensuring the generated content is both foundationally sound and current. The retrieved evidence is filtered and ordered using a state-of-the-art reranking model and then an LLM generates the final long-form output describing the main educational content regarding the case report. We conduct a rigorous evaluation of the system. First, three radiologists assessed the quality of outputs, finding them to be of high clinical and educational value. Second, we perform a large-scale evaluation using an LLM-as-a-Judge to understand if LLMs can be used to evaluate the output of the system. Our analysis using correlation between LLM outputs and human expert judgments reveals a moderate alignment and highlights the continued necessity of expert oversight.
https://arxiv.org/abs/2601.06979
Academic Papers
svg
f63ee3775c260db0987b12b96f6a418ab4316c2e76ccf24a2b78e9d12e88dc6f
2026-01-13T00:00:00-05:00
A New Perspective on Drawing Venn Diagrams for Data Visualization
arXiv:2601.06980v1 Announce Type: new Abstract: We introduce VennFan, a method for generating $n$-set Venn diagrams based on the polar coordinate projection of trigonometric boundaries, resulting in Venn diagrams that resemble a set of fan blades. Unlike most classical constructions, our method emphasizes readability and customizability by using shaped sinusoids and amplitude scaling. We describe both sine- and cosine-based variants of VennFan and propose an automatic label placement heuristic tailored to these fan-like layouts. VennFan is available as a Python package (https://pypi.org/project/vennfan/).
https://arxiv.org/abs/2601.06980
Academic Papers
svg
51817a0ecb86b58c0ee822c8e9d31e96a65bd044a3de0093efae284d37964437
2026-01-13T00:00:00-05:00
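The abstract does not spell out VennFan's exact parameterization. Below is a minimal sketch of the underlying idea — sinusoids of doubling frequency carve all $2^n$ intersection patterns into angular sectors, which a polar projection then renders as fan blades — assuming a simple sign-of-sine membership rule that may well differ from the package's shaped, amplitude-scaled boundaries:

```python
import math

def blade_membership(x, i):
    """Set i (1-indexed) contains angle-fraction x in [0, 1) iff its
    sinusoidal boundary sin(2^i * pi * x) is positive, i.e. iff the i-th
    binary digit of x is 0. Mapping x to an angle turns each of the 2^n
    resulting sectors into a slice of a fan blade."""
    return math.sin((2 ** i) * math.pi * x) > 0

def region_pattern(x, n):
    # membership vector of x across all n sets
    return tuple(blade_membership(x, i) for i in range(1, n + 1))

n = 3
# sample the midpoint of each of the 2^n finest angular sectors
patterns = {region_pattern((k + 0.5) / 2 ** n, n) for k in range(2 ** n)}
```

Doubling the frequency per set is what guarantees every one of the $2^n$ membership patterns appears exactly once around the circle, which is the property any valid $n$-set Venn construction needs.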
Directional Selective Fixed-Filter Active Noise Control Based on a Convolutional Neural Network in Reverberant Environments
arXiv:2601.06981v1 Announce Type: new Abstract: Selective fixed-filter active noise control (SFANC) is a novel approach capable of mitigating noise with varying frequency characteristics. It offers faster response and greater computational efficiency compared to traditional adaptive algorithms. However, spatial factors, particularly the influence of the noise source location, are often overlooked. Some existing studies have explored the impact of the direction-of-arrival (DoA) of the noise source on ANC performance, but they are mostly limited to free-field conditions and do not consider the more complex indoor reverberant environments. To address this gap, this paper proposes a learning-based directional SFANC method that incorporates the DoA of the noise source in reverberant environments. In this framework, multiple reference signals are processed by a convolutional neural network (CNN) to estimate the azimuth and elevation angles of the noise source, as well as to identify the most appropriate control filter for effective noise cancellation. Compared to traditional adaptive algorithms, the proposed approach achieves superior noise reduction with shorter response times, even in the presence of reverberations.
https://arxiv.org/abs/2601.06981
Academic Papers
svg
0f018866e22bac7417faae99c037394ca5779cf375e0ada3de79b9623043b16d
2026-01-13T00:00:00-05:00
FinCARDS: Card-Based Analyst Reranking for Financial Document Question Answering
arXiv:2601.06992v1 Announce Type: new Abstract: Financial question answering (QA) over long corporate filings requires evidence to satisfy strict constraints on entities, financial metrics, fiscal periods, and numeric values. However, existing LLM-based rerankers primarily optimize semantic relevance, leading to unstable rankings and opaque decisions on long documents. We propose FinCards, a structured reranking framework that reframes financial evidence selection as constraint satisfaction under a finance-aware schema. FinCards represents filing chunks and questions using aligned schema fields (entities, metrics, periods, and numeric spans), enabling deterministic field-level matching. Evidence is selected via a multi-stage tournament reranking with stability-aware aggregation, producing auditable decision traces. Across two corporate filing QA benchmarks, FinCards substantially improves early-rank retrieval over both lexical and LLM-based reranking baselines, while reducing ranking variance, without requiring model fine-tuning or unpredictable inference budgets. Our code is available at https://github.com/XanderZhou2022/FINCARDS.
https://arxiv.org/abs/2601.06992
Academic Papers
svg
8ec7c5b96ebef90d71339c3811bbf4336573269bb6ac786669d0d35a123354eb
2026-01-13T00:00:00-05:00
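The field-level matching step FinCards describes can be sketched deterministically. The `Card` schema and scoring below are hypothetical (the abstract names the fields but not their representation), and the multi-stage tournament reranking is collapsed into a single sort for brevity:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    """Hypothetical schema card; field names follow the abstract's
    description (entities, metrics, fiscal periods)."""
    entities: set = field(default_factory=set)
    metrics: set = field(default_factory=set)
    periods: set = field(default_factory=set)

def field_match_score(question: Card, chunk: Card) -> int:
    # deterministic constraint satisfaction: one point per question
    # field that the chunk's card fully covers
    return sum(
        bool(q) and q <= c
        for q, c in [(question.entities, chunk.entities),
                     (question.metrics, chunk.metrics),
                     (question.periods, chunk.periods)]
    )

q = Card({"AcmeCo"}, {"revenue"}, {"FY2023"})
chunks = [
    Card({"AcmeCo"}, {"revenue"}, {"FY2023"}),     # satisfies every constraint
    Card({"AcmeCo"}, {"net income"}, {"FY2023"}),  # wrong metric
    Card({"OtherCo"}, {"revenue"}, {"FY2022"}),    # wrong entity and period
]
ranked = sorted(chunks, key=lambda c: field_match_score(q, c), reverse=True)
```

Because the score is a count of satisfied constraints rather than an opaque relevance logit, every ranking decision leaves the kind of auditable trace the abstract emphasizes.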
Can Textual Reasoning Improve the Performance of MLLMs on Fine-grained Visual Classification?
arXiv:2601.06993v1 Announce Type: new Abstract: Multi-modal large language models (MLLMs) exhibit strong general-purpose capabilities, yet still struggle on Fine-Grained Visual Classification (FGVC), a core perception task that requires subtle visual discrimination and is crucial for many real-world applications. A widely adopted strategy for boosting performance on challenging tasks such as math and coding is Chain-of-Thought (CoT) reasoning. However, several prior works have reported that CoT can actually harm performance on visual perception tasks. These studies, though, examine the issue from relatively narrow angles and leave open why CoT degrades perception-heavy performance. We systematically re-examine the role of CoT in FGVC through the lenses of zero-shot evaluation and multiple training paradigms. Across these settings, we uncover a central paradox: the degradation induced by CoT is largely driven by reasoning length, with longer textual reasoning consistently lowering classification accuracy. We term this phenomenon the ``Cost of Thinking''. Building on this finding, we make two key contributions: (1) \alg, a simple and general plug-and-play normalization method for multi-reward optimization that balances heterogeneous reward signals, and (2) ReFine-RFT, a framework that combines ensemble rewards with \alg to constrain reasoning length while providing dense accuracy-oriented feedback. Extensive experiments demonstrate the effectiveness of our findings and the proposed ReFine-RFT, achieving state-of-the-art performance across FGVC benchmarks. Code and models are available at \href{https://github.com/jiezhu23/ReFine-RFT}{Project Link}.
https://arxiv.org/abs/2601.06993
Academic Papers
svg
84ac7964b6ce1d18886293c061835a92817148a376910d8e6b3a0a270b106a6d
2026-01-13T00:00:00-05:00
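The abstract leaves its normalization method unnamed (rendered as \alg), so its exact form is unknown. As a hedged stand-in, standard per-dimension standardization illustrates the kind of rescaling that lets heterogeneous rewards — say, an accuracy signal and a reasoning-length penalty on very different scales — contribute comparably to one objective:

```python
import numpy as np

def normalize_rewards(reward_matrix):
    """Hypothetical stand-in for the paper's unnamed plug-and-play
    normalization: standardize each reward dimension across the batch
    so heterogeneous scales contribute comparably before aggregation."""
    r = np.asarray(reward_matrix, dtype=float)   # shape [batch, n_rewards]
    mu = r.mean(axis=0, keepdims=True)
    sd = r.std(axis=0, keepdims=True) + 1e-8     # guard against zero variance
    return (r - mu) / sd

batch = [[0.9, 120.0],   # [accuracy reward, reasoning length in tokens]
         [0.1, 480.0],
         [0.5, 300.0]]
z = normalize_rewards(batch)
combined = z.sum(axis=1)  # equal-weight aggregate after rescaling
```

Without such rescaling, the raw length column would dominate the sum by two orders of magnitude, drowning out the accuracy signal.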
ObjSplat: Geometry-Aware Gaussian Surfels for Active Object Reconstruction
arXiv:2601.06997v1 Announce Type: new Abstract: Autonomous high-fidelity object reconstruction is fundamental for creating digital assets and bridging the simulation-to-reality gap in robotics. We present ObjSplat, an active reconstruction framework that leverages Gaussian surfels as a unified representation to progressively reconstruct unknown objects with both photorealistic appearance and accurate geometry. Addressing the limitations of conventional opacity or depth-based cues, we introduce a geometry-aware viewpoint evaluation pipeline that explicitly models back-face visibility and occlusion-aware multi-view covisibility, reliably identifying under-reconstructed regions even on geometrically complex objects. Furthermore, to overcome the limitations of greedy planning strategies, ObjSplat employs a next-best-path (NBP) planner that performs multi-step lookahead on a dynamically constructed spatial graph. By jointly optimizing information gain and movement cost, this planner generates globally efficient trajectories. Extensive experiments in simulation and on real-world cultural artifacts demonstrate that ObjSplat produces physically consistent models within minutes, achieving superior reconstruction fidelity and surface completeness while significantly reducing scan time and path length compared to state-of-the-art approaches. Project page: https://li-yuetao.github.io/ObjSplat-page/ .
https://arxiv.org/abs/2601.06997
Academic Papers
svg
cd8082c521dee7ea2f0c721c9ff921a216c182796c9e63a09f60ee6f1215c436
2026-01-13T00:00:00-05:00
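The contrast between greedy next-best-view selection and the multi-step next-best-path (NBP) lookahead described above can be sketched on a toy spatial graph. The graph, gains, and costs below are invented, and an exhaustive depth-limited search stands in for whatever pruning the actual planner uses:

```python
def best_path(graph, gains, start, depth, visited=frozenset()):
    """Exhaustive multi-step lookahead on a spatial graph: maximize
    (information gain of newly visited nodes) minus (movement cost).
    Toy stand-in for the paper's next-best-path planner."""
    visited = visited | {start}
    best = (0.0, [start])                    # (utility, path)
    if depth == 0:
        return best
    for nxt, cost in graph.get(start, []):
        step = (0.0 if nxt in visited else gains[nxt]) - cost
        sub_u, sub_p = best_path(graph, gains, nxt, depth - 1, visited)
        if step + sub_u > best[0]:
            best = (step + sub_u, [start] + sub_p)
    return best

# A low-gain view (B) hides nothing further; a zero-gain waypoint (C)
# unlocks a high-gain view (D).
graph = {"A": [("B", 0.5), ("C", 0.5)], "C": [("D", 0.5)]}
gains = {"B": 1.0, "C": 0.0, "D": 10.0}
greedy = best_path(graph, gains, "A", 1)     # one-step horizon picks B
planned = best_path(graph, gains, "A", 2)    # lookahead routes A -> C -> D
```

The crafted graph shows why greedy planning fails: the zero-gain waypoint C is only worth visiting once the planner can see past it, which is the motivation the abstract gives for multi-step lookahead with joint gain/cost optimization.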
Spatial Multi-Task Learning for Breast Cancer Molecular Subtype Prediction from Single-Phase DCE-MRI
arXiv:2601.07001v1 Announce Type: new Abstract: Accurate molecular subtype classification is essential for personalized breast cancer treatment, yet conventional immunohistochemical analysis relies on invasive biopsies and is prone to sampling bias. Although dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) enables non-invasive tumor characterization, clinical workflows typically acquire only single-phase post-contrast images to reduce scan time and contrast agent dose. In this study, we propose a spatial multi-task learning framework for breast cancer molecular subtype prediction from clinically practical single-phase DCE-MRI. The framework simultaneously predicts estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2) status, and the Ki-67 proliferation index -- biomarkers that collectively define molecular subtypes. The architecture integrates a deep feature extraction network with multi-scale spatial attention to capture intratumoral and peritumoral characteristics, together with a region-of-interest weighting module that emphasizes the tumor core, rim, and surrounding tissue. Multi-task learning exploits biological correlations among biomarkers through shared representations with task-specific prediction branches. Experiments on a dataset of 960 cases (886 internal cases split 7:1:2 for training/validation/testing, and 74 external cases evaluated via five-fold cross-validation) demonstrate that the proposed method achieves an AUC of 0.893, 0.824, and 0.857 for ER, PR, and HER2 classification, respectively, and a mean absolute error of 8.2\% for Ki-67 regression, significantly outperforming radiomics and single-task deep learning baselines. These results indicate the feasibility of accurate, non-invasive molecular subtype prediction using standard imaging protocols.
https://arxiv.org/abs/2601.07001
Academic Papers
svg
886d547d65a38649bf3db588bbcc049c626ff14c49dd3aa19adfdbddf5832ddb
2026-01-13T00:00:00-05:00
MemTrust: A Zero-Trust Architecture for Unified AI Memory System
arXiv:2601.07004v1 Announce Type: new Abstract: AI memory systems are evolving toward unified context layers that enable efficient cross-agent collaboration and multi-tool workflows, facilitating better accumulation of personal data and learning of user preferences. However, centralization creates a trust crisis where users must entrust cloud providers with sensitive digital memory data. We identify a core tension between personalization demands and data sovereignty: centralized memory systems enable efficient cross-agent collaboration but expose users' sensitive data to cloud provider risks, while private deployments provide security but limit collaboration. To resolve this tension, we aim to achieve local-equivalent security while enabling superior maintenance efficiency and collaborative capabilities. We propose a five-layer architecture abstracting common functional components of AI memory systems: Storage, Extraction, Learning, Retrieval, and Governance. By applying TEE protection to each layer, we establish a trustworthy framework. Based on this, we design MemTrust, a hardware-backed zero-trust architecture that provides cryptographic guarantees across all layers. Our contributions include the five-layer abstraction, "Context from MemTrust" protocol for cross-application sharing, side-channel hardened retrieval with obfuscated access patterns, and comprehensive security analysis. The architecture enables third-party developers to port existing systems with acceptable development costs, achieving system-wide trustworthiness. We believe that AI memory plays a crucial role in enhancing the efficiency and collaboration of agents and AI tools. AI memory will become the foundational infrastructure for AI agents, and MemTrust serves as a universal trusted framework for AI memory systems, with the goal of becoming the infrastructure of memory infrastructure.
https://arxiv.org/abs/2601.07004
Academic Papers
svg
083111b184034be5f11bb24b22847b5fb1e82c67daca5735d48cfa1de82b5ce6
2026-01-13T00:00:00-05:00
MicLog: Towards Accurate and Efficient LLM-based Log Parsing via Progressive Meta In-Context Learning
arXiv:2601.07005v1 Announce Type: new Abstract: Log parsing converts semi-structured logs into structured templates, forming a critical foundation for downstream analysis. Traditional syntax and semantic-based parsers often struggle with semantic variations in evolving logs and data scarcity stemming from their limited domain coverage. Recent large language model (LLM)-based parsers leverage in-context learning (ICL) to extract semantics from examples, demonstrating superior accuracy. However, LLM-based parsers face two main challenges: 1) underutilization of ICL capabilities, particularly in dynamic example selection and cross-domain generalization, leading to inconsistent performance; 2) time-consuming and costly LLM querying. To address these challenges, we present MicLog, the first progressive meta in-context learning (ProgMeta-ICL) log parsing framework that combines meta-learning with ICL on small open-source LLMs (i.e., Qwen-2.5-3B). Specifically, MicLog: i) enhances LLMs' ICL capability through a zero-shot to k-shot ProgMeta-ICL paradigm, employing weighted DBSCAN candidate sampling and enhanced BM25 demonstration selection; ii) accelerates parsing via a multi-level pre-query cache that dynamically matches and refines recently parsed templates. Evaluated on Loghub-2.0, MicLog achieves 10.3% higher parsing accuracy than the state-of-the-art parser while reducing parsing time by 42.4%.
https://arxiv.org/abs/2601.07005
Academic Papers
svg
019a4b5d25c431304aad728bdc55294232e280276914ed12eb56d22496682feb
2026-01-13T00:00:00-05:00
LLM Performance Predictors: Learning When to Escalate in Hybrid Human-AI Moderation Systems
arXiv:2601.07006v1 Announce Type: new Abstract: As LLMs are increasingly integrated into human-in-the-loop content moderation systems, a central challenge is deciding when their outputs can be trusted versus when escalation for human review is preferable. We propose a novel framework for supervised LLM uncertainty quantification, learning a dedicated meta-model based on LLM Performance Predictors (LPPs) derived from LLM outputs: log-probabilities, entropy, and novel uncertainty attribution indicators. We demonstrate that our method enables cost-aware selective classification in real-world human-AI workflows: escalating high-risk cases while automating the rest. Experiments across state-of-the-art LLMs, including both off-the-shelf (Gemini, GPT) and open-source (Llama, Qwen), on multimodal and multilingual moderation tasks, show significant improvements over existing uncertainty estimators in accuracy-cost trade-offs. Beyond uncertainty estimation, the LPPs enhance explainability by providing new insights into failure conditions (e.g., ambiguous content vs. under-specified policy). This work establishes a principled framework for uncertainty-aware, scalable, and responsible human-AI moderation workflows.
https://arxiv.org/abs/2601.07006
Academic Papers
svg
982576aacae56d3b013ac34e72900b726d50129c700bc85be313780b89c4c27d
2026-01-13T00:00:00-05:00
Lexicalized Constituency Parsing for Middle Dutch: Low-resource Training and Cross-Domain Generalization
arXiv:2601.07008v1 Announce Type: new Abstract: Recent years have seen growing interest in applying neural networks and contextualized word embeddings to the parsing of historical languages. However, most advances have focused on dependency parsing, while constituency parsing for low-resource historical languages like Middle Dutch has received little attention. In this paper, we adapt a transformer-based constituency parser to Middle Dutch, a highly heterogeneous and low-resource language, and investigate methods to improve both its in-domain and cross-domain performance. We show that joint training with higher-resource auxiliary languages increases F1 scores by up to 0.73, with the greatest gains achieved from languages that are geographically and temporally closer to Middle Dutch. We further evaluate strategies for leveraging newly annotated data from additional domains, finding that fine-tuning and data combination yield comparable improvements, and our neural parser consistently outperforms the currently used PCFG-based parser for Middle Dutch. We further explore feature-separation techniques for domain adaptation and demonstrate that a minimum threshold of approximately 200 examples per domain is needed to effectively enhance cross-domain performance.
https://arxiv.org/abs/2601.07008
Academic Papers
svg
4f96393130d2ecfd4a7df0f658e39c7d9add320b6d5ea34ce84f3adb315f62df
2026-01-13T00:00:00-05:00
A Sliding Mode Controller Based on Timoshenko Beam Theory Developed for a Tendon-Driven Robotic Wrist
arXiv:2601.07009v1 Announce Type: new Abstract: Development of dexterous robotic joints is essential for advancing manipulation capabilities in robotic systems. This paper presents the design and implementation of a tendon-driven robotic wrist joint together with an efficient Sliding Mode Controller (SMC) for precise motion control. The wrist mechanism is modeled using a Timoshenko-based approach to accurately capture its kinematic and dynamic properties, which serve as the foundation for tendon force calculations within the controller. The proposed SMC is designed to deliver fast dynamic response and computational efficiency, enabling accurate trajectory tracking under varying operating conditions. The effectiveness of the controller is validated through comparative analyses with existing controllers for similar wrist mechanisms. The proposed SMC demonstrates superior performance in both simulation and experimental studies. The Root Mean Square Error (RMSE) in simulation is approximately 1.67e-2 radians, while experimental validation yields an error of 0.2 radians. Additionally, the controller achieves a settling time of less than 3 seconds and a steady-state error below 1e-1 radians, consistently observed across both simulation and experimental evaluations. Comparative analyses confirm that the developed SMC surpasses alternative control strategies in motion accuracy, rapid convergence, and steady-state precision. This work establishes a foundation for future exploration of tendon-driven wrist mechanisms and control strategies in robotic applications.
https://arxiv.org/abs/2601.07009
Academic Papers
svg
2413f46d1fd36aff3766591a0d610b637eb8ad938d1304a1875d9f64b2ada134
2026-01-13T00:00:00-05:00
Belief in False Information: A Human-Centered Security Risk in Sociotechnical Systems
arXiv:2601.07016v1 Announce Type: new Abstract: This paper provides a comprehensive literature review on the belief in false information, including misinformation, disinformation, and fake information. It addresses the increasing societal concern regarding false information, which is fueled by technological progress, especially advancements in artificial intelligence. This review systematically identifies and categorizes factors that influence the belief in false information. The review identifies 24 influence factors grouped into six main categories: demographic factors, personality traits, psychological factors, policy and values, media consumption, and preventive factors. Key findings highlight that lower education levels, high extraversion, low agreeableness, high neuroticism, and low cognitive reflection significantly increase belief in false information. The effectiveness of preventive strategies like labeling false information and promoting reflection about correctness is also discussed. This literature review conceptualizes belief in false information as a human-centered security risk in sociotechnical systems, as it can be exploited to manipulate decisions, undermine trust, and increase susceptibility to social engineering. It aims to inform preventive strategies that strengthen socio-technical security and societal resilience.
https://arxiv.org/abs/2601.07016
Academic Papers
svg
ed8cd285c6988e612053118ee0cd1cd7529339cdefa1bb1271575d849ab994fd
2026-01-13T00:00:00-05:00
The Ill-Posed Foundations of Physics-Informed Neural Networks and Their Finite-Difference Variants
arXiv:2601.07017v1 Announce Type: new Abstract: Physics-informed neural networks based on automatic differentiation (AD-PINNs) and their finite-difference counterparts (FD-PINNs) are widely used for solving partial differential equations (PDEs), yet their analytical properties remain poorly understood. This work provides a unified mathematical foundation for both formulations. Under mild regularity assumptions on the activation function and for sufficiently wide neural networks of depth at least two, we prove that both the AD- and FD-PINN optimization problems are ill-posed: whenever a minimizer exists, there are in fact infinitely many, and uniqueness fails regardless of the choice of collocation points or finite-difference stencil. Nevertheless, we establish two structural properties. First, whenever the underlying PDE or its finite-difference discretization admits a solution, the corresponding AD-PINN or FD-PINN loss also admits a minimizer, realizable by a neural network of finite width. Second, FD-PINNs are tightly coupled to the underlying finite-difference scheme: every FD-PINN minimizer agrees with a finite-difference minimizer on the grid, and in regimes where the discrete PDE solution is unique, all zero-loss FD-PINN minimizers coincide with the discrete PDE solution on the stencil. Numerical experiments illustrate these theoretical insights: FD-PINNs remain stable in representative forward and inverse problems, including settings where AD-PINNs may fail to converge. We also include an inverse problem with noisy data, demonstrating that FD-PINNs retain robustness in this setting as well. Taken together, our results clarify the analytical limitations of AD-PINNs and explain the structural reasons for the more stable behavior observed in FD-PINNs.
https://arxiv.org/abs/2601.07017
Academic Papers
svg
f91b370c9b60a28614377f7f8f62029f0d8f7ba45d746aef5e7ba2c6c158698d
2026-01-13T00:00:00-05:00
Zer0n: An AI-Assisted Vulnerability Discovery and Blockchain-Backed Integrity Framework
arXiv:2601.07019v1 Announce Type: new Abstract: As vulnerability research increasingly adopts generative AI, a critical reliance on opaque model outputs has emerged, creating a "trust gap" in security automation. We address this by introducing Zer0n, a framework that anchors the reasoning capabilities of Large Language Models (LLMs) to the immutable audit trails of blockchain technology. Specifically, we integrate Gemini 2.0 Pro for logic-based vulnerability detection with the Avalanche C-Chain for tamper-evident artifact logging. Unlike fully decentralized solutions that suffer from high latency, Zer0n employs a hybrid architecture: execution remains off-chain for performance, while integrity proofs are finalized on-chain. Our evaluation on a dataset of 500 endpoints reveals that this approach achieves 80% detection accuracy with only a marginal 22.9% overhead, effectively demonstrating that decentralized integrity can coexist with high-speed security workflows.
https://arxiv.org/abs/2601.07019
Academic Papers
svg
1181b98c6fe1c6f515231eb12d9f4e6d7d5f73b76f176b4a174eb8b955da2982
2026-01-13T00:00:00-05:00
TurkBench: A Benchmark for Evaluating Turkish Large Language Models
arXiv:2601.07020v1 Announce Type: new Abstract: With the recent surge in the development of large language models, the need for comprehensive and language-specific evaluation benchmarks has become critical. While significant progress has been made in evaluating English language models, benchmarks for other languages, particularly those with unique linguistic characteristics such as Turkish, remain less developed. Our study introduces TurkBench, a comprehensive benchmark designed to assess the capabilities of generative large language models in the Turkish language. TurkBench involves 8,151 data samples across 21 distinct subtasks. These are organized under six main categories of evaluation: Knowledge, Language Understanding, Reasoning, Content Moderation, Turkish Grammar and Vocabulary, and Instruction Following. The diverse range of tasks and the culturally relevant data would provide researchers and developers with a valuable tool for evaluating their models and identifying areas for improvement. We further publish our benchmark for online submissions at https://huggingface.co/turkbench
https://arxiv.org/abs/2601.07020
Academic Papers
svg
6d04361bd22f5bc45dbc2ccfb0ad2f8c47d1fe20da5e66219b2f7c21bc78cbf2
2026-01-13T00:00:00-05:00
Tight Analysis of Decentralized SGD: A Markov Chain Perspective
arXiv:2601.07021v1 Announce Type: new Abstract: We propose a novel analysis of the Decentralized Stochastic Gradient Descent (DSGD) algorithm with constant step size, interpreting the iterates of the algorithm as a Markov chain. We show that DSGD converges to a stationary distribution, with its bias, to first order, decomposable into two components: one due to decentralization (growing with the graph's spectral gap and clients' heterogeneity) and one due to stochasticity. Remarkably, the variance of local parameters is, at first order, inversely proportional to the number of clients, regardless of the network topology and even when clients' iterates are not averaged at the end. As a consequence of our analysis, we obtain non-asymptotic convergence bounds for clients' local iterates, confirming that DSGD has linear speed-up in the number of clients, and that the network topology only impacts higher-order terms.
https://arxiv.org/abs/2601.07021
Academic Papers
svg
3979c0a3b04957a0abf458a0ebee503e50b449c16817575da533544d0fc61647
2026-01-13T00:00:00-05:00
Solar Open Technical Report
arXiv:2601.07022v1 Announce Type: new Abstract: We introduce Solar Open, a 102B-parameter bilingual Mixture-of-Experts language model for underserved languages. Solar Open demonstrates a systematic methodology for building competitive LLMs by addressing three interconnected challenges. First, to train effectively despite data scarcity for underserved languages, we synthesize 4.5T tokens of high-quality, domain-specific, and RL-oriented data. Second, we coordinate this data through a progressive curriculum jointly optimizing composition, quality thresholds, and domain coverage across 20 trillion tokens. Third, to enable reasoning capabilities through scalable RL, we apply our proposed framework SnapPO for efficient optimization. Across benchmarks in English and Korean, Solar Open achieves competitive performance, demonstrating the effectiveness of this methodology for underserved language AI development.
https://arxiv.org/abs/2601.07022
Academic Papers
svg
e7e872e2032da517183d00eeefe3465e08918aadec2b13ca5ec49458cab8b9ea
2026-01-13T00:00:00-05:00
CloneMem: Benchmarking Long-Term Memory for AI Clones
arXiv:2601.07023v1 Announce Type: new Abstract: AI Clones aim to simulate an individual's thoughts and behaviors to enable long-term, personalized interaction, placing stringent demands on memory systems to model experiences, emotions, and opinions over time. Existing memory benchmarks primarily rely on user-agent conversational histories, which are temporally fragmented and insufficient for capturing continuous life trajectories. We introduce CloneMem, a benchmark for evaluating long-term memory in AI Clone scenarios grounded in non-conversational digital traces, including diaries, social media posts, and emails, spanning one to three years. CloneMem adopts a hierarchical data construction framework to ensure longitudinal coherence and defines tasks that assess an agent's ability to track evolving personal states. Experiments show that current memory mechanisms struggle in this setting, highlighting open challenges for life-grounded personalized AI. Code and dataset are available at https://github.com/AvatarMemory/CloneMemBench
https://arxiv.org/abs/2601.07023
Academic Papers
svg
59817dc0878f2b2216186fdd12bb52c0b93264ba869f124ec200b8285a832131
2026-01-13T00:00:00-05:00
A Relaxed Direct-insertion Downscaling Method For Discrete-in-time Data Assimilation
arXiv:2601.07025v1 Announce Type: new Abstract: This paper improves the spectrally-filtered direct-insertion downscaling method for discrete-in-time data assimilation by introducing a relaxation parameter that overcomes a constraint on the observation frequency. Numerical simulations demonstrate that taking the relaxation parameter proportional to the time between observations allows one to vary the observation frequency over a wide range while maintaining convergence of the approximating solution to the reference solution. Under the same assumptions we analytically prove that taking the observation frequency to infinity results in the continuous-in-time nudging method.
https://arxiv.org/abs/2601.07025
Academic Papers
svg
4c224fcd933b1578c348d27cd2fce49ebe1ebf849885417d3ae7b57f08497f86
2026-01-13T00:00:00-05:00
Codified Foreshadowing-Payoff Text Generation
arXiv:2601.07033v1 Announce Type: new Abstract: Foreshadowing and payoff are ubiquitous narrative devices through which authors introduce commitments early in a story and resolve them through concrete, observable outcomes. However, despite advances in story generation, large language models (LLMs) frequently fail to bridge these long-range narrative dependencies, often leaving "Chekhov's guns" unfired even when the necessary context is present. Existing evaluations largely overlook this structural failure, focusing on surface-level coherence rather than the logical fulfillment of narrative setups. In this paper, we introduce Codified Foreshadowing-Payoff Generation (CFPG), a novel framework that reframes narrative quality through the lens of payoff realization. Recognizing that LLMs struggle to intuitively grasp the "triggering mechanism" of a foreshadowed event, CFPG transforms narrative continuity into a set of executable causal predicates. By mining and encoding Foreshadow-Trigger-Payoff triples from the BookSum corpus, we provide structured supervision that ensures foreshadowed commitments are not only mentioned but also temporally and logically fulfilled. Experiments demonstrate that CFPG significantly outperforms standard prompting baselines in payoff accuracy and narrative alignment. Our findings suggest that explicitly codifying narrative mechanics is essential for moving LLMs from surface-level fluency to genuine narrative competence.
https://arxiv.org/abs/2601.07033
Academic Papers
svg
45d6c4f067ba3e6c9337cd018e720858f0fa2a0dcb27454d966e05ba5e3848b1
2026-01-13T00:00:00-05:00
Quantum Optical Integrated Sensing and Communication with Homodyne BPSK Detection
arXiv:2601.07034v1 Announce Type: new Abstract: In this letter, we propose a quantum integrated sensing and communication scheme for a quantum optical link using binary phase-shift keying modulation and homodyne detection. The link operates over a phase-insensitive Gaussian channel with an unknown deterministic phase rotation, where the homodyne receiver jointly carries out symbol detection and phase estimation. We formulate a design problem that minimizes the bit-error rate subject to a Fisher information-based constraint on estimation accuracy. To solve it, we develop an iterative algorithm composed of an inner expectation-maximization loop for joint detection and estimation and an outer loop that adaptively retunes the local oscillator phase. Numerical results confirm the effectiveness of the proposed approach and demonstrate a fundamental trade-off between communication reliability and sensing accuracy.
https://arxiv.org/abs/2601.07034
Academic Papers
svg
691ad58c228d6d0d7ce0c13578f61be1299fac87c40e67a7bb4221ba8cb4e8fb
2026-01-13T00:00:00-05:00
Explainable Deep Radiogenomic Molecular Imaging for MGMT Methylation Prediction in Glioblastoma
arXiv:2601.07035v1 Announce Type: new Abstract: Glioblastoma (GBM) is a highly aggressive primary brain tumor with limited therapeutic options and poor prognosis. The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) gene promoter is a critical molecular biomarker that influences patient response to temozolomide chemotherapy. Traditional methods for determining MGMT status rely on invasive biopsies and are limited by intratumoral heterogeneity and procedural risks. This study presents a radiogenomic molecular imaging analysis framework for the non-invasive prediction of MGMT promoter methylation using multi-parametric magnetic resonance imaging (mpMRI). Our approach integrates radiomics, deep learning, and explainable artificial intelligence (XAI) to analyze MRI-derived imaging phenotypes and correlate them with molecular labels. Radiomic features are extracted from FLAIR, T1-weighted, T1-contrast-enhanced, and T2-weighted MRI sequences, while a 3D convolutional neural network learns deep representations from the same modalities. These complementary features are fused using both early fusion and attention-based strategies and classified to predict MGMT methylation status. To enhance clinical interpretability, we apply XAI methods such as Grad-CAM and SHAP to visualize and explain model decisions. The proposed framework is trained on the RSNA-MICCAI Radiogenomic Classification dataset and externally validated on the BraTS 2021 dataset. This work advances the field of molecular imaging by demonstrating the potential of AI-driven radiogenomics for precision oncology, supporting non-invasive, accurate, and interpretable prediction of clinically actionable molecular biomarkers in GBM.
https://arxiv.org/abs/2601.07035
Academic Papers
svg
cdab818c5dec16ab90eb706bbdc22c17802897287f505e1f086b7c504807f513
2026-01-13T00:00:00-05:00
Mid-Think: Training-Free Intermediate-Budget Reasoning via Token-Level Triggers
arXiv:2601.07036v1 Announce Type: new Abstract: Hybrid reasoning language models are commonly controlled through high-level Think/No-think instructions to regulate reasoning behavior, yet we found that such mode switching is largely driven by a small set of trigger tokens rather than the instructions themselves. Through attention analysis and controlled prompting experiments, we show that a leading ``Okay'' token induces reasoning behavior, while the newline pattern following ``'' suppresses it. Based on this observation, we propose Mid-Think, a simple training-free prompting format that combines these triggers to achieve intermediate-budget reasoning, consistently outperforming fixed-token and prompt-based baselines in terms of the accuracy-length trade-off. Furthermore, applying Mid-Think to RL training after SFT reduces training time by approximately 15% while improving final performance of Qwen3-8B on AIME from 69.8% to 72.4% and on GPQA from 58.5% to 61.1%, demonstrating its effectiveness for both inference-time control and RL-based reasoning training.
https://arxiv.org/abs/2601.07036
Academic Papers
svg
b94d79fab11f0ec0319a70e60c6d562f4e6be0c16348075ffb89b8bb5d6ed81c
2026-01-13T00:00:00-05:00
Task Arithmetic with Support Languages for Low-Resource ASR
arXiv:2601.07038v1 Announce Type: new Abstract: The development of resource-constrained approaches to automatic speech recognition (ASR) is of great interest due to its broad applicability to many low-resource languages for which there is scant usable data. Existing approaches to many low-resource natural language processing tasks leverage additional data from higher-resource languages that are closely related to a target low-resource language. One increasingly popular approach uses task arithmetic to combine models trained on different tasks to create a model for a task where there is little to no training data. In this paper, we consider training on a particular language to be a task, and we generate task vectors by fine-tuning variants of the Whisper ASR system. For pairings of high- and low-resource languages, we merge task vectors via a linear combination, optimizing the weights of the linear combination on the downstream word error rate on the low-resource target language's validation set. We find that this approach consistently improves performance on the target languages.
https://arxiv.org/abs/2601.07038
Academic Papers
svg
a44c839c456be240f12a3d53004577910a3c296ab6aefd2382f08e7349c4f844
2026-01-13T00:00:00-05:00
When Abundance Conceals Weakness: Knowledge Conflict in Multilingual Models
arXiv:2601.07041v1 Announce Type: new Abstract: Large Language Models (LLMs) encode vast world knowledge across multiple languages, yet their internal beliefs are often unevenly distributed across linguistic spaces. When external evidence contradicts these language-dependent memories, models encounter \emph{cross-lingual knowledge conflict}, a phenomenon largely unexplored beyond English-centric settings. We introduce \textbf{CLEAR}, a \textbf{C}ross-\textbf{L}ingual knowl\textbf{E}dge conflict ev\textbf{A}luation f\textbf{R}amework that systematically examines how multilingual LLMs reconcile conflicting internal beliefs and multilingual external evidence. CLEAR decomposes conflict resolution into four progressive scenarios, from multilingual parametric elicitation to competitive multi-source cross-lingual induction, and systematically evaluates model behavior across two complementary QA benchmarks with distinct task characteristics. We construct multilingual versions of ConflictQA and ConflictingQA covering 10 typologically diverse languages and evaluate six representative LLMs. Our experiments reveal a task-dependent decision dichotomy. In reasoning-intensive tasks, conflict resolution is dominated by language resource abundance, with high-resource languages exerting stronger persuasive power. In contrast, for entity-centric factual conflicts, linguistic affinity, not resource scale, becomes decisive, allowing low-resource but linguistically aligned languages to outperform distant high-resource ones.
https://arxiv.org/abs/2601.07041
Academic Papers
svg
9f057a88f28d7d923959c0951a31a49160a8357b1396ea682cc2189bc934555c
2026-01-13T00:00:00-05:00
Engineering of Hallucination in Generative AI: It's not a Bug, it's a Feature
arXiv:2601.07046v1 Announce Type: new Abstract: Generative artificial intelligence (AI) is conquering our lives at lightning speed. Large language models such as ChatGPT answer our questions or write texts for us, large computer vision models such as GAIA-1 generate videos on the basis of text descriptions or continue prompted videos. These neural network models are trained using large amounts of text or video data, strictly according to the real data employed in training. However, there is a surprising observation: When we use these models, they only function satisfactorily when they are allowed a certain degree of fantasy (hallucination). While hallucination usually has a negative connotation in generative AI - after all, ChatGPT is expected to give a fact-based answer! - this article recapitulates some simple means of probability engineering that can be used to encourage generative AI to hallucinate to a limited extent and thus lead to the desired results. We have to ask ourselves: Is hallucination in generative AI perhaps not a bug, but rather a feature?
https://arxiv.org/abs/2601.07046
Academic Papers
svg
50e75b8534a356a2cf2a502db4e15e6f0e082c3d43abbeab25ed663a89ae9062
2026-01-13T00:00:00-05:00
Jasper: ANNS Quantized for Speed, Built for Change on GPU
arXiv:2601.07048v1 Announce Type: new Abstract: Approximate nearest neighbor search (ANNS) is a core problem in machine learning and information retrieval applications. GPUs offer a promising path to high-performance ANNS: they provide massive parallelism for distance computations, are readily available, and can co-locate with downstream applications. Despite these advantages, current GPU-accelerated ANNS systems face three key limitations. First, real-world applications operate on evolving datasets that require fast batch updates, yet most GPU indices must be rebuilt from scratch when new data arrives. Second, high-dimensional vectors strain memory bandwidth, but current GPU systems lack efficient quantization techniques that reduce data movement without introducing costly random memory accesses. Third, the data-dependent memory accesses inherent to greedy search make overlapping compute and memory difficult, leading to reduced performance. We present Jasper, a GPU-native ANNS system with both high query throughput and updatability. Jasper builds on the Vamana graph index and overcomes existing bottlenecks via three contributions: (1) a CUDA batch-parallel construction algorithm that enables lock-free streaming insertions, (2) a GPU-efficient implementation of RaBitQ quantization that reduces memory footprint up to 8x without the random access penalties, and (3) an optimized greedy search kernel that increases compute utilization, resulting in better latency hiding and higher throughput. Our evaluation across five datasets shows that Jasper achieves up to 1.93x higher query throughput than CAGRA and achieves up to 80% peak utilization as measured by the roofline model. Jasper's construction scales efficiently and constructs indices an average of 2.4x faster than CAGRA while providing updatability that CAGRA lacks. Compared to BANG, the previous fastest GPU Vamana implementation, Jasper delivers 19-131x faster queries.
https://arxiv.org/abs/2601.07048
Academic Papers
svg
7056c1ec060e60965ca918221c940d6de784555f8c37f637cade9e9e9bbf5517
2026-01-13T00:00:00-05:00
Between Policy and Practice: GenAI Adoption in Agile Software Development Teams
arXiv:2601.07051v1 Announce Type: new Abstract: Context: The rapid emergence of generative AI (GenAI) tools has begun to reshape various software engineering activities. Yet, their adoption within agile environments remains underexplored. Objective: This study investigates how agile practitioners adopt GenAI tools in real-world organizational contexts, focusing on regulatory conditions, use cases, benefits, and barriers. Method: An exploratory multiple case study was conducted in three German organizations, involving 17 semi-structured interviews and document analysis. A cross-case thematic analysis was applied to identify GenAI adoption patterns. Results: Findings reveal that GenAI is primarily used for creative tasks, documentation, and code assistance. Benefits include efficiency gains and enhanced creativity, while barriers relate to data privacy, validation effort, and lack of governance. Using the Technology-Organization-Environment (TOE) framework, we find that these barriers stem from misalignments across the three dimensions. Regulatory pressures are often translated into policies without accounting for actual technological usage patterns or organizational constraints. This leads to systematic gaps between policy and practice. Conclusion: GenAI offers significant potential to augment agile roles but requires alignment across TOE dimensions, including clear policies, data protection measures, and user training to ensure responsible and effective integration.
https://arxiv.org/abs/2601.07051
Academic Papers
svg
1f4b239eb585c09d9c04116af73eee5eb907f6028c0e7343a1f0660ec72ec598
2026-01-13T00:00:00-05:00
RSLCPP - Deterministic Simulations Using ROS 2
arXiv:2601.07052v1 Announce Type: new Abstract: Simulation is crucial in real-world robotics, offering safe, scalable, and efficient environments for developing applications, ranging from humanoid robots to autonomous vehicles and drones. While the Robot Operating System (ROS) has been widely adopted as the backbone of these robotic applications in both academia and industry, its asynchronous, multiprocess design complicates reproducibility, especially across varying hardware platforms. Deterministic callback execution cannot be guaranteed when computation times and communication delays vary. This lack of reproducibility complicates scientific benchmarking and continuous integration, where consistent results are essential. To address this, we present a methodology to create deterministic simulations using ROS 2 nodes. Our ROS Simulation Library for C++ (RSLCPP) implements this approach, enabling existing nodes to be combined into a simulation routine that yields reproducible results without requiring any code changes. We demonstrate that our approach yields identical results across various CPUs and architectures when testing both a synthetic benchmark and a real-world robotics system. RSLCPP is open-sourced at https://github.com/TUMFTM/rslcpp.
https://arxiv.org/abs/2601.07052
Academic Papers
svg
dbdfeed49db3527a5a999b8fe7a3af8ae9ab14d7f07431d65f669e8d54e88784
2026-01-13T00:00:00-05:00
Random Access in DNA Storage: Algorithms, Constructions, and Bounds
arXiv:2601.07053v1 Announce Type: new Abstract: As DNA data storage moves closer to practical deployment, minimizing sequencing coverage depth is essential to reduce both operational costs and retrieval latency. This paper addresses the recently studied Random Access Problem, which evaluates the expected number of read samples required to recover a specific information strand from $n$ encoded strands. We propose a novel algorithm to compute the exact expected number of reads, achieving a computational complexity of $O(n)$ for fixed field size $q$ and information length $k$. Furthermore, we derive explicit formulas for the average and maximum expected number of reads, enabling an efficient search for optimal generator matrices under small parameters. Beyond theoretical analysis, we present new code constructions that improve the best-known upper bound from $0.8815k$ to $0.8811k$ for $k=3$, and achieve an upper bound of $0.8629k$ for $k=4$ for sufficiently large $q$. We also establish a tighter theoretical lower bound on the expected number of reads that improves upon state-of-the-art bounds. In particular, this bound establishes the optimality of the simple parity code for the case of $n=k+1$ across any alphabet $q$.
https://arxiv.org/abs/2601.07053
Academic Papers
svg
8ec07e05733bfcf84af5f50fce6b394eb5cf0ea3c956060211e860c917fdfb15
2026-01-13T00:00:00-05:00
Fine-Tuning vs. RAG for Multi-Hop Question Answering with Novel Knowledge
arXiv:2601.07054v1 Announce Type: new Abstract: Multi-hop question answering is widely used to evaluate the reasoning capabilities of large language models (LLMs), as it requires integrating multiple pieces of supporting knowledge to arrive at a correct answer. While prior work has explored different mechanisms for providing knowledge to LLMs, such as fine-tuning and retrieval-augmented generation (RAG), their relative effectiveness for multi-hop question answering remains insufficiently understood, particularly when the required knowledge is temporally novel. In this paper, we systematically compare parametric and non-parametric knowledge injection methods for open-domain multi-hop question answering. We evaluate unsupervised fine-tuning (continual pretraining), supervised fine-tuning, and retrieval-augmented generation across three 7B-parameter open-source LLMs. Experiments are conducted on two benchmarks: QASC, a standard multi-hop science question answering dataset, and a newly constructed dataset of over 10,000 multi-hop questions derived from Wikipedia events in 2024, designed to test knowledge beyond the models' pretraining cutoff. Our results show that unsupervised fine-tuning provides only limited gains over base models, suggesting that continual pretraining alone is insufficient for improving multi-hop reasoning accuracy. In contrast, retrieval-augmented generation yields substantial and consistent improvements, particularly when answering questions that rely on temporally novel information. Supervised fine-tuning achieves the highest overall accuracy across models and datasets. These findings highlight fundamental differences in how knowledge injection mechanisms support multi-hop question answering and underscore the importance of retrieval-based methods when external or compositional knowledge is required.
https://arxiv.org/abs/2601.07054
Academic Papers
svg
cdaf88b083418502d33f2645b408b0d97bd5fae63518e12a17809d67f30f346e
2026-01-13T00:00:00-05:00
Dr. Zero: Self-Evolving Search Agents without Training Data
arXiv:2601.07055v1 Announce Type: new Abstract: As high-quality data becomes increasingly difficult to obtain, data-free self-evolution has emerged as a promising paradigm. This approach allows large language models (LLMs) to autonomously generate and solve complex problems, thereby improving their reasoning capabilities. However, multi-turn search agents struggle in data-free self-evolution due to the limited question diversity and the substantial compute required for multi-step reasoning and tool use. In this work, we introduce Dr. Zero, a framework enabling search agents to effectively self-evolve without any training data. In particular, we design a self-evolution feedback loop where a proposer generates diverse questions to train a solver initialized from the same base model. As the solver evolves, it incentivizes the proposer to produce increasingly difficult yet solvable tasks, thus establishing an automated curriculum to refine both agents. To enhance training efficiency, we also introduce hop-grouped relative policy optimization (HRPO). This method clusters structurally similar questions to construct group-level baselines, effectively minimizing the sampling overhead in evaluating each query's individual difficulty and solvability. Consequently, HRPO significantly reduces the compute requirements for solver training without compromising performance or stability. Extensive experimental results demonstrate that the data-free Dr. Zero matches or surpasses fully supervised search agents, proving that complex reasoning and search capabilities can emerge solely through self-evolution.
https://arxiv.org/abs/2601.07055
Academic Papers
svg
efb3198d724905c508f69301dc908015de3164fb4cb559a4d2f7c18f0ca4e929
2026-01-13T00:00:00-05:00
Adversarial Attacks on Medical Hyperspectral Imaging Exploiting Spectral-Spatial Dependencies and Multiscale Features
arXiv:2601.07056v1 Announce Type: new Abstract: Medical hyperspectral imaging (HSI) enables accurate disease diagnosis by capturing rich spectral-spatial tissue information, but recent advances in deep learning have exposed its vulnerability to adversarial attacks. In this work, we identify two fundamental causes of this fragility: the reliance on local pixel dependencies for preserving tissue structure and the dependence on multiscale spectral-spatial representations for hierarchical feature encoding. Building on these insights, we propose a targeted adversarial attack framework for medical HSI, consisting of a Local Pixel Dependency Attack that exploits spatial correlations among neighboring pixels, and a Multiscale Information Attack that perturbs features across hierarchical spectral-spatial scales. Experiments on the Brain and MDC datasets demonstrate that our attacks significantly degrade classification performance, especially in tumor regions, while remaining visually imperceptible. Compared with existing methods, our approach reveals the unique vulnerabilities of medical HSI models and underscores the need for robust, structure-aware defenses in clinical applications.
https://arxiv.org/abs/2601.07056
Academic Papers
svg
4ee7e2f23e3b4370632f2477e0524e37bca91e16b3b9c43c69cd083a06f61271
2026-01-13T00:00:00-05:00
Hallucinations Live in Variance
arXiv:2601.07058v1 Announce Type: new Abstract: Benchmarks measure whether a model is correct. They do not measure whether a model is reliable. This distinction is largely academic for single-shot inference, but becomes critical for agentic AI systems, where a single rephrased prompt can trigger cascading failures in multi-step execution. Yet this form of instability is not captured by existing evaluations. Hallucinations live in variance: they arise when semantically equivalent prompts activate inconsistent internal pathways, producing divergent outputs. Consistent but incorrect outputs reflect bias or missing knowledge; confident guessing reflects calibration failure. Neither constitutes hallucination under this definition. When error is variance-dominated, reducing redundant pathways improves reliability without adding knowledge. We formalize this through Semantic Stability (SS), measured via Paraphrase Consistency (PC@k): generate k paraphrases, greedy decode each, compute mode agreement. SS is a diagnostic for variance-driven unreliability, not a method for improving correctness. We show that a dense Qwen3-0.6B agrees with itself only 23.8% of the time; at 32% sparsity, agreement jumps to 55.9%. A phase diagram reveals the sweet spot where variance reduction outpaces bias accumulation, and regimes where stability collapses onto wrong answers.
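The Paraphrase Consistency metric described above (generate k paraphrases, greedy decode each, compute mode agreement) reduces to a one-line statistic once the k decoded answers are in hand. A minimal sketch of that final step, with the paraphrase generation and decoding assumed to happen upstream (the function name and example answers are illustrative, not from the paper):

```python
from collections import Counter

def pc_at_k(answers):
    """Paraphrase Consistency: fraction of the k greedy-decoded
    answers that agree with the modal (most frequent) answer."""
    counts = Counter(answers)
    mode_count = counts.most_common(1)[0][1]
    return mode_count / len(answers)

# Hypothetical answers from k=4 paraphrases of the same question.
print(pc_at_k(["Paris", "Paris", "Lyon", "Paris"]))  # 0.75
```

Under the paper's definition, a model that is consistently wrong still scores high on this metric; PC@k isolates variance-driven unreliability, not correctness.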
https://arxiv.org/abs/2601.07058
Academic Papers
svg
119c5336d72cb1652258c60683d1920f9bfd24a6b33eb8447725a60b5fbcbfff
2026-01-13T00:00:00-05:00
PALM: Progress-Aware Policy Learning via Affordance Reasoning for Long-Horizon Robotic Manipulation
arXiv:2601.07060v1 Announce Type: new Abstract: Recent advancements in vision-language-action (VLA) models have shown promise in robotic manipulation, yet they continue to struggle with long-horizon, multi-step tasks. Existing methods lack internal reasoning mechanisms that can identify task-relevant interaction cues or track progress within a subtask, leading to critical execution errors such as repeated actions, missed steps, and premature termination. To address these challenges, we introduce PALM, a VLA framework that structures policy learning around interaction-centric affordance reasoning and subtask progress cues. PALM distills complementary affordance representations that capture object relevance, contact geometry, spatial placements, and motion dynamics, and serve as task-relevant anchors for visuomotor control. To further stabilize long-horizon execution, PALM predicts continuous within-subtask progress, enabling seamless subtask transitions. Across extensive simulation and real-world experiments, PALM consistently outperforms baselines, achieving a 91.8% success rate on LIBERO-LONG, a 12.5% improvement in average length on CALVIN ABC->D, and a 2x improvement over real-world baselines across three long-horizon generalization settings.
https://arxiv.org/abs/2601.07060
Academic Papers
svg
b0e044f9a60d9dd4834bf5df0ccd6e4464ec8858e8beaa464763bab03d3fe30c
2026-01-13T00:00:00-05:00
Automated Domain Question Mapping (DQM) with Educational Learning Materials
arXiv:2601.07062v1 Announce Type: new Abstract: Concept maps have been widely utilized in education to depict knowledge structures and the interconnections between disciplinary concepts. Nonetheless, devising a computational method for automatically constructing a concept map from unstructured educational materials presents challenges due to the complexity and variability of educational content. We focus primarily on two challenges: (1) the lack of disciplinary concepts that are specifically designed for multi-level pedagogical purposes from low-order to high-order thinking, and (2) the limited availability of labeled data concerning disciplinary concepts and their interrelationships. To tackle these challenges, this research introduces an innovative approach for constructing Domain Question Maps (DQMs), rather than traditional concept maps. By formulating specific questions aligned with learning objectives, DQMs enhance knowledge representation and improve readiness for learner engagement. The findings indicate that the proposed method can effectively generate educational questions and discern hierarchical relationships among them, leading to structured question maps that facilitate personalized and adaptive learning in downstream applications.
https://arxiv.org/abs/2601.07062
Academic Papers
svg
d6ee0f1c23a73c57e5b0e70772b90fca532e21e6635b11fcc5de1bac4bed204b
2026-01-13T00:00:00-05:00
Neuromorphic FPGA Design for Digital Signal Processing
arXiv:2601.07069v1 Announce Type: new Abstract: In this paper, the foundations of neuromorphic computing, spiking neural networks (SNNs) and memristors, are analyzed and discussed. Neuromorphic computing is then applied to FPGA design for digital signal processing (DSP). Finite impulse response (FIR) and infinite impulse response (IIR) filters are implemented with and without neuromorphic computing in Vivado using Verilog HDL. The results suggest that neuromorphic computing can provide low latency and synaptic plasticity, thereby enabling continuous on-chip learning. Due to its parallel and event-driven nature, neuromorphic computing can reduce power consumption by eliminating von Neumann bottlenecks and improve efficiency, but at the cost of reduced numeric precision.
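For reference, the conventional (non-neuromorphic) FIR filter that serves as the baseline in such comparisons is the direct-form convolution y[n] = sum_k h[k] x[n-k]. A plain software sketch of that baseline, not the paper's Verilog implementation:

```python
def fir_filter(x, h):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:  # skip taps before the start of the signal
                acc += hk * x[n - k]
        y.append(acc)
    return y

# Impulse response of a two-tap moving-average filter.
print(fir_filter([1, 0, 0, 0], [0.5, 0.5]))  # [0.5, 0.5, 0.0, 0.0]
```

An IIR filter adds feedback terms from past outputs; a spiking implementation would instead replace the multiply-accumulate datapath with event-driven synaptic updates, which is where the paper's precision trade-off arises.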
https://arxiv.org/abs/2601.07069
Academic Papers
svg
23e272c6b5ef138a0918b78a5622dc1a3a1467baa3c261decf7d37fd0faf1244
2026-01-13T00:00:00-05:00
LINEture: novel signature cryptosystem
arXiv:2601.07071v1 Announce Type: new Abstract: We propose a novel digital signature cryptosystem that exploits the concept of the brute-force problem. To ensure the security of the cryptosystem, we employed several mechanisms: sharing a common secret for factorable permutations, associating permutations with the message being signed, and confirming knowledge of the shared secret using a zero-knowledge proof. We developed a secret-sharing theory based on homomorphic matrix transformations for factorized permutations. The inverse matrix transformation for computing the shared secret is determined by secret parameters, which results in incompletely defined functionality and gives rise to a brute-force cryptanalysis problem. Randomization of session keys using a message hash and random parameters guarantees the uniqueness of each signature, even for identical messages. We employed a zero-knowledge authentication protocol to confirm knowledge of the shared secret, thereby protecting the verifier against unauthorized signature imposition. The LINEture cryptosystem is built on linear matrix algebra and does not rely on a computationally hard problem. High security is achieved through the appropriate selection of matrix transformation dimensions. Matrix computations potentially offer low operational costs for signature generation and verification.
https://arxiv.org/abs/2601.07071
Academic Papers
svg
47e038fc9c62a087c3e57103123b79de30926b32b3b725d2af03773a795ae407
2026-01-13T00:00:00-05:00
Overcoming the Retrieval Barrier: Indirect Prompt Injection in the Wild for LLM Systems
arXiv:2601.07072v1 Announce Type: new Abstract: Large language models (LLMs) increasingly rely on retrieving information from external corpora. This creates a new attack surface: indirect prompt injection (IPI), where hidden instructions are planted in the corpora and hijack model behavior once retrieved. Previous studies have highlighted this risk but often avoid the hardest step: ensuring that malicious content is actually retrieved. In practice, unoptimized IPI is rarely retrieved under natural queries, which leaves its real-world impact unclear. We address this challenge by decomposing the malicious content into a trigger fragment that guarantees retrieval and an attack fragment that encodes arbitrary attack objectives. Based on this idea, we design an efficient and effective black-box attack algorithm that constructs a compact trigger fragment to guarantee retrieval for any attack fragment. Our attack requires only API access to embedding models, is cost-efficient (as little as $0.21 per target user query on OpenAI's embedding models), and achieves near-100% retrieval across 11 benchmarks and 8 embedding models (including both open-source models and proprietary services). Based on this attack, we present the first end-to-end IPI exploits under natural queries and realistic external corpora, spanning both RAG and agentic systems with diverse attack objectives. These results establish IPI as a practical and severe threat: when a user issued a natural query to summarize emails on frequently asked topics, a single poisoned email was sufficient to coerce GPT-4o into exfiltrating SSH keys with over 80% success in a multi-agent workflow. We further evaluate several defenses and find that they are insufficient to prevent the retrieval of malicious text, highlighting retrieval as a critical open vulnerability.
https://arxiv.org/abs/2601.07072
Academic Papers
svg
72a0faf90eba97b0df7a09cdd705172fcfce6cfe953a94a37d69386687d2f828
2026-01-13T00:00:00-05:00
Billboard in Focus: Estimating Driver Gaze Duration from a Single Image
arXiv:2601.07073v1 Announce Type: new Abstract: Roadside billboards represent a central element of outdoor advertising, yet their presence may contribute to driver distraction and accident risk. This study introduces a fully automated pipeline for billboard detection and driver gaze duration estimation, aiming to evaluate billboard relevance without reliance on manual annotations or eye-tracking devices. Our pipeline operates in two stages: (1) a YOLO-based object detection model trained on Mapillary Vistas and fine-tuned on BillboardLamac images, which achieved 94% mAP@50 in the billboard detection task; and (2) a classifier based on the detected bounding box positions and DINOv2 features. The proposed pipeline enables estimation of billboard driver gaze duration from individual frames. We show that our method is able to achieve 68.1% accuracy on BillboardLamac when considering individual frames. These results are further validated using images collected from Google Street View.
https://arxiv.org/abs/2601.07073
Academic Papers
svg
4216cbb1246bc385a09d19698b7c62b4ac329f66961b30c7191b339b14b4e756
2026-01-13T00:00:00-05:00
An efficient hyper reduced-order model for segregated solvers for geometrical parametrization problems
arXiv:2601.07082v1 Announce Type: new Abstract: We propose an efficient hyper-reduced order model (HROM) designed for segregated finite-volume solvers in geometrically parametrized problems. The method follows a discretize-then-project strategy: the full-order operators are first assembled using finite volume or finite element discretizations and then projected onto low-dimensional spaces using a small set of spatial sampling points, selected through hyper-reduction techniques such as DEIM. This approach removes the dependence of the online computational cost on the full mesh size. The method is assessed on three benchmark problems: a linear transport equation, a nonlinear Burgers equation, and the incompressible Navier--Stokes equations. The results show that the hyper-reduced models closely match full-order solutions while achieving substantial reductions in computational time. Since only a sparse subset of mesh cells is evaluated during the online phase, the method is naturally parallelizable and scalable to very large meshes. These findings demonstrate that hyper-reduction can be effectively combined with segregated solvers and geometric parametrization to enable fast and accurate CFD simulations.
https://arxiv.org/abs/2601.07082
Academic Papers
svg
be0d412c30e88d203535c82ae8d76186e51ba2610eb3f35ef05afba39c142365
2026-01-13T00:00:00-05:00
How Secure is Secure Code Generation? Adversarial Prompts Put LLM Defenses to the Test
arXiv:2601.07084v1 Announce Type: new Abstract: Recent secure code generation methods, using vulnerability-aware fine-tuning, prefix-tuning, and prompt optimization, claim to prevent LLMs from producing insecure code. However, their robustness under adversarial conditions remains untested, and current evaluations decouple security from functionality, potentially inflating reported gains. We present the first systematic adversarial audit of state-of-the-art secure code generation methods (SVEN, SafeCoder, PromSec). We subject them to realistic prompt perturbations such as paraphrasing, cue inversion, and context manipulation that developers might inadvertently introduce or adversaries deliberately exploit. To enable fair comparison, we evaluate all methods under consistent conditions, jointly assessing security and functionality using multiple analyzers and executable tests. Our findings reveal critical robustness gaps: static analyzers overestimate security by 7 to 21 times, with 37 to 60% of ``secure'' outputs being non-functional. Under adversarial conditions, true secure-and-functional rates collapse to 3 to 17%. Based on these findings, we propose best practices for building and evaluating robust secure code generation methods. Our code is available.
https://arxiv.org/abs/2601.07084
Academic Papers
svg
d6213f10243d388b6b9694ee314496b39c50d02205d2e71698e3c0d8e920ed0f
2026-01-13T00:00:00-05:00
The AI Cognitive Trojan Horse: How Large Language Models May Bypass Human Epistemic Vigilance
arXiv:2601.07085v1 Announce Type: new Abstract: Large language model (LLM)-based conversational AI systems present a challenge to human cognition that current frameworks for understanding misinformation and persuasion do not adequately address. This paper proposes that a significant epistemic risk from conversational AI may lie not in inaccuracy or intentional deception, but in something more fundamental: these systems may be configured, through optimization processes that make them useful, to present characteristics that bypass the cognitive mechanisms humans evolved to evaluate incoming information. The Cognitive Trojan Horse hypothesis draws on Sperber and colleagues' theory of epistemic vigilance -- the parallel cognitive process monitoring communicated information for reasons to doubt -- and proposes that LLM-based systems present 'honest non-signals': genuine characteristics (fluency, helpfulness, apparent disinterest) that fail to carry the information equivalent human characteristics would carry, because in humans these are costly to produce while in LLMs they are computationally trivial. Four mechanisms of potential bypass are identified: processing fluency decoupled from understanding, trust-competence presentation without corresponding stakes, cognitive offloading that delegates evaluation itself to the AI, and optimization dynamics that systematically produce sycophancy. The framework generates testable predictions, including a counterintuitive speculation that cognitively sophisticated users may be more vulnerable to AI-mediated epistemic influence. This reframes AI safety as partly a problem of calibration -- aligning human evaluative responses with the actual epistemic status of AI-generated content -- rather than solely a problem of preventing deception.
https://arxiv.org/abs/2601.07085
Academic Papers
svg
9ebc4058d7bc695a68716fe52bd766c443c0629eadcccc18cad776ed0e18e458
2026-01-13T00:00:00-05:00
XBTorch: A Unified Framework for Modeling and Co-Design of Crossbar-Based Deep Learning Accelerators
arXiv:2601.07086v1 Announce Type: new Abstract: Emerging memory technologies have gained significant attention as a promising pathway to overcome the limitations of conventional computing architectures in deep learning applications. By enabling computation directly within memory, these technologies - built on nanoscale devices with tunable and nonvolatile conductance - offer the potential to drastically reduce energy consumption and latency compared to traditional von Neumann systems. This paper introduces XBTorch (short for CrossBarTorch), a novel simulation framework that integrates seamlessly with PyTorch and provides specialized tools for accurately and efficiently modeling crossbar-based systems based on emerging memory technologies. Through detailed comparisons and case studies involving hardware-aware training and inference, we demonstrate how XBTorch offers a unified interface for key research areas such as device-level modeling, cross-layer co-design, and inference-time fault tolerance. While exemplar studies utilize ferroelectric field-effect transistor (FeFET) models, the framework remains technology-agnostic - supporting other emerging memories such as resistive RAM (ReRAM), as well as enabling user-defined custom device models. The code is publicly available at: https://github.com/ADAM-Lab-GW/xbtorch
https://arxiv.org/abs/2601.07086
Academic Papers
svg
996bfa41e6e310453f48d02769ce2c692ed4472cec7056baa9797ebf31797ac9
2026-01-13T00:00:00-05:00
When Should We Introduce Safety Interventions During Pretraining?
arXiv:2601.07087v1 Announce Type: new Abstract: Ensuring the safety of language models in high-stakes settings remains a pressing challenge, as aligned behaviors are often brittle and easily undone by adversarial pressure or downstream finetuning. Prior work has shown that interventions applied during pretraining, such as rephrasing harmful content, can substantially improve the safety of the resulting models. In this paper, we study the fundamental question: "When during pretraining should safety interventions be introduced?" We keep the underlying data fixed and vary only the choice of a safety curriculum: the timing of these interventions, i.e., after 0%, 20%, or 60% of the pretraining token budget. We find that introducing interventions earlier generally yields more robust models with no increase in overrefusal rates, with the clearest benefits appearing after downstream, benign finetuning. We also see clear benefits in the steerability of models towards safer generations. Finally, we observe that earlier interventions reshape internal representations: linear probes more cleanly separate safe vs harmful examples. Overall, these results argue for incorporating safety signals early in pretraining, producing models that are more robust to downstream finetuning and jailbreaking, and more reliable under both standard and safety-aware inference procedures.
https://arxiv.org/abs/2601.07087
Academic Papers
svg
aeff22e57908b9a5369dafb8dd36615373c256f9284e590d1cd9d2150bff8493
2026-01-13T00:00:00-05:00
Next-Generation Grid Codes: Toward a New Paradigm for Dynamic Ancillary Services
arXiv:2601.07090v1 Announce Type: new Abstract: This paper presents preliminary results toward a conceptual foundation for Next Generation Grid Codes (NGGCs) based on decentralized stability and performance certification for dynamic ancillary services. The proposed NGGC framework targets two core outcomes: (i) guaranteed closed-loop stability and (ii) explicit performance assurances for power-system frequency and voltage dynamics. Stability is addressed using loop-shifting and passivity-based methods that yield local frequency-domain certificates for individual devices, enabling fully decentralized verification of the interconnected system. Performance is characterized by deriving quantitative bounds on key time-domain metrics (e.g., nadirs, rate-of-change-of-frequency (RoCoF), steady-state deviations, and oscillation damping) through frequency-domain constraints on local device behavior. The framework is non-parametric and model-agnostic, accommodating a broad class of device dynamics under mild assumptions, and provides an initial unified approach to stability and performance certification without explicit device-model parameterization. As such, these results offer a principled starting point for the development of future grid codes and control design methodologies in modern power systems.
https://arxiv.org/abs/2601.07090
Academic Papers
svg
c982eaf55abae296136cc003a5810f2c0cfc9556ac799b9a437b81f94de40944
2026-01-13T00:00:00-05:00
Efficient Visual Question Answering Pipeline for Autonomous Driving via Scene Region Compression
arXiv:2601.07092v1 Announce Type: new Abstract: Autonomous driving increasingly relies on Visual Question Answering (VQA) to enable vehicles to understand complex surroundings by analyzing visual inputs and textual queries. Currently, a paramount concern for VQA in this domain is the stringent requirement for fast latency and real-time processing, as delays directly impact real-world safety in this safety-critical application. However, current state-of-the-art VQA models, particularly large vision-language models (VLMs), often prioritize performance over computational efficiency. These models typically process dense patch tokens for every frame, leading to prohibitive computational costs (FLOPs) and significant inference latency, especially with long video sequences. This focus limits their practical deployment in real-time autonomous driving scenarios. To tackle this issue, we propose an efficient VLM framework for autonomous driving VQA tasks, SRC-Pipeline. It learns to compress early frame tokens into a small number of high-level tokens while retaining full patch tokens for recent frames. Experiments on autonomous driving video question answering tasks show that our approach achieves 66% FLOPs reduction while maintaining comparable performance, enabling VLMs to operate more effectively in real-time, safety-critical autonomous driving settings.
https://arxiv.org/abs/2601.07092
Academic Papers
svg
6d77f1b4ecebf5c3c7bc9a57226675ac0e352b5bb86f9b6fd95bdc805af274c7
2026-01-13T00:00:00-05:00
3D Wavelet-Based Structural Priors for Controlled Diffusion in Whole-Body Low-Dose PET Denoising
arXiv:2601.07093v1 Announce Type: new Abstract: Low-dose Positron Emission Tomography (PET) imaging reduces patient radiation exposure but suffers from increased noise that degrades image quality and diagnostic reliability. Although diffusion models have demonstrated strong denoising capability, their stochastic nature makes it challenging to enforce anatomically consistent structures, particularly in low signal-to-noise regimes and volumetric whole-body imaging. We propose Wavelet-Conditioned ControlNet (WCC-Net), a fully 3D diffusion-based framework that introduces explicit frequency-domain structural priors via wavelet representations to guide volumetric PET denoising. By injecting wavelet-based structural guidance into a frozen pretrained diffusion backbone through a lightweight control branch, WCC-Net decouples anatomical structure from noise while preserving generative expressiveness and 3D structural continuity. Extensive experiments demonstrate that WCC-Net consistently outperforms CNN-, GAN-, and diffusion-based baselines. On the internal 1/20-dose test set, WCC-Net improves PSNR by +1.21 dB and SSIM by +0.008 over a strong diffusion baseline, while reducing structural distortion (GMSD) and intensity error (NMAE). Moreover, WCC-Net generalizes robustly to unseen dose levels (1/50 and 1/4), achieving superior quantitative performance and improved volumetric anatomical consistency.
https://arxiv.org/abs/2601.07093
Academic Papers
svg
cf2c10baa6f2476726c806a52d3960d9628242f6db315de9306a401cc3fb6f30
2026-01-13T00:00:00-05:00
Score-Based VAMP with Fisher-Information-Based Onsager Correction
arXiv:2601.07095v1 Announce Type: new Abstract: We propose score-based VAMP (SC-VAMP), a variant of vector approximate message passing (VAMP) in which the Onsager correction is expressed and computed via conditional Fisher information, thereby enabling a Jacobian-free implementation. Using learned score functions, SC-VAMP constructs nonlinear MMSE estimators through Tweedie's formula and derives the corresponding Onsager terms from the score-norm statistics, avoiding the need for analytical derivatives of the prior or likelihood. When combined with random orthogonal/unitary mixing to mitigate non-ideal, structured or correlated sensing settings, the proposed framework extends VAMP to complex black-box inference problems where explicit modeling is intractable. Finally, by leveraging the entropic CLT, we provide an information-theoretic perspective on the Gaussian approximation underlying SE, offering insight into the decoupling principle beyond idealized i.i.d. settings, including nonlinear regimes.
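Tweedie's formula, which the abstract uses to build the MMSE denoiser from a learned score, states that for y = x + N(0, sigma^2), E[x | y] = y + sigma^2 * score(y), where score is the gradient of the log-density of the noisy marginal. A toy sketch with a Gaussian prior, where the exact score and posterior mean are known in closed form (the variable names are illustrative; the paper uses learned scores, not analytic ones):

```python
def tweedie_denoise(y, sigma, score):
    # Tweedie's formula: E[x | y] = y + sigma^2 * score(y),
    # with score(y) = d/dy log p(y) of the noisy marginal.
    return y + sigma**2 * score(y)

# Toy check: x ~ N(0,1), y = x + N(0, sigma^2), so y ~ N(0, 1 + sigma^2)
# and the exact marginal score is -y / (1 + sigma^2). The posterior mean
# is then y / (1 + sigma^2), which Tweedie's formula must reproduce.
sigma = 0.5
score = lambda y: -y / (1.0 + sigma**2)
y = 2.0
print(tweedie_denoise(y, sigma, score))   # 1.6
print(y / (1.0 + sigma**2))               # 1.6 (analytic posterior mean)
```

In SC-VAMP the score is a learned network rather than this closed form, and the same score evaluations additionally feed the Fisher-information-based Onsager term, which is what removes the need for Jacobians.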
https://arxiv.org/abs/2601.07095
Academic Papers
svg
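Tweedie's formula, which the SC-VAMP abstract uses to build MMSE estimators from learned scores, can be checked in a toy scalar Gaussian case where the score is available in closed form (the paper's setting uses learned score networks; `gaussian_score` and the chosen variances are assumptions for illustration):

```python
def gaussian_score(y, tau2, sigma2):
    # Score of the marginal density of y = x + noise, where x ~ N(0, tau2)
    # and noise ~ N(0, sigma2). The marginal is N(0, tau2 + sigma2), so
    # d/dy log p(y) = -y / (tau2 + sigma2).
    return -y / (tau2 + sigma2)

def tweedie_denoise(y, sigma2, score):
    # Tweedie's formula: the posterior mean E[x | y] equals the noisy
    # observation plus the noise variance times the marginal score at y.
    return y + sigma2 * score(y)

tau2, sigma2 = 4.0, 1.0
y = 2.5
xhat = tweedie_denoise(y, sigma2, lambda v: gaussian_score(v, tau2, sigma2))

# For a Gaussian prior the MMSE estimate has the closed form
# y * tau2 / (tau2 + sigma2), which Tweedie's formula must reproduce.
closed_form = y * tau2 / (tau2 + sigma2)
```

The appeal of this identity in SC-VAMP is that it turns any estimate of the marginal score into an MMSE denoiser without differentiating the prior analytically.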
2d53095eede4cd072cac60bfe2af045fd91c4aae6ec41b969a0aaedbcb16c764
2026-01-13T00:00:00-05:00
MEDVISTAGYM: A Scalable Training Environment for Thinking with Medical Images via Tool-Integrated Reinforcement Learning
arXiv:2601.07107v1 Announce Type: new Abstract: Vision language models (VLMs) achieve strong performance on general image understanding but struggle to think with medical images, especially when performing multi-step reasoning through iterative visual interaction. Medical VLMs often rely on static visual embeddings and single-pass inference, preventing models from re-examining, verifying, or refining visual evidence during reasoning. While tool-integrated reasoning offers a promising path forward, open-source VLMs lack the training infrastructure to learn effective tool selection, invocation, and coordination in multi-modal medical reasoning. We introduce MedVistaGym, a scalable and interactive training environment that incentivizes tool-integrated visual reasoning for medical image analysis. MedVistaGym equips VLMs to determine when and which tools to invoke, localize task-relevant image regions, and integrate single or multiple sub-image evidence into interleaved multimodal reasoning within a unified, executable interface for agentic training. Using MedVistaGym, we train MedVistaGym-R1 to interleave tool use with agentic reasoning through trajectory sampling and end-to-end reinforcement learning. Across six medical VQA benchmarks, MedVistaGym-R1-8B exceeds comparably sized tool-augmented baselines by 19.10% to 24.21%, demonstrating that structured agentic training--not tool access alone--unlocks effective tool-integrated reasoning for medical image analysis.
https://arxiv.org/abs/2601.07107
Academic Papers
svg
17c957c53184f771cbf68d8112652186069ea2a5d77d7967d2117bc581209f79
2026-01-13T00:00:00-05:00
The Need for a Socially-Grounded Persona Framework for User Simulation
arXiv:2601.07110v1 Announce Type: new Abstract: Synthetic personas are widely used to condition large language models (LLMs) for social simulation, yet most personas are still constructed from coarse sociodemographic attributes or summaries. We revisit persona creation by introducing SCOPE, a socially grounded framework for persona construction and evaluation, built from a 141-item, two-hour sociopsychological protocol collected from 124 U.S.-based participants. Across seven models, we find that demographic-only personas are a structural bottleneck: demographics explain only ~1.5% of variance in human response similarity. Adding sociopsychological facets improves behavioral prediction and reduces over-accentuation, and non-demographic personas based on values and identity achieve strong alignment with substantially lower bias. These trends generalize to SimBench (441 aligned questions), where SCOPE personas outperform default prompting and NVIDIA Nemotron personas, and SCOPE augmentation improves Nemotron-based personas. Our results indicate that persona quality depends on sociopsychological structure rather than demographic templates or summaries.
https://arxiv.org/abs/2601.07110
Academic Papers
svg
3326cc0db7c214f701736ef698ddb91047c0669c00283a8cdb1b4a3b8b15f27d
2026-01-13T00:00:00-05:00
Few-shot Class-Incremental Learning via Generative Co-Memory Regularization
arXiv:2601.07117v1 Announce Type: new Abstract: Few-shot class-incremental learning (FSCIL) aims to incrementally learn models from a small amount of novel data, which requires strong representation and adaptation ability of models learned under few-example supervision to avoid catastrophic forgetting on old classes and overfitting to novel classes. This work proposes a generative co-memory regularization approach to facilitate FSCIL. In the approach, the base learning leverages generative domain adaptation finetuning to finetune a pretrained generative encoder on a few examples of base classes by jointly incorporating a masked autoencoder (MAE) decoder for feature reconstruction and a fully-connected classifier for feature classification, which enables the model to efficiently capture general and adaptable representations. Using the finetuned encoder and learned classifier, we construct two class-wise memories: representation memory for storing the mean features for each class, and weight memory for storing the classifier weights. After that, the memory-regularized incremental learning is performed to train the classifier dynamically on the examples of few-shot classes in each incremental session by simultaneously optimizing feature classification and co-memory regularization. The memories are updated in a class-incremental manner and they collaboratively regularize the incremental learning. In this way, the learned models improve recognition accuracy, while mitigating catastrophic forgetting over old classes and overfitting to novel classes. Extensive experiments on popular benchmarks clearly demonstrate that our approach outperforms the state of the art.
https://arxiv.org/abs/2601.07117
Academic Papers
svg
4b6ac24ecb6661762ba208de2fb5f04adac93f1d303266c8a7c305bc4efb94bc
2026-01-13T00:00:00-05:00
Reward-Preserving Attacks For Robust Reinforcement Learning
arXiv:2601.07118v1 Announce Type: new Abstract: Adversarial robustness in RL is difficult because perturbations affect entire trajectories: strong attacks can break learning, while weak attacks yield little robustness, and the appropriate strength varies by state. We propose $\alpha$-reward-preserving attacks, which adapt the strength of the adversary so that an $\alpha$ fraction of the nominal-to-worst-case return gap remains achievable at each state. In deep RL, we use a gradient-based attack direction and learn a state-dependent magnitude $\eta\le \eta_{\mathcal B}$ selected via a critic $Q^{\pi}_\alpha((s,a),\eta)$ trained off-policy over diverse radii. This adaptive tuning calibrates attack strength and, with intermediate $\alpha$, improves robustness across radii while preserving nominal performance, outperforming fixed- and random-radius baselines.
https://arxiv.org/abs/2601.07118
Academic Papers
svg
4b7626d4392fe7fcc86323ab93a37234093195e06b059aa80e4d8f51ba484ed4
2026-01-13T00:00:00-05:00
SC-MII: Infrastructure LiDAR-based 3D Object Detection on Edge Devices for Split Computing with Multiple Intermediate Outputs Integration
arXiv:2601.07119v1 Announce Type: new Abstract: 3D object detection using LiDAR-based point cloud data and deep neural networks is essential in autonomous driving technology. However, deploying state-of-the-art models on edge devices presents challenges due to high computational demands and energy consumption. Additionally, single LiDAR setups suffer from blind spots. This paper proposes SC-MII, a scheme for multiple-infrastructure LiDAR-based 3D object detection on edge devices using Split Computing with Multiple Intermediate outputs Integration. In SC-MII, edge devices process local point clouds through the initial DNN layers and send intermediate outputs to an edge server. The server integrates these features and completes inference, reducing both latency and device load while improving privacy. Experimental results on a real-world dataset show a 2.19x speed-up and a 71.6% reduction in edge device processing time, with at most a 1.09% drop in accuracy.
https://arxiv.org/abs/2601.07119
Academic Papers
svg
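The SC-MII split can be sketched end to end: each edge device runs the early layers on its local point cloud, and the server fuses the intermediate outputs before finishing inference. The sketch below is a deliberately minimal stand-in under assumed operations (per-dimension summaries for the edge layers, elementwise-max fusion on the server); the paper's actual DNN layers and integration method may differ:

```python
def edge_forward(points):
    # Stand-in for the initial DNN layers run on an edge device: here,
    # a per-dimension max summary of the local point cloud. Only this
    # compact intermediate output is transmitted, not the raw points,
    # which is also the source of the privacy benefit.
    dims = len(points[0])
    return [max(p[i] for p in points) for i in range(dims)]

def server_integrate(intermediate_outputs):
    # Stand-in for server-side integration of intermediate outputs from
    # multiple infrastructure LiDARs: elementwise-max fusion, a common
    # choice for combining features from overlapping views so that each
    # LiDAR can fill in the others' blind spots.
    fused = intermediate_outputs[0][:]
    for feat in intermediate_outputs[1:]:
        fused = [max(a, b) for a, b in zip(fused, feat)]
    return fused

lidar_a = [(1.0, 0.2), (0.5, 0.9)]   # points seen by edge device A
lidar_b = [(0.3, 1.5), (0.8, 0.1)]   # points seen by edge device B
fused = server_integrate([edge_forward(lidar_a), edge_forward(lidar_b)])
```

The key design point is that the split boundary trades edge compute against transmission size: the intermediate output is what crosses the network, so its dimensionality largely determines latency.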
1737511ca89596185be5ed4419ba16bd3d402472e823bf312a889f927ebd825b
2026-01-13T00:00:00-05:00
ReMIND: Orchestrating Modular Large Language Models for Controllable Serendipity A REM-Inspired System Design for Emergent Creative Ideation
arXiv:2601.07121v1 Announce Type: new Abstract: Large language models (LLMs) are used not only for problem solving but also for creative ideation; however, eliciting serendipitous insights that are both novel and internally coherent remains difficult. While stochastic sampling promotes novelty, it often degrades consistency. Here, we propose ReMIND, a REM-inspired modular framework for ideation. ReMIND consists of four stages: wake, which generates a stable low-temperature semantic baseline; dream, which performs high-temperature exploratory generation; judge, which applies coarse evaluation to filter incoherent outputs and extract candidate ideas; and re-wake, which re-articulates selected ideas into coherent final outputs. By instantiating each stage as an independent LLM, ReMIND enables functional separation between exploration and consolidation. Parameter sweeps show that ReMIND reliably induces semantic exploration while preserving downstream stability. Embedding-based analyses confirm substantial semantic displacement during the dream phase, whereas external evaluations reveal that high-quality ideas emerge sporadically rather than as extrema along any single metric. These results suggest that serendipitous ideation in LLMs is a rare-event process best approached through system level design that shapes the conditions under which valuable ideas can emerge and be stabilized. ReMIND provides a general framework for studying the computational basis of serendipity and illustrates how modular LLM orchestration can bridge exploration and stabilization.
https://arxiv.org/abs/2601.07121
Academic Papers
svg
e6376969e8dc1e1b81f87bb5157ea424734791b93e35e2888d8d6e901d7a8eb8
2026-01-13T00:00:00-05:00
Enhancing Cloud Network Resilience via a Robust LLM-Empowered Multi-Agent Reinforcement Learning Framework
arXiv:2601.07122v1 Announce Type: new Abstract: While virtualization and resource pooling empower cloud networks with structural flexibility and elastic scalability, they inevitably expand the attack surface and challenge cyber resilience. Reinforcement Learning (RL)-based defense strategies have been developed to optimize resource deployment and isolation policies under adversarial conditions, aiming to enhance system resilience by maintaining and restoring network availability. However, existing approaches lack robustness as they require retraining to adapt to dynamic changes in network structure, node scale, attack strategies, and attack intensity. Furthermore, the lack of Human-in-the-Loop (HITL) support limits interpretability and flexibility. To address these limitations, we propose CyberOps-Bots, a hierarchical multi-agent reinforcement learning framework empowered by Large Language Models (LLMs). Inspired by MITRE ATT&CK's Tactics-Techniques model, CyberOps-Bots features a two-layer architecture: (1) An upper-level LLM agent with four modules--ReAct planning, IPDRR-based perception, long-short term memory, and action/tool integration--performs global awareness, human intent recognition, and tactical planning; (2) Lower-level RL agents, developed via heterogeneous separated pre-training, execute atomic defense actions within localized network regions. This synergy preserves LLM adaptability and interpretability while ensuring reliable RL execution. Experiments on real cloud datasets show that, compared to state-of-the-art algorithms, CyberOps-Bots maintains 68.5% higher network availability and achieves a 34.7% jumpstart performance gain when shifting scenarios without retraining. To our knowledge, this is the first study to establish a robust LLM-RL framework with HITL support for cloud defense. We will release our framework to the community, facilitating the advancement of robust and autonomous defense in cloud networks.
https://arxiv.org/abs/2601.07122
Academic Papers
svg
b022356b243b4e215d6a1b698e3090d743aabf107d00761efeb0abc5a8c8152e
2026-01-13T00:00:00-05:00
ENTRA: Entropy-Based Redundancy Avoidance in Large Language Model Reasoning
arXiv:2601.07123v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) often suffer from overthinking, generating unnecessarily long reasoning chains even for simple tasks. This leads to substantial computational overhead with limited performance gain, primarily due to redundant verification and repetitive generation. While prior work typically constrains output length or optimizes correctness, such coarse supervision fails to guide models toward concise yet accurate inference. In this paper, we propose ENTRA, an entropy-based training framework that suppresses redundant reasoning while preserving performance. ENTRA first estimates the token-level importance using a lightweight Bidirectional Importance Estimation (BIE) method, which accounts for both prediction confidence and forward influence. It then computes a redundancy reward based on the entropy of low-importance tokens, normalized by its theoretical upper bound, and optimizes this reward via reinforcement learning. Experiments on mathematical reasoning benchmarks demonstrate that ENTRA reduces output length by 37% to 53% with no loss in accuracy, and in some cases even gains. Our approach offers a principled and efficient solution to reduce overthinking in LRMs, and provides a generalizable path toward redundancy-aware reasoning optimization.
https://arxiv.org/abs/2601.07123
Academic Papers
svg
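The normalized entropy quantity at the core of the ENTRA abstract is simple to compute: take the tokens flagged as low-importance, measure the Shannon entropy of their renormalized probabilities, and divide by the maximum possible entropy log(n) so the value lies in [0, 1]. The sketch below is an assumed reading of that construction; the paper's BIE importance scores, threshold, and exact reward direction are not specified in the abstract:

```python
import math

def redundancy_reward(token_probs, importance, threshold=0.5):
    # Select tokens below an (assumed) importance threshold, compute the
    # Shannon entropy of their renormalized probabilities, and normalize
    # by the theoretical upper bound log(n), as the abstract describes.
    low = [p for p, w in zip(token_probs, importance) if w < threshold]
    if len(low) < 2:
        return 0.0  # log(1) = 0, so the normalized entropy is undefined
    z = sum(low)
    q = [p / z for p in low]
    entropy = -sum(p * math.log(p) for p in q if p > 0)
    return entropy / math.log(len(low))

probs = [0.4, 0.1, 0.1, 0.1, 0.3]
importance = [0.9, 0.2, 0.2, 0.2, 0.8]
# Three low-importance tokens with equal mass give the maximal (uniform)
# entropy, so the normalized value hits its upper bound of 1.0.
r = redundancy_reward(probs, importance)
```

Normalizing by the upper bound is what makes the reward comparable across reasoning chains of different lengths, which matters when it is optimized with RL.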
61e4d2f3d58a810b33e1f45c3a7e45979dcb61aa7f76e55c5ddda0d29ae5ee37
2026-01-13T00:00:00-05:00
Towards Automated Diagnosis of Inherited Arrhythmias: Combined Arrhythmia Classification Using Lead-Aware Spatial Attention Networks
arXiv:2601.07124v1 Announce Type: new Abstract: Arrhythmogenic right ventricular cardiomyopathy (ARVC) and long QT syndrome (LQTS) are inherited arrhythmia syndromes associated with sudden cardiac death. Deep learning shows promise for ECG interpretation, but multi-class inherited arrhythmia classification with clinically grounded interpretability remains underdeveloped. Our objective was to develop and validate a lead-aware deep learning framework for multi-class (ARVC vs LQTS vs control) and binary inherited arrhythmia classification, and to determine optimal strategies for integrating ECG foundation models within arrhythmia screening tools. We assembled a 13-center Canadian cohort (645 patients; 1,344 ECGs). We evaluated four ECG foundation models using three transfer learning approaches: linear probing, fine-tuning, and combined strategies. We developed lead-aware spatial attention networks (LASAN) and assessed integration strategies combining LASAN with foundation models. Performance was compared against the established foundation model baselines. Lead-group masking quantified disease-specific lead dependence. Fine-tuning outperformed linear probing and combined strategies across all foundation models (mean macro-AUROC 0.904 vs 0.825). The best lead-aware integrations achieved near-ceiling performance (HuBERT-ECG hybrid: macro-AUROC 0.990; ARVC vs control AUROC 0.999; LQTS vs control AUROC 0.994). Lead masking demonstrated physiologic plausibility: V1-V3 were most critical for ARVC detection (4.54% AUROC reduction), while lateral leads were preferentially important for LQTS (2.60% drop). Lead-aware architectures achieved state-of-the-art performance for inherited arrhythmia classification, outperforming all existing published models on both binary and multi-class tasks while demonstrating clinically aligned lead dependence. These findings support potential utility for automated ECG screening pending validation.
https://arxiv.org/abs/2601.07124
Academic Papers
svg
9497fb8200e5bc72fb537a34346de93ce72ac0614ae71328a186e74afbbaf8e1
2026-01-13T00:00:00-05:00
ReinPool: Reinforcement Learning Pooling Multi-Vector Embeddings for Retrieval System
arXiv:2601.07125v1 Announce Type: new Abstract: Multi-vector embedding models have emerged as a powerful paradigm for document retrieval, preserving fine-grained visual and textual details through token-level representations. However, this expressiveness comes at a staggering cost: storing embeddings for every token inflates index sizes by over $1000\times$ compared to single-vector approaches, severely limiting scalability. We introduce \textbf{ReinPool}, a reinforcement learning framework that learns to dynamically filter and pool multi-vector embeddings into compact, retrieval-optimized representations. By training with an inverse retrieval objective and NDCG-based rewards, ReinPool identifies and retains only the most discriminative vectors without requiring manual importance annotations. On the Vidore V2 benchmark across three vision-language embedding models, ReinPool compresses multi-vector representations by $746$--$1249\times$ into single vectors while recovering 76--81\% of full multi-vector retrieval performance. Compared to static mean pooling baselines, ReinPool achieves 22--33\% absolute NDCG@3 improvement, demonstrating that learned selection significantly outperforms heuristic aggregation.
https://arxiv.org/abs/2601.07125
Academic Papers
svg
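The gap ReinPool targets, between static mean pooling and learned selection of discriminative vectors, can be made concrete with a toy comparison. The keep scores below are hand-supplied stand-ins for what would, per the abstract, come from an RL policy trained with NDCG-based rewards; the threshold and functions are illustrative assumptions:

```python
def mean_pool(vectors):
    # Static mean pooling baseline: average all token-level vectors into
    # a single vector, regardless of how informative each token is.
    n, dims = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dims)]

def selective_pool(vectors, keep_scores, threshold=0.5):
    # Sketch of learned selection: pool only the vectors whose keep score
    # clears a threshold, so uninformative tokens no longer dilute the
    # pooled representation. Falls back to full pooling if nothing is kept.
    kept = [v for v, s in zip(vectors, keep_scores) if s >= threshold]
    return mean_pool(kept if kept else vectors)

tokens = [[1.0, 0.0], [0.0, 1.0], [4.0, 4.0]]
baseline = mean_pool(tokens)                         # averages all three
selected = selective_pool(tokens, [0.1, 0.2, 0.9])   # keeps only the third
```

With one discriminative token and two near-noise tokens, the baseline average is pulled toward the noise while the selective pool preserves the informative vector; this is the intuition behind learning which vectors to retain before compressing to a single embedding.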
fc075aee376fb1e8cc64be5d92903df702010256a675778195ba92bd4c1527ee
2026-01-13T00:00:00-05:00
Digital Twin for Ultra-Reliable & Low-Latency 6G Wireless Communications in Dense Urban City
arXiv:2601.07132v1 Announce Type: new Abstract: High-frequency deployments in dense cities are difficult to plan because coverage, interference, and service reliability depend sensitively on local morphology. This paper develops a geometric Digital Twin (DT) of the Sunway City and uses it to study the service implications of a multi-site mmWave deployment. The DT is constructed from geo-referenced three-dimensional meshes of buildings, roads, and open areas, assembled in Blender and exported as a mesh scene. A seven-transmitter downlink at 10 GHz is then embedded into this geometry and evaluated using a GPU accelerated ray tracing engine that returns path-gain and Signal-to-Interference-plus-Noise Ratio (SINR) fields over a dense grid of user locations. These fields are mapped to achievable throughput and compared against representative target rates for immersive extended reality (XR), vehicle-to-everything (V2X) services, and ultra-reliable low-latency communication (URLLC). The resulting maps show that favourable streets and courtyards form narrow high rate corridors surrounded by deep shadows, even within a dense area. In the baseline deployment, one fifth of the simulated area can maintain 100 Mbps URLLC rates, and less than 10% of cells can reach 1.7 Gbps for XR, despite the presence of several rooftop sites. By exploiting the DT, we further quantify the macro-diversity margin between the best and second best serving sites and show that most URLLC-feasible cells have several decibels of SINR headroom that could be harvested through dual connectivity. The study shows how a city DT can translate ray tracing output into service centric metrics and planning insights, complementing both analytical models and expensive measurement campaigns.
https://arxiv.org/abs/2601.07132
Academic Papers
svg
260135d044b59718d8bfd0bb598d19c8cd1bd2198e7cf4559d28274a97872c88
2026-01-13T00:00:00-05:00
Geometry-Aware LoRaWAN Gateway Placement in Dense Urban Cities Using Digital Twins
arXiv:2601.07133v1 Announce Type: new Abstract: LoRaWAN deployments rely on rough range estimates or simplified propagation models to decide where to place/mount gateways. As a result, operators have limited visibility into how rooftop choice, streets, and building shadowing jointly affect coverage and reliability. This paper addresses the problem of gateway placement in dense urban environments by combining a geometry-accurate Digital Twin (DT) with a GPU accelerated ray tracing engine. Existing studies optimize placement on abstract grids or tune models with sparse measurements; few works evaluate LoRaWAN gateways on a full 3D city model using a realistic link budget. In this paper, we develop a DT with ITU radio materials and evaluate eight candidate rooftops for RAK7289 WisGate Edge Pro gateways under a sub-GHz link budget derived from the data sheet. For each rooftop, we obtain Signal-to-Noise Ratios (SNR) on a 5 meter grid, derive robust and edge coverage indicators, and apply a greedy maximum coverage algorithm to rank sites and quantify the benefit of incremental densification. Results show that a single rooftop gateway covers one fifth of the full Sunway twin (i.e., the DT) at a robust SNR threshold, and that six sites still leave large areas of single gateway or out of coverage cells in surrounding residential streets. The findings from this paper show that DT and ray tracing tools allow network operators to avoid expensive real-world trials during planning and to identify whether the planned LoRaWAN gateways are sufficient or additional sites are required.
https://arxiv.org/abs/2601.07133
Academic Papers
svg
6c01dfa9c1111bb04a9ab299ab9ecd12a34f3cbf7b620c616322ebf1233787cd
2026-01-13T00:00:00-05:00
Proof of Reasoning for Privacy Enhanced Federated Blockchain Learning at the Edge
arXiv:2601.07134v1 Announce Type: new Abstract: Consensus mechanisms are the core of any blockchain system. However, the majority of these mechanisms do not target federated learning directly nor do they aid in the aggregation step. This paper introduces Proof of Reasoning (PoR), a novel consensus mechanism specifically designed for federated learning using blockchain, aimed at preserving data privacy, defending against malicious attacks, and enhancing the validation of participating networks. Unlike generic blockchain consensus mechanisms commonly found in the literature, PoR integrates three distinct processes tailored for federated learning. Firstly, a masked autoencoder (MAE) is trained to generate an encoder that functions as a feature map and obfuscates input data, rendering it resistant to human reconstruction and model inversion attacks. Secondly, a downstream classifier is trained at the edge, receiving input from the trained encoder. The downstream network's weights, a single encoded datapoint, the network's output and the ground truth are then added to a block for federated aggregation. Lastly, this data facilitates the aggregation of all participating networks, enabling more complex and verifiable aggregation methods than previously possible. This three-stage process results in more robust networks with significantly reduced computational complexity, maintaining high accuracy by training only the downstream classifier at the edge. PoR scales to large IoT networks with low latency and storage growth, and adapts to evolving data, regulations, and network conditions.
https://arxiv.org/abs/2601.07134
Academic Papers
svg
508beaad1b02ca0449e38acb74e03d1363a2a5707836679fb87369a10b2eb5e0
2026-01-13T00:00:00-05:00
A Large-Scale Study on the Development and Issues of Multi-Agent AI Systems
arXiv:2601.07136v1 Announce Type: new Abstract: The rapid emergence of multi-agent AI systems (MAS), including LangChain, CrewAI, and AutoGen, has shaped how large language model (LLM) applications are developed and orchestrated. However, little is known about how these systems evolve and are maintained in practice. This paper presents the first large-scale empirical study of open-source MAS, analyzing over 42K unique commits and over 4.7K resolved issues across eight leading systems. Our analysis identifies three distinct development profiles: sustained, steady, and burst-driven. These profiles reflect substantial variation in ecosystem maturity. Perfective commits constitute 40.8% of all changes, suggesting that feature enhancement is prioritized over corrective maintenance (27.4%) and adaptive updates (24.3%). Data about issues shows that the most frequent concerns involve bugs (22%), infrastructure (14%), and agent coordination challenges (10%). Issue reporting also increased sharply across all frameworks starting in 2023. Median resolution times range from under one day to about two weeks, with distributions skewed toward fast responses but a minority of issues requiring extended attention. These results highlight both the momentum and the fragility of the current ecosystem, emphasizing the need for improved testing infrastructure, documentation quality, and maintenance practices to ensure long-term reliability and sustainability.
https://arxiv.org/abs/2601.07136
Academic Papers
svg
c71d247e6a8c43c3bba1daf276ab46baf244e669dbd82724336d13c9da8d4699
2026-01-13T00:00:00-05:00
Recovering polynomials over finite fields from noisy character values
arXiv:2601.07137v1 Announce Type: new Abstract: Let $g(X)$ be a polynomial over a finite field ${\mathbb F}_q$ with degree $o(q^{1/2})$, and let $\chi$ be the quadratic residue character. We give a polynomial time algorithm to recover $g(X)$ (up to perfect square factors) given the values of $\chi \circ g$ on ${\mathbb F}_q$, with up to a constant fraction of the values having errors. This was previously unknown even for the case of no errors. We give a similar algorithm for additive characters of polynomials over fields of characteristic $2$. This gives the first polynomial time algorithm for decoding dual-BCH codes of polynomial dimension from a constant fraction of errors. Our algorithms use ideas from Stepanov's polynomial method proof of the classical Weil bounds on character sums, as well as from the Berlekamp-Welch decoding algorithm for Reed-Solomon codes. A crucial role is played by what we call *pseudopolynomials*: high degree polynomials, all of whose derivatives behave like low degree polynomials on ${\mathbb F}_q$. Both these results can be viewed as algorithmic versions of the Weil bounds for this setting.
https://arxiv.org/abs/2601.07137
Academic Papers
svg
a4b4ddea0aad48f4e69cc75adee1f6711bc1992c98a4fb422d2f3102dece8b22
2026-01-13T00:00:00-05:00
AdaField: Generalizable Surface Pressure Modeling with Physics-Informed Pre-training and Flow-Conditioned Adaptation
arXiv:2601.07139v1 Announce Type: new Abstract: The surface pressure field of transportation systems, including cars, trains, and aircraft, is critical for aerodynamic analysis and design. In recent years, deep neural networks have emerged as promising and efficient methods for modeling the surface pressure field, offering alternatives to computationally expensive CFD simulations. Currently, large-scale public datasets are available for domains such as automotive aerodynamics. However, in many specialized areas, such as high-speed trains, data scarcity remains a fundamental challenge in aerodynamic modeling, severely limiting the effectiveness of standard neural network approaches. To address this limitation, we propose the Adaptive Field Learning Framework (AdaField), which pre-trains the model on public large-scale datasets to improve generalization in sub-domains with limited data. AdaField comprises two key components. First, we design the Semantic Aggregation Point Transformer (SAPT) as a high-performance backbone that efficiently handles large-scale point clouds for surface pressure prediction. Second, to address the substantial differences in flow conditions and geometric scales across different aerodynamic subdomains, we propose the Flow-Conditioned Adapter (FCA) and Physics-Informed Data Augmentation (PIDA). FCA enables the model to flexibly adapt to different flow conditions with a small set of trainable parameters, while PIDA expands the training data distribution to better cover variations in object scale and velocity. Our experiments show that AdaField achieves SOTA performance on the DrivAerNet++ dataset and can be effectively transferred to train and aircraft scenarios with minimal fine-tuning. These results highlight AdaField's potential as a generalizable and transferable solution for surface pressure field modeling, supporting efficient aerodynamic design across a wide range of transportation systems.
https://arxiv.org/abs/2601.07139
Academic Papers
svg
1c36584d562e7d525fcc912f92156333d0337761161975541332e8fbe2043cdc
2026-01-13T00:00:00-05:00
MacPrompt: Macaronic-guided Jailbreak against Text-to-Image Models
arXiv:2601.07141v1 Announce Type: new Abstract: Text-to-image (T2I) models have raised increasing safety concerns due to their capacity to generate NSFW and other banned objects. To mitigate these risks, safety filters and concept removal techniques have been introduced to block inappropriate prompts or erase sensitive concepts from the models. However, existing defense methods are ill-equipped to handle diverse adversarial prompts. In this work, we introduce MacPrompt, a novel black-box and cross-lingual attack that reveals previously overlooked vulnerabilities in T2I safety mechanisms. Unlike existing attacks that rely on synonym substitution or prompt obfuscation, MacPrompt constructs macaronic adversarial prompts by performing cross-lingual character-level recombination of harmful terms, enabling fine-grained control over both semantics and appearance. By leveraging this design, MacPrompt crafts prompts with high semantic similarity to the original harmful inputs (up to 0.96) while bypassing major safety filters (up to 100%). More critically, it achieves attack success rates as high as 92% for sex-related content and 90% for violence, effectively breaking even state-of-the-art concept removal defenses. These results underscore the pressing need to reassess the robustness of existing T2I safety mechanisms against linguistically diverse and fine-grained adversarial strategies.
https://arxiv.org/abs/2601.07141
Academic Papers
svg
924f4bb22fd4b573a1cc2292341974b5c1bf87f9ce43dfab3a1efc9805ada661
2026-01-13T00:00:00-05:00
EZBlender: Efficient 3D Editing with Plan-and-ReAct Agent
arXiv:2601.07143v1 Announce Type: new Abstract: As a cornerstone of the modern digital economy, 3D modeling and rendering demand substantial resources and manual effort when scene editing is performed in the traditional manner. Despite recent progress in VLM-based agents for 3D editing, the fundamental trade-off between editing precision and agent responsiveness remains unresolved. To overcome these limitations, we present EZBlender, a Blender agent with a hybrid framework that combines planning-based task decomposition and reactive local autonomy for efficient human-AI collaboration and semantically faithful 3D editing. Specifically, this unexplored Plan-and-ReAct design not only preserves editing quality but also significantly reduces latency and computational cost. To further validate the efficiency and effectiveness of the proposed edge-autonomy architecture, we construct a dedicated multi-tasking benchmark that has not been systematically investigated in prior research. In addition, we provide a comprehensive analysis of language model preference, system responsiveness, and economic efficiency.
https://arxiv.org/abs/2601.07143
Academic Papers
svg