id
stringlengths
64
64
published
stringlengths
19
25
title
stringlengths
7
262
description
stringlengths
6
54.4k
link
stringlengths
31
227
category
stringclasses
6 values
image
stringlengths
3
247
53907fb12fd52df703b97e0bbe1ee5601f94a128be9696df978a277aee9a04e9
2026-01-21T00:00:00-05:00
Capability-Aware Early-Stage Research Idea Evaluation
arXiv:2601.12473v1 Announce Type: new Abstract: Predicting the outcomes of research ideas at their conceptual stage (i.e. before significant resources are committed) holds great potential for optimizing scientific resource allocation and research planning. While existing methods rely heavily on finished manuscripts or peer reviews, we propose a novel capability-aware framework that predicts paper acceptance and ratings using only author information and research ideas, without requiring full text or experimental results. Our approach integrates author information, (inferred) capability presentation, and research ideas through a three-way transformer architecture with flexible fusion mechanisms. We also introduce a two-stage architecture for learning the capability representation given the author information and idea. Experiments show that our method significantly outperform the single-way models by finetuning bert-base and bert-large, and the capability predicting significantly increase the predictive accuracy of the final model. The proposed method can be applied in both early-stage research outcome prediction and scientific resource allocation.
https://arxiv.org/abs/2601.12473
Academic Papers
svg
61b4ae1662f4373f938dfc254645ebff6f1c1e41618aa0ddd3d6d7a8ff6187da
2026-01-21T00:00:00-05:00
Language-Based Swarm Perception: Decentralized Person Re-Identification via Natural Language Descriptions
arXiv:2601.12479v1 Announce Type: new Abstract: We introduce a method for decentralized person re-identification in robot swarms that leverages natural language as the primary representational modality. Unlike traditional approaches that rely on opaque visual embeddings -- high-dimensional feature vectors extracted from images -- the proposed method uses human-readable language to represent observations. Each robot locally detects and describes individuals using a vision-language model (VLM), producing textual descriptions of appearance instead of feature vectors. These descriptions are compared and clustered across the swarm without centralized coordination, allowing robots to collaboratively group observations of the same individual. Each cluster is distilled into a representative description by a language model, providing an interpretable, concise summary of the swarm's collective perception. This approach enables natural-language querying, enhances transparency, and supports explainable swarm behavior. Preliminary experiments demonstrate competitive performance in identity consistency and interpretability compared to embedding-based methods, despite current limitations in text similarity and computational load. Ongoing work explores refined similarity metrics, semantic navigation, and the extension of language-based perception to environmental elements. This work prioritizes decentralized perception and communication, while active navigation remains an open direction for future study.
https://arxiv.org/abs/2601.12479
Academic Papers
svg
2d1f5ee5a589c1d1662d4846703b154576d88b7e1e3e6fa57f09012fa1ae225e
2026-01-21T00:00:00-05:00
A Unified Neural Codec Language Model for Selective Editable Text to Speech Generation
arXiv:2601.12480v1 Announce Type: new Abstract: Neural codec language models achieve impressive zero-shot Text-to-Speech (TTS) by fully imitating the acoustic characteristics of a short speech prompt, including timbre, prosody, and paralinguistic information. However, such holistic imitation limits their ability to isolate and control individual attributes. In this paper, we present a unified codec language model SpeechEdit that extends zero-shot TTS with a selective control mechanism. By default, SpeechEdit reproduces the complete acoustic profile inferred from the speech prompt, but it selectively overrides only the attributes specified by explicit control instructions. To enable controllable modeling, SpeechEdit is trained on our newly constructed LibriEdit dataset, which provides delta (difference-aware) training pairs derived from LibriHeavy. Experimental results show that our approach maintains naturalness and robustness while offering flexible and localized control over desired attributes. Audio samples are available at https://speech-editing.github.io/speech-editing/.
https://arxiv.org/abs/2601.12480
Academic Papers
svg
32ca64b163aea88ccf6eb8a423b3df685b72688c171b4caa6b77b8348afdc7b0
2026-01-21T00:00:00-05:00
NeuralFur: Animal Fur Reconstruction From Multi-View Images
arXiv:2601.12481v1 Announce Type: new Abstract: Reconstructing realistic animal fur geometry from images is a challenging task due to the fine-scale details, self-occlusion, and view-dependent appearance of fur. In contrast to human hairstyle reconstruction, there are also no datasets that can be leveraged to learn a fur prior for different animals. In this work, we present a first multi-view-based method for high-fidelity 3D fur modeling of animals using a strand-based representation, leveraging the general knowledge of a vision language model. Given multi-view RGB images, we first reconstruct a coarse surface geometry using traditional multi-view stereo techniques. We then use a vision language model (VLM) system to retrieve information about the realistic length structure of the fur for each part of the body. We use this knowledge to construct the animal's furless geometry and grow strands atop it. The fur reconstruction is supervised with both geometric and photometric losses computed from multi-view images. To mitigate orientation ambiguities stemming from the Gabor filters that are applied to the input images, we additionally utilize the VLM to guide the strands' growth direction and their relation to the gravity vector that we incorporate as a loss. With this new schema of using a VLM to guide 3D reconstruction from multi-view inputs, we show generalization across a variety of animals with different fur types. For additional results and code, please refer to https://neuralfur.is.tue.mpg.de.
https://arxiv.org/abs/2601.12481
Academic Papers
svg
6c442e23ed93cccb5f766368dc6a8f335ade0fb8405d5345a0a71dbb2a3a432b
2026-01-21T00:00:00-05:00
A Multimodal Assistive System for Product Localization and Retrieval for People who are Blind or have Low Vision
arXiv:2601.12486v1 Announce Type: new Abstract: Shopping is a routine activity for sighted individuals, yet for people who are blind or have low vision (pBLV), locating and retrieving products in physical environments remains a challenge. This paper presents a multimodal wearable assistive system that integrates object detection with vision-language models to support independent product or item retrieval, with the goal of enhancing users'autonomy and sense of agency. The system operates through three phases: product search, which identifies target products using YOLO-World detection combined with embedding similarity and color histogram matching; product navigation, which provides spatialized sonification and VLM-generated verbal descriptions to guide users toward the target; and product correction, which verifies whether the user has reached the correct product and provides corrective feedback when necessary. Technical evaluation demonstrated promising performance across all modules, with product detection achieving near-perfect accuracy at close range and high accuracy when facing shelves within 1.5 m. VLM-based navigation achieved up to 94.4% accuracy, and correction accuracy exceeded 86% under optimal model configurations. These results demonstrate the system's potential to address the last-meter problem in assistive shopping. Future work will focus on user studies with pBLV participants and integration with multi-scale navigation ecosystems.
https://arxiv.org/abs/2601.12486
Academic Papers
svg
eb8ea6d446eb745c6eaf8436927f7804f25abb2a6f680aac213541860c80d923
2026-01-21T00:00:00-05:00
VASTU: Value-Aligned Social Toolkit for Online Content Curation
arXiv:2601.12491v1 Announce Type: new Abstract: Detecting what content communities value is a foundational challenge for social computing systems -- from feed curation and content ranking to moderation tools and personalized recommendation systems. Yet existing approaches remain fragmented across methodological paradigms, and it remains unclear which methods best capture community-specific notions of value. We introduce VASTU (Value-Aligned Social Toolkit for Online Content Curation), a benchmark and evaluation framework for systematically comparing approaches to detecting community-valued content. VASTU includes a dataset of 75,000 comments from 15 diverse Reddit communities, annotated with community approval labels and rich linguistic features. Using VASTU, we evaluate feature-based models, transformers, prompted and fine-tuned language models under global versus community-specific training regimes. We find that community-specific models consistently outperform global approaches, with fine-tuned transformers achieving the strongest performance (0.72 AUROC). Notably, fine-tuned SLMs (0.65 AUROC) substantially outperform prompted LLMs (0.60 AUROC) despite being 100 times smaller. Counterintuitively, chain-of-thought prompting provides no benefit, and reasoning models perform the worst (0.53 AUROC), suggesting this task requires learning community norms rather than test-time reasoning. By releasing VASTU, we provide a standardized benchmark to advance research on value-aligned sociotechnical systems.
https://arxiv.org/abs/2601.12491
Academic Papers
svg
f4513b9550e6d9be56e218c0d460fe20e873b9731b7401b192e72a9019df0517
2026-01-21T00:00:00-05:00
Histopath-C: Towards Realistic Domain Shifts for Histopathology Vision-Language Adaptation
arXiv:2601.12493v1 Announce Type: new Abstract: Medical Vision-language models (VLMs) have shown remarkable performances in various medical imaging domains such as histo\-pathology by leveraging pre-trained, contrastive models that exploit visual and textual information. However, histopathology images may exhibit severe domain shifts, such as staining, contamination, blurring, and noise, which may severely degrade the VLM's downstream performance. In this work, we introduce Histopath-C, a new benchmark with realistic synthetic corruptions designed to mimic real-world distribution shifts observed in digital histopathology. Our framework dynamically applies corruptions to any available dataset and evaluates Test-Time Adaptation (TTA) mechanisms on the fly. We then propose LATTE, a transductive, low-rank adaptation strategy that exploits multiple text templates, mitigating the sensitivity of histopathology VLMs to diverse text inputs. Our approach outperforms state-of-the-art TTA methods originally designed for natural images across a breadth of histopathology datasets, demonstrating the effectiveness of our proposed design for robust adaptation in histopathology images. Code and data are available at https://github.com/Mehrdad-Noori/Histopath-C.
https://arxiv.org/abs/2601.12493
Academic Papers
svg
ce1510ce1fa90f0cdb2e480ce0b328be5a73b6acc8559fd790e8ac571a89a232
2026-01-21T00:00:00-05:00
Harmonizing the Arabic Audio Space with Data Scheduling
arXiv:2601.12494v1 Announce Type: new Abstract: Audio large language models (LLMs) enable unified speech understanding and generation, yet their adaptation to linguistically complex, dialect-rich settings remains underexplored. This paper presents the first systematic study of multi-task instruction tuning for an Arabic-centric audio LLM, covering a hierarchy of generative tasks (ASR, speech summarization) and discriminative tasks (dialect and emotion identification). To support this study, we introduce AraMega-SSum, a novel dataset for Arabic speech summarization. We fine-tune Qwen2.5-Omni (7B) and propose Task-Progressive Curriculum (TPC) along with Aligner-Based Diverse Sampling (ADS), a strategy that constructs information-dense batches by selecting task- and label-balanced examples. Our results reveal a critical efficiency, robustness trade-off: while ADS accelerates initial convergence and boosts paralinguistic F1-scores, its inherent gradient volatility can destabilize generative decoding under prolonged training. Furthermore, while the TPC stabilizes core acoustic mapping, it often induces negative transfer in downstream tasks. We demonstrate that a Hybrid TPC+ADS Strategy provides an optimal training ``recipe'', first establishing a robust representative foundation before employing diversity-aware refinement to capture fine-grained nuances. These findings offer practical guidance for the efficient adaptation of Omni-models in complex, low-resource multimodal environments.
https://arxiv.org/abs/2601.12494
Academic Papers
svg
0e6e6fc3277bc9c378ba7a6f1fe7511757337e3c6f2acef29d5f0cdd3f45aef8
2026-01-21T00:00:00-05:00
Failure Modes in Multi-Hop QA: The Weakest Link Law and the Recognition Bottleneck
arXiv:2601.12499v1 Announce Type: new Abstract: Despite scaling to massive context windows, Large Language Models (LLMs) struggle with multi-hop reasoning due to inherent position bias, which causes them to overlook information at certain positions. Whether these failures stem from an inability to locate evidence (recognition failure) or integrate it (synthesis failure) is unclear. We introduce Multi-Focus Attention Instruction (MFAI), a semantic probe to disentangle these mechanisms by explicitly steering attention towards selected positions. Across 5 LLMs on two multi-hop QA tasks (MuSiQue and NeoQA), we establish the "Weakest Link Law": multi-hop reasoning performance collapses to the performance level of the least visible evidence. Crucially, this failure is governed by absolute position rather than the linear distance between facts (performance variance $<3%$). We further identify a duality in attention steering: while matched MFAI resolves recognition bottlenecks, improving accuracy by up to 11.5% in low-visibility positions, misleading MFAI triggers confusion in real-world tasks but is successfully filtered in synthetic tasks. Finally, we demonstrate that "thinking" models that utilize System-2 reasoning, effectively locate and integrate the required information, matching gold-only baselines even in noisy, long-context settings.
https://arxiv.org/abs/2601.12499
Academic Papers
svg
930626c2501face8687cd1d4ddcfe2503c0489b8b325e213dd99e09d53710e76
2026-01-21T00:00:00-05:00
Video Individual Counting and Tracking from Moving Drones: A Benchmark and Methods
arXiv:2601.12500v1 Announce Type: new Abstract: Counting and tracking dense crowds in large-scale scenes is highly challenging, yet existing methods mainly rely on datasets captured by fixed cameras, which provide limited spatial coverage and are inadequate for large-scale dense crowd analysis. To address this limitation, we propose a flexible solution using moving drones to capture videos and perform video-level crowd counting and tracking of unique pedestrians across entire scenes. We introduce MovingDroneCrowd++, the largest video-level dataset for dense crowd counting and tracking captured by moving drones, covering diverse and complex conditions with varying flight altitudes, camera angles, and illumination. Existing methods fail to achieve satisfactory performance on this dataset. To this end, we propose GD3A (Global Density Map Decomposition via Descriptor Association), a density map-based video individual counting method that avoids explicit localization. GD3A establishes pixel-level correspondences between pedestrian descriptors across consecutive frames via optimal transport with an adaptive dustbin score, enabling the decomposition of global density maps into shared, inflow, and outflow components. Building on this framework, we further introduce DVTrack, which converts descriptor-level matching into instance-level associations through a descriptor voting mechanism for pedestrian tracking. Experimental results show that our methods significantly outperform existing approaches under dense crowds and complex motion, reducing counting error by 47.4 percent and improving tracking performance by 39.2 percent.
https://arxiv.org/abs/2601.12500
Academic Papers
svg
ecf0897e88488870e091cf9b91e91bd8145cffdff0ec9cdb1df5fe54eca82669
2026-01-21T00:00:00-05:00
Semidefinite Programming for Quantum Channel Learning
arXiv:2601.12502v1 Announce Type: new Abstract: The problem of reconstructing a quantum channel from a sample of classical data is considered. When the total fidelity can be represented as a ratio of two quadratic forms (e.g., in the case of mapping a mixed state to a pure state, projective operators, unitary learning, and others), Semidefinite Programming (SDP) can be applied to solve the fidelity optimization problem with respect to the Choi matrix. A remarkable feature of SDP is that the optimization is convex, which allows the problem to be efficiently solved by a variety of numerical algorithms. We have tested several commercially available SDP solvers, all of which allowed for the reconstruction of quantum channels of different forms. A notable feature is that the Kraus rank of the obtained quantum channel typically comprises less than a few percent of its maximal possible value. This suggests that a relatively small Kraus rank quantum channel is typically sufficient to describe experimentally observed classical data. The theory was also applied to the problem of reconstructing projective operators from data. Finally, we discuss a classical computational model based on quantum channel transformation, performed and calculated on a classical computer, possibly hardware-optimized.
https://arxiv.org/abs/2601.12502
Academic Papers
svg
c3f09bee7eef646d885ce2097e5b8cb68801804f955e514f2ba97808b60a2887
2026-01-21T00:00:00-05:00
Hard Clique Formulas for Resolution
arXiv:2601.12503v1 Announce Type: new Abstract: We show how to convert any unsatisfiable 3-CNF formula which is sparse and exponentially hard to refute in Resolution into a negative instance of the $k$-clique problem whose corresponding natural encoding as a CNF formula is $n^{\Omega(k)}$-hard to refute in Resolution. This applies to any function $k = k(n)$ of the number $n$ of vertices, provided $k_0 \leq k \leq n^{1/c_0}$, where $k_0$ and $c_0$ are small constants. We establish this by demonstrating that Resolution can simulate the correctness proof of a particular kind of reduction from 3-SAT to the parameterized clique problem. This also re-establishes the known conditional hardness result for $k$-clique which states that if the Exponential Time Hypothesis (ETH) holds, then the $k$-clique problem cannot be solved in time $n^{o(k)}$. Since it is known that the analogue of ETH holds for Resolution, unconditionally and with explicit hard instances, this gives a way to obtain explicit instances of $k$-clique that are unconditionally $n^{\Omega(k)}$-hard to refute in Resolution. This answers an open problem that appeared published in the literature at least twice.
https://arxiv.org/abs/2601.12503
Academic Papers
svg
6b90cd77c58dafaca6923038eee3e5d7f5cc18cf2050e7a0b9edd88bb365e77b
2026-01-21T00:00:00-05:00
DoPE: Decoy Oriented Perturbation Encapsulation Human-Readable, AI-Hostile Documents for Academic Integrity
arXiv:2601.12505v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) can directly consume exam documents, threatening conventional assessments and academic integrity. We present DoPE (Decoy-Oriented Perturbation Encapsulation), a document-layer defense framework that embeds semantic decoys into PDF/HTML assessments to exploit render-parse discrepancies in MLLM pipelines. By instrumenting exams at authoring time, DoPE provides model-agnostic prevention (stop or confound automated solving) and detection (flag blind AI reliance) without relying on conventional one-shot classifiers. We formalize prevention and detection tasks, and introduce FewSoRT-Q, an LLM-guided pipeline that generates question-level semantic decoys and FewSoRT-D to encapsulate them into watermarked documents. We evaluate on Integrity-Bench, a novel benchmark of 1826 exams (PDF+HTML) derived from public QA datasets and OpenCourseWare. Against black-box MLLMs from OpenAI and Anthropic, DoPE yields strong empirical gains: a 91.4% detection rate at an 8.7% false-positive rate using an LLM-as-Judge verifier, and prevents successful completion or induces decoy-aligned failures in 96.3% of attempts. We release Integrity-Bench, our toolkit, and evaluation code to enable reproducible study of document-layer defenses for academic integrity.
https://arxiv.org/abs/2601.12505
Academic Papers
svg
d472320fe8fa1865db1ea56ce5ca3174439d8d8784e2b0432c0dfded401b42ff
2026-01-21T00:00:00-05:00
SDCoNet: Saliency-Driven Multi-Task Collaborative Network for Remote Sensing Object Detection
arXiv:2601.12507v1 Announce Type: new Abstract: In remote sensing images, complex backgrounds, weak object signals, and small object scales make accurate detection particularly challenging, especially under low-quality imaging conditions. A common strategy is to integrate single-image super-resolution (SR) before detection; however, such serial pipelines often suffer from misaligned optimization objectives, feature redundancy, and a lack of effective interaction between SR and detection. To address these issues, we propose a Saliency-Driven multi-task Collaborative Network (SDCoNet) that couples SR and detection through implicit feature sharing while preserving task specificity. SDCoNet employs the swin transformer-based shared encoder, where hierarchical window-shifted self-attention supports cross-task feature collaboration and adaptively balances the trade-off between texture refinement and semantic representation. In addition, a multi-scale saliency prediction module produces importance scores to select key tokens, enabling focused attention on weak object regions, suppression of background clutter, and suppression of adverse features introduced by multi-task coupling. Furthermore, a gradient routing strategy is introduced to mitigate optimization conflicts. It first stabilizes detection semantics and subsequently routes SR gradients along a detection-oriented direction, enabling the framework to guide the SR branch to generate high-frequency details that are explicitly beneficial for detection. Experiments on public datasets, including NWPU VHR-10-Split, DOTAv1.5-Split, and HRSSD-Split, demonstrate that the proposed method, while maintaining competitive computational efficiency, significantly outperforms existing mainstream algorithms in small object detection on low-quality remote sensing images. Our code is available at https://github.com/qiruo-ya/SDCoNet.
https://arxiv.org/abs/2601.12507
Academic Papers
svg
e28f225b21694c74834ecf4c3a6c741cf4e605550228fc3c64c637e9f077110e
2026-01-21T00:00:00-05:00
AlphaSyndrome: Tackling the Syndrome Measurement Circuit Scheduling Problem for QEC Codes
arXiv:2601.12509v1 Announce Type: new Abstract: Quantum error correction (QEC) is essential for scalable quantum computing, yet repeated syndrome-measurement cycles dominate its spacetime and hardware cost. Although stabilizers commute and admit many valid execution orders, different schedules induce distinct error-propagation paths under realistic noise, leading to large variations in logical error rate. Outside of surface codes, effective syndrome-measurement scheduling remains largely unexplored. We present AlphaSyndrome, an automated synthesis framework for scheduling syndrome-measurement circuits in general commuting-stabilizer codes under minimal assumptions: mutually commuting stabilizers and a heuristic decoder. AlphaSyndrome formulates scheduling as an optimization problem that shapes error propagation to (i) avoid patterns close to logical operators and (ii) remain within the decoder's correctable region. The framework uses Monte Carlo Tree Search (MCTS) to explore ordering and parallelism, guided by code structure and decoder feedback. Across diverse code families, sizes, and decoders, AlphaSyndrome reduces logical error rates by 80.6% on average (up to 96.2%) relative to depth-optimal baselines, matches Google's hand-crafted surface-code schedules, and outperforms IBM's schedule for the Bivariate Bicycle code.
https://arxiv.org/abs/2601.12509
Academic Papers
svg
d86daea26e6faa89868a59c05bcdf0f8eec16971f243b1686dcf876b71c0e2fc
2026-01-21T00:00:00-05:00
Fine-Tuning Cycle-GAN for Domain Adaptation of MRI Images
arXiv:2601.12512v1 Announce Type: new Abstract: Magnetic Resonance Imaging (MRI) scans acquired from different scanners or institutions often suffer from domain shifts owing to variations in hardware, protocols, and acquisition parameters. This discrepancy degrades the performance of deep learning models trained on source domain data when applied to target domain images. In this study, we propose a Cycle-GAN-based model for unsupervised medical-image domain adaptation. Leveraging CycleGANs, our model learns bidirectional mappings between the source and target domains without paired training data, preserving the anatomical content of the images. By leveraging Cycle-GAN capabilities with content and disparity loss for adaptation tasks, we ensured image-domain adaptation while maintaining image integrity. Several experiments on MRI datasets demonstrated the efficacy of our model in bidirectional domain adaptation without labelled data. Furthermore, research offers promising avenues for improving the diagnostic accuracy of healthcare. The statistical results confirm that our approach improves model performance and reduces domain-related variability, thus contributing to more precise and consistent medical image analysis.
https://arxiv.org/abs/2601.12512
Academic Papers
svg
5cea11da0cd1e884ad2c191cd52932b99dc2f7d9a3bd0115e796a5333f9f2dde
2026-01-21T00:00:00-05:00
Cooperative Multi-agent RL with Communication Constraints
arXiv:2601.12518v1 Announce Type: new Abstract: Cooperative MARL often assumes frequent access to global information in a data buffer, such as team rewards or other agents' actions, which is typically unrealistic in decentralized MARL systems due to high communication costs. When communication is limited, agents must rely on outdated information to estimate gradients and update their policies. A common approach to handle missing data is called importance sampling, in which we reweigh old data from a base policy to estimate gradients for the current policy. However, it quickly becomes unstable when the communication is limited (i.e. missing data probability is high), so that the base policy in importance sampling is outdated. To address this issue, we propose a technique called base policy prediction, which utilizes old gradients to predict the policy update and collect samples for a sequence of base policies, which reduces the gap between the base policy and the current policy. This approach enables effective learning with significantly fewer communication rounds, since the samples of predicted base policies could be collected within one communication round. Theoretically, we show that our algorithm converges to an $\varepsilon$-Nash equilibrium in potential games with only $O(\varepsilon^{-3/4})$ communication rounds and $O(poly(\max_i |A_i|)\varepsilon^{-11/4})$ samples, improving existing state-of-the-art results in communication cost, as well as sample complexity without the exponential dependence on the joint action space size. We also extend these results to general Markov Cooperative Games to find an agent-wise local maximum. Empirically, we test the base policy prediction algorithm in both simulated games and MAPPO for complex environments.
https://arxiv.org/abs/2601.12518
Academic Papers
svg
aa01da9ef1dd3c9c0fb9e0a94eacbdc8f98cc3ffb3268c5bc7055095f6e2917d
2026-01-21T00:00:00-05:00
Learning Relativistic Geodesics and Chaotic Dynamics via Stabilized Lagrangian Neural Networks
arXiv:2601.12519v1 Announce Type: new Abstract: Lagrangian Neural Networks (LNNs) can learn arbitrary Lagrangians from trajectory data, but their unusual optimization objective leads to significant training instabilities that limit their application to complex systems. We propose several improvements that address these fundamental challenges, namely, a Hessian regularization scheme that penalizes unphysical signatures in the Lagrangian's second derivatives with respect to velocities, preventing the network from learning unstable dynamics, activation functions that are better suited to the problem of learning Lagrangians, and a physics-aware coordinate scaling that improves stability. We systematically evaluate these techniques alongside previously proposed methods for improving stability. Our improved architecture successfully trains on systems of unprecedented complexity, including triple pendulums, and achieved 96.6\% lower validation loss value and 90.68\% better stability than baseline LNNs in double pendulum systems. With the improved framework, we show that our LNNs can learn Lagrangians representing geodesic motion in both non-relativistic and general relativistic settings. To deal with the relativistic setting, we extended our regularization to penalize violations of Lorentzian signatures, which allowed us to predict a geodesic Lagrangian under AdS\textsubscript{4} spacetime metric directly from trajectory data, which to our knowledge has not been done in the literature before. This opens new possibilities for automated discovery of geometric structures in physics, including extraction of spacetime metric tensor components from geodesic trajectories. While our approach inherits some limitations of the original LNN framework, particularly the requirement for invertible Hessians, it significantly expands the practical applicability of LNNs for scientific discovery tasks.
https://arxiv.org/abs/2601.12519
Academic Papers
svg
8aa00ac53f55e26080b0401aeeb25efbbbba07612133893025d312a122938a25
2026-01-21T00:00:00-05:00
Improved Bug Localization with AI Agents Leveraging Hypothesis and Dynamic Cognition
arXiv:2601.12522v1 Announce Type: new Abstract: Software bugs cost technology providers (e.g., AT&amp;T) billions annually and cause developers to spend roughly 50% of their time on bug resolution. Traditional methods for bug localization often analyze the suspiciousness of code components (e.g., methods, documents) in isolation, overlooking their connections with other components in the codebase. Recent advances in Large Language Models (LLMs) and agentic AI techniques have shown strong potential for code understanding, but still lack causal reasoning during code exploration and struggle to manage growing context effectively, limiting their capability. In this paper, we present a novel agentic technique for bug localization -- CogniGent -- that overcomes the limitations above by leveraging multiple AI agents capable of causal reasoning, call-graph-based root cause analysis and context engineering. It emulates developers-inspired debugging practices (a.k.a., dynamic cognitive debugging) and conducts hypothesis testing to support bug localization. We evaluate CogniGent on a curated dataset of 591 bug reports using three widely adopted performance metrics and compare it against six established baselines from the literature. Experimental results show that our technique consistently outperformed existing traditional and LLM-based techniques, achieving MAP improvements of 23.33-38.57% at the document and method levels. Similar gains were observed in MRR, with increases of 25.14-53.74% at both granularity levels. Statistical significance tests also confirm the superiority of our technique. By addressing the reasoning, dependency, and context limitations, CogniGent advances the state of bug localization, bridging human-like cognition with agentic automation for improved performance.
https://arxiv.org/abs/2601.12522
Academic Papers
svg
2e6ddbbbd27de037eaca1eafce785e0e22cccbd2470556870797e40f39c94c3a
2026-01-21T00:00:00-05:00
Enabling High-Curvature Navigation in Eversion Robots through Buckle-Inducing Constrictive Bands
arXiv:2601.12523v1 Announce Type: new Abstract: Tip-growing eversion robots are renowned for their ability to access remote spaces through narrow passages. However, achieving reliable navigation remains a significant challenge. Existing solutions often rely on artificial muscles integrated into the robot body or active tip-steering mechanisms. While effective, these additions introduce structural complexity and compromise the defining advantages of eversion robots: their inherent softness and compliance. In this paper, we propose a passive approach to reduce bending stiffness by purposefully introducing buckling points along the robot's outer wall. We achieve this by integrating inextensible diameter-reducing circumferential bands at regular intervals along the robot body facilitating forward motion through tortuous, obstacle cluttered paths. Rather than relying on active steering, our approach leverages the robot's natural interaction with the environment, allowing for smooth, compliant navigation. We present a Cosserat rod-based mathematical model to quantify this behavior, capturing the local stiffness reductions caused by the constricting bands and their impact on global bending mechanics. Experimental results demonstrate that these bands reduce the robot's stiffness when bent at the tip by up to 91 percent, enabling consistent traversal of 180 degree bends with a bending radius of as low as 25 mm-notably lower than the 35 mm achievable by standard eversion robots under identical conditions. The feasibility of the proposed method is further demonstrated through a case study in a colon phantom. By significantly improving maneuverability without sacrificing softness or increasing mechanical complexity, this approach expands the applicability of eversion robots in highly curved pathways, whether in relation to pipe inspection or medical procedures such as colonoscopy.
https://arxiv.org/abs/2601.12523
Academic Papers
svg
fc578aa3a068bd154cee943bbc5ecb09c2b0db66f1bf6a7131c30fc7b0af861a
2026-01-21T00:00:00-05:00
SGCP: A Self-Organized Game-Theoretic Framework For Collaborative Perception
arXiv:2601.12524v1 Announce Type: new Abstract: Collaborative perception holds great promise for improving safety in autonomous driving, particularly in dense traffic where vehicles can share sensory information to overcome individual blind spots and extend awareness. However, deploying such collaboration at scale remains difficult when communication bandwidth is limited and no roadside infrastructure is available. To overcome these limitations, we introduce a fully decentralized framework that enables vehicles to self organize into cooperative groups using only vehicle to vehicle communication. The approach decomposes the problem into two sequential game theoretic stages. In the first stage, vehicles form stable clusters by evaluating mutual sensing complementarity and motion coherence, and each cluster elects a coordinator. In the second stage, the coordinator guides its members to selectively transmit point cloud segments from perceptually salient regions through a non cooperative potential game, enabling efficient local fusion. Global scene understanding is then achieved by exchanging compact detection messages across clusters rather than raw sensor data. We design distributed algorithms for both stages that guarantee monotonic improvement of the system wide potential function. Comprehensive experiments on the CARLA-OpenCDA-NS3 co-simulation platform show that our method reduces communication overhead while delivering higher perception accuracy and wider effective coverage compared to existing baselines.
https://arxiv.org/abs/2601.12524
Academic Papers
svg
cbd8540df7bd659811a1abcc74d2d9ff2ca7e9cca4576611356ef91c78961b6b
2026-01-21T00:00:00-05:00
Approximating splits for decision trees quickly in sparse data streams
arXiv:2601.12525v1 Announce Type: new Abstract: Decision trees are one of the most popular classifiers in the machine learning literature. While the most common decision tree learning algorithms treat data as a batch, numerous algorithms have been proposed to construct decision trees from a data stream. A standard training strategy involves augmenting the current tree by changing a leaf node into a split. Here we typically maintain counters in each leaf which allow us to determine the optimal split, and whether the split should be done. In this paper we focus on how to speed up the search for the optimal split when dealing with sparse binary features and a binary class. We focus on finding splits that have the approximately optimal information gain or Gini index. In both cases finding the optimal split can be done in $O(d)$ time, where $d$ is the number of features. We propose an algorithm that yields $(1 + \alpha)$ approximation when using conditional entropy in amortized $O(\alpha^{-1}(1 + m\log d) \log \log n)$ time, where $m$ is the number of 1s in a data point, and $n$ is the number of data points. Similarly, for Gini index, we achieve $(1 + \alpha)$ approximation in amortized $O(\alpha^{-1} + m \log d)$ time. Our approach is beneficial for sparse data where $m \ll d$. In our experiments we find almost-optimal splits efficiently, faster than the baseline, overperforming the theoretical approximation guarantees.
https://arxiv.org/abs/2601.12525
Academic Papers
svg
9f8b9bb4240c5ed0d187c0820978452b656c3cf10258855a0e08cf0375248162
2026-01-21T00:00:00-05:00
Deep Feature Deformation Weights
arXiv:2601.12527v1 Announce Type: new Abstract: Handle-based mesh deformation has been a long-standing paradigm in computer graphics, enabling intuitive shape edits from sparse controls. Classic techniques offer precise and rapid deformation control. However, they solve an optimization problem with constraints defined by control handle placement, requiring a user to know apriori the ideal distribution of handles on the shape to accomplish the desired edit. The mapping from handle set to deformation behavior is often unintuitive and, importantly, non-semantic. Modern data-driven methods, on the other hand, leverage a data prior to obtain semantic edits, but are slow and imprecise. We propose a technique that fuses the semantic prior of data with the precise control and speed of traditional frameworks. Our approach is surprisingly simple yet effective: deep feature proximity makes for smooth and semantic deformation weights, with no need for additional regularization. The weights can be computed in real-time for any surface point, whereas prior methods require optimization for new handles. Moreover, the semantic prior from deep features enables co-deformation of semantic parts. We introduce an improved feature distillation pipeline, barycentric feature distillation, which efficiently uses the visual signal from shape renders to minimize distillation cost. This allows our weights to be computed for high resolution meshes in under a minute, in contrast to potentially hours for both classical and neural methods. We preserve and extend properties of classical methods through feature space constraints and locality weighting. Our field representation allows for automatic detection of semantic symmetries, which we use to produce symmetry-preserving deformations. We show a proof-of-concept application which can produce deformations for meshes up to 1 million faces in real-time on a consumer-grade machine.
https://arxiv.org/abs/2601.12527
Academic Papers
svg
ec53d2a250b9bf6c3688845ec60ab1faaabd3d824ea39ed7a4567c8b09f44140
2026-01-21T00:00:00-05:00
How to Get Close to the Median Shape
arXiv:2601.12529v1 Announce Type: new Abstract: $\renewcommand{\Re}{\mathbb{R}}\newcommand{\eps}{{\varepsilon}}\newcommand{\poly}{\mathrm{poly}} $In this paper, we study the problem of $L_1$-fitting a shape to a set of $n$ points in $\Re^d$ (where $d$ is a fixed constant), where the target is to minimize the sum of distances of the points to the shape, or the sum of squared distances. We present a general technique for computing a $(1 + \eps ) $-approximation for such a problem, with running time $O(n + \poly( \log n, 1/\eps))$, where $\poly(\log n, 1/\eps)$ is a polynomial of constant degree of $\log n$ and $1/\eps$ (the power of the polynomial is a function of $d$). The new algorithm runs in linear time for a fixed $\eps>0$, and is the first subquadratic algorithm for this problem. Applications of the algorithm include best fitting either a circle, a sphere, or a cylinder to a set of points when minimizing the sum of distances (or squared distances) to the respective shape.
https://arxiv.org/abs/2601.12529
Academic Papers
svg
9eb46ce5c71437037eea5e3b53f8251bce64cfc66292ea964b0fa30747545af5
2026-01-21T00:00:00-05:00
XRefine: Attention-Guided Keypoint Match Refinement
arXiv:2601.12530v1 Announce Type: new Abstract: Sparse keypoint matching is crucial for 3D vision tasks, yet current keypoint detectors often produce spatially inaccurate matches. Existing refinement methods mitigate this issue through alignment of matched keypoint locations, but they are typically detector-specific, requiring retraining for each keypoint detector. We introduce XRefine, a novel, detector-agnostic approach for sub-pixel keypoint refinement that operates solely on image patches centered at matched keypoints. Our cross-attention-based architecture learns to predict refined keypoint coordinates without relying on internal detector representations, enabling generalization across detectors. Furthermore, XRefine can be extended to handle multi-view feature tracks. Experiments on MegaDepth, KITTI, and ScanNet demonstrate that the approach consistently improves geometric estimation accuracy, achieving superior performance compared to existing refinement methods while maintaining runtime efficiency. Our code and trained models can be found at https://github.com/boschresearch/xrefine.
https://arxiv.org/abs/2601.12530
Academic Papers
svg
363a46ae68ca0a5ca4e064a3cbe9135e1b1cda5170e5b82529101b8c9f3f62ef
2026-01-21T00:00:00-05:00
BirdsEye-RU: A Dataset For Detecting Faces from Overhead Images
arXiv:2601.12533v1 Announce Type: new Abstract: Detecting faces in overhead images remains a significant challenge due to extreme scale variations and environmental clutter. To address this, we created the BirdsEye-RU dataset, a comprehensive collection of 2,978 images containing over eight thousand annotated faces. This dataset is specifically designed to capture small and distant faces across diverse environments, containing both drone images and smartphone-captured images from high altitude. We present a detailed description of the BirdsEye-RU dataset in this paper. We made our dataset freely available to the public, and it can be accessed at https://www.kaggle.com/datasets/mdahanafarifkhan/birdseye-ru.
https://arxiv.org/abs/2601.12533
Academic Papers
svg
0461489a13bc072ce24a7b12414677074920aea49f53058c48d14837dc56098f
2026-01-21T00:00:00-05:00
Encoding Emotion Through Self-Supervised Eye Movement Reconstruction
arXiv:2601.12534v1 Announce Type: new Abstract: The relationship between emotional expression and eye movement is well-documented, with literature establishing gaze patterns are reliable indicators of emotion. However, most studies utilize specialized, high-resolution eye-tracking equipment, limiting the potential reach of findings. We investigate how eye movement can be used to predict multimodal markers of emotional expression from naturalistic, low-resolution videos. We utilize a collection of video interviews from the USC Shoah Foundation's Visual History Archive with Holocaust survivors as they recount their experiences in the Auschwitz concentration camp. Inspired by pretraining methods on language models, we develop a novel gaze detection model that uses self-supervised eye movement reconstruction that can effectively leverage unlabeled video. We use this model's encoder embeddings to fine-tune models on two downstream tasks related to emotional expression. The first is aligning eye movement with directional emotion estimates from speech. The second task is using eye gaze as a predictor of three momentary manifestations of emotional behaviors: laughing, crying/sobbing, and sighing. We find our new model is predictive of emotion outcomes and observe a positive correlation between pretraining performance and emotion processing performance for both experiments. We conclude self-supervised eye movement reconstruction is an effective method for encoding the affective signal they carry.
https://arxiv.org/abs/2601.12534
Academic Papers
svg
93bcd0d55127ea001fe7a271b0d844457ca6d16d5d8bb4edb251bd5e49dc665c
2026-01-21T00:00:00-05:00
Improving Low-Resource Machine Translation via Round-Trip Reinforcement Learning
arXiv:2601.12535v1 Announce Type: new Abstract: Low-resource machine translation (MT) has gained increasing attention as parallel data from low-resource language communities is collected, but many potential methods for improving low-resource MT remain unexplored. We investigate a self-supervised reinforcement-learning-based fine-tuning for translation in low-resource settings using round-trip bootstrapping with the No Language Left Behind (NLLB) family of models. Our approach translates English into a target low-resource language and then back into English, using a combination of chrF++ and BLEU as the reward function on the reconstructed English sentences. Using the NLLB-MD dataset, we evaluate both the 600M and 1.3B parameter NLLB models and observe consistent improvements for the following languages: Central Aymara, Friulian, Wolof and Russian. Qualitative inspection of translation outputs indicates increased fluency and semantic fidelity. We argue that our method can further benefit from scale, enabling models to increasingly leverage their pretrained knowledge and continue self-improving.
https://arxiv.org/abs/2601.12535
Academic Papers
svg
79d1a982cff53919b7cd0abf8bf774866c669d8287fc4629ab5dbff821568c60
2026-01-21T00:00:00-05:00
Agentic Reasoning for Large Language Models
arXiv:2601.12538v1 Announce Type: new Abstract: Reasoning is a fundamental cognitive process underlying inference, problem-solving, and decision-making. While large language models (LLMs) demonstrate strong reasoning capabilities in closed-world settings, they struggle in open-ended and dynamic environments. Agentic reasoning marks a paradigm shift by reframing LLMs as autonomous agents that plan, act, and learn through continual interaction. In this survey, we organize agentic reasoning along three complementary dimensions. First, we characterize environmental dynamics through three layers: foundational agentic reasoning, which establishes core single-agent capabilities including planning, tool use, and search in stable environments; self-evolving agentic reasoning, which studies how agents refine these capabilities through feedback, memory, and adaptation; and collective multi-agent reasoning, which extends intelligence to collaborative settings involving coordination, knowledge sharing, and shared goals. Across these layers, we distinguish in-context reasoning, which scales test-time interaction through structured orchestration, from post-training reasoning, which optimizes behaviors via reinforcement learning and supervised fine-tuning. We further review representative agentic reasoning frameworks across real-world applications and benchmarks, including science, robotics, healthcare, autonomous research, and mathematics. This survey synthesizes agentic reasoning methods into a unified roadmap bridging thought and action, and outlines open challenges and future directions, including personalization, long-horizon interaction, world modeling, scalable multi-agent training, and governance for real-world deployment.
https://arxiv.org/abs/2601.12538
Academic Papers
svg
41253152525f1c4008b8353ff9a39cb0a3368ec7b1075c11cb447d7997a93f2a
2026-01-21T00:00:00-05:00
MemeLens: Multilingual Multitask VLMs for Memes
arXiv:2601.12539v1 Announce Type: new Abstract: Memes are a dominant medium for online communication and manipulation because meaning emerges from interactions between embedded text, imagery, and cultural context. Existing meme research is distributed across tasks (hate, misogyny, propaganda, sentiment, humour) and languages, which limits cross-domain generalization. To address this gap we propose MemeLens, a unified multilingual and multitask explanation-enhanced Vision Language Model (VLM) for meme understanding. We consolidate 38 public meme datasets, filter and map dataset-specific labels into a shared taxonomy of $20$ tasks spanning harm, targets, figurative/pragmatic intent, and affect. We present a comprehensive empirical analysis across modeling paradigms, task categories, and datasets. Our findings suggest that robust meme understanding requires multimodal training, exhibits substantial variation across semantic categories, and remains sensitive to over-specialization when models are fine-tuned on individual datasets rather than trained in a unified setting. We will make the experimental resources and datasets publicly available for the community.
https://arxiv.org/abs/2601.12539
Academic Papers
svg
a024e88344ddfd8221fa878e3c43352764a7351a83f0e8e40ad0b737b356dea7
2026-01-21T00:00:00-05:00
Rethinking the AI Scientist: Interactive Multi-Agent Workflows for Scientific Discovery
arXiv:2601.12542v1 Announce Type: new Abstract: Artificial intelligence systems for scientific discovery have demonstrated remarkable potential, yet existing approaches remain largely proprietary and operate in batch-processing modes requiring hours per research cycle, precluding real-time researcher guidance. This paper introduces Deep Research, a multi-agent system enabling interactive scientific investigation with turnaround times measured in minutes. The architecture comprises specialized agents for planning, data analysis, literature search, and novelty detection, unified through a persistent world state that maintains context across iterative research cycles. Two operational modes support different workflows: semi-autonomous mode with selective human checkpoints, and fully autonomous mode for extended investigations. Evaluation on the BixBench computational biology benchmark demonstrated state-of-the-art performance, achieving 48.8% accuracy on open response and 64.5% on multiple-choice evaluation, exceeding existing baselines by 14 to 26 percentage points. Analysis of architectural constraints, including open access literature limitations and challenges inherent to automated novelty assessment, informs practical deployment considerations for AI-assisted scientific workflows.
https://arxiv.org/abs/2601.12542
Academic Papers
svg
618d743067027fb1d9183aa6b41d0ff4aca1df8b1b8b0716a83494460656ca53
2026-01-21T00:00:00-05:00
Press Start to Charge: Videogaming the Online Centralized Charging Scheduling Problem
arXiv:2601.12543v1 Announce Type: new Abstract: We study the online centralized charging scheduling problem (OCCSP). In this problem, a central authority must decide, in real time, when to charge dynamically arriving electric vehicles (EVs), subject to capacity limits, with the objective of balancing load across a finite planning horizon. To solve the problem, we first gamify it; that is, we model it as a game where charging blocks are placed within temporal and capacity constraints on a grid. We design heuristic policies, train learning agents with expert demonstrations, and improve them using Dataset Aggregation (DAgger). From a theoretical standpoint, we show that gamification reduces model complexity and yields tighter generalization bounds than vector-based formulations. Experiments across multiple EV arrival patterns confirm that gamified learning enhances load balancing. In particular, the image-to-movement model trained with DAgger consistently outperforms heuristic baselines, vector-based approaches, and supervised learning agents, while also demonstrating robustness in sensitivity analyses. These operational gains translate into tangible economic value. In a real-world case study for the Greater Montr\'eal Area (Qu\'ebec, Canada) using utility cost data, the proposed methods lower system costs by tens of millions of dollars per year over the prevailing practice and show clear potential to delay costly grid upgrades.
https://arxiv.org/abs/2601.12543
Academic Papers
svg
89d829506e2896748649fa6a947870f0c4b987b493ba6786dd71b535183aceab
2026-01-21T00:00:00-05:00
Information Farming: From Berry Picking to Berry Growing
arXiv:2601.12544v1 Announce Type: new Abstract: The classic paradigms of Berry Picking and Information Foraging Theory have framed users as gatherers, opportunistically searching across distributed sources to satisfy evolving information needs. However, the rise of GenAI is driving a fundamental transformation in how people produce, structure, and reuse information - one that these paradigms no longer fully capture. This transformation is analogous to the Neolithic Revolution, when societies shifted from hunting and gathering to cultivation. Generative technologies empower users to "farm" information by planting seeds in the form of prompts, cultivating workflows over time, and harvesting richly structured, relevant yields within their own plots, rather than foraging across others people's patches. In this perspectives paper, we introduce the notion of Information Farming as a conceptual framework and argue that it represents a natural evolution in how people engage with information. Drawing on historical analogy and empirical evidence, we examine the benefits and opportunities of information farming, its implications for design and evaluation, and the accompanying risks posed by this transition. We hypothesize that as GenAI technologies proliferate, cultivating information will increasingly supplant transient, patch-based foraging as a dominant mode of engagement, marking a broader shift in human-information interaction and its study.
https://arxiv.org/abs/2601.12544
Academic Papers
svg
bcaed4ca5e99d83a70beaa4b3e1d03d608833c8fd5e3406782503f1b91591d15
2026-01-21T00:00:00-05:00
An Experimental Comparison of Sliding Mode and Immersion and Invariance Adaptive Controllers forPosition-feedback Tracking of a Simple Mechanical System with Friction
arXiv:2601.12545v1 Announce Type: new Abstract: The purpose of this paper is to illustrate, in an experimental facility consisting of a simple pendular device, the performance of a sliding mode adaptive position-feedback tracking controller of mechanical systems with friction reported in the literature. To put this experimental evidence in perspective, we compare the performance of the sliding mode scheme with the one obtained by an adaptive controller designed following the well-known immersion and invariance technique.
https://arxiv.org/abs/2601.12545
Academic Papers
svg
18226eaca8ab163db87e9ffc4538206fabd5ed9bfc06a389a09c68332e3998de
2026-01-21T00:00:00-05:00
How Clinicians Think and What AI Can Learn From It
arXiv:2601.12547v1 Announce Type: new Abstract: Most clinical AI systems operate as prediction engines -- producing labels or risk scores -- yet real clinical reasoning is a time-bounded, sequential control problem under uncertainty. Clinicians interleave information gathering with irreversible actions, guided by regret, constraints and patient values. We argue that the dominant computational substrate of clinician reasoning is not cardinal optimization but ordinal, non-compensatory decision-making: Clinicians frequently rely on fast-and-frugal, lexicographic heuristics (e.g., fast-and-frugal trees) that stop early after checking a small, fixed sequence of cues. We provide a normative rationale for why such algorithms are not merely bounded rationality shortcuts, but can be epistemically preferred in medicine. First, many clinical trade-offs are constructed through human judgment and are only weakly measurable on absolute scales; without strong measurement axioms, only orderings are invariant, motivating an ordinal-by-default stance. Second, preference and signal elicitation are structurally crude: The mapping from truth $\to$ perception $\to$ inference $\to$ recorded variables introduces layered noise, leaving a persistent uncertainty floor. When this 'crudeness' overwhelms the decision margin, plug-in expected-utility optimization becomes brittle (high flip probability under small perturbations), whereas robust dominance/filtering rules ($\epsilon$-dominance, maximin) stabilize decisions.Finally, we outline a clinician-aligned AI blueprint: Use rich models for beliefs and trajectories, but choose actions through robust ordinal rules; treat heuristics as the low-dimensional special case; and deploy AI as 'selective complexity' -- invoked mainly for tie-breaking when decisions are fragile and information has positive expected impact.
https://arxiv.org/abs/2601.12547
Academic Papers
svg
ce6514ec64a4286d2ada0e0e21b57e7f05fdfbec9c106080555a63ebc59f27f7
2026-01-21T00:00:00-05:00
Traffic Collisions: Temporal Patterns and Severity-Weighted Hotspot Analysis
arXiv:2601.12548v1 Announce Type: new Abstract: Understanding traffic collision patterns is of high importance for effective road safety planning in fast-growing urban environments. This study examines the temporal and spatial patterns of traffic collisions in Dubai, UAE, with a particular focus on collision severity. To this end, traffic collision records from November 2024 to June 2025 were analyzed to examine hourly, daily, and monthly variations in collision frequency and severity for both overall traffic collisions and pedestrian-related accidents. Temporal associations with severity were evaluated using chi-square tests and Cramer's V, while spatial patterns were analyzed using severity-weighted hotspot analysis based on the Getis-Ord Gi* statistic, complemented by inverse distance weighting (IDW) interpolation. The results show a clear temporal variation in overall collision frequency and severity, with higher collision frequencies during evening and nighttime periods with 44% higher probability of high-severity outcomes at night compared to the afternoon. On the other hand, pedestrian-related accidents showed a distinct temporal profile, characterized by higher occurrence during late-evening hours and relatively limited variation across days of the week and months. Spatial analysis identified statistically significant severity hotspots for overall collisions in the northern and northwestern parts of Dubai and along the Al Ain-Dubai Highway, while pedestrian severity hotspots were concentrated near industrial areas in the southwestern region. Several policy measures are proposed based on the findings including, reducing nighttime speed limits, enhancing automated enforcement, improving roadway lighting, and implementing pedestrian-focused treatments in statistically significant hotspots.
https://arxiv.org/abs/2601.12548
Academic Papers
svg
f1ea37c1d489348f264a5858260f04dcb22fdb2dc9428be595a58ff65a06efec
2026-01-21T00:00:00-05:00
Benchmarking Concept-Spilling Across Languages in LLMs
arXiv:2601.12549v1 Announce Type: new Abstract: Multilingual Large Language Models (LLMs) exhibit remarkable cross-lingual abilities, yet often exhibit a systematic bias toward the representations from other languages, resulting in semantic interference when generating content in non-English languages$-$a phenomenon we define as language spilling. This paper presents a novel comparative framework for evaluating multilingual semantic robustness by systematically measuring how models handle polysemous words across languages. Our methodology provides a relative measure of model performance: when required to generate exactly five meanings, both strong and weak models may resort to meanings from dominant languages, but semantically stronger models do so later in the generation sequence, producing more true meanings from the target language before failing, while weaker models resort to dominant-language meanings earlier in the sequence. We evaluate a diverse set of open and closed multilingual LLMs using a structured meaning generation task across nine languages, employing a carefully curated benchmark of 100 high-polysemy English words. Our findings reveal significant variation in semantic robustness across both models and languages, providing a principled ranking system for model comparison without requiring definitive causal attribution of error sources. We contribute both a scalable comparative benchmark for multilingual semantic evaluation and a rigorous validation pipeline$-$critical tools for developing more linguistically balanced AI systems.
https://arxiv.org/abs/2601.12549
Academic Papers
svg
0842800cbeb9eaa0f4705395bb664327a52c4be8ef08eea80f457d7de4655dc8
2026-01-21T00:00:00-05:00
PISE: Physics-Anchored Semantically-Enhanced Deep Computational Ghost Imaging for Robust Low-Bandwidth Machine Perception
arXiv:2601.12551v1 Announce Type: new Abstract: We propose PISE, a physics-informed deep ghost imaging framework for low-bandwidth edge perception. By combining adjoint operator initialization with semantic guidance, PISE improves classification accuracy by 2.57% and reduces variance by 9x at 5% sampling.
https://arxiv.org/abs/2601.12551
Academic Papers
svg
0e8a214e7eeac753418a83a29385ab252049fde5748eb30257d031cbddc9fc51
2026-01-21T00:00:00-05:00
Evaluating Contextually Mediated Factual Recall in Multilingual Large Language Models
arXiv:2601.12555v1 Announce Type: new Abstract: Large language models (LLMs) can recall a wide range of factual knowledge across languages. However, existing factual recall evaluations primarily assess fact retrieval in isolation, where the queried entity is explicitly named and the fact is requested directly. In natural language use, facts are often accessed through context, where the relevant entity is introduced only indirectly. In this work, we study contextually mediated factual recall, asking whether LLMs can reliably retrieve factual knowledge when the target entity is embedded in a naturalistic context rather than queried explicitly, across languages. We construct controlled prompts that preserve the underlying fact while introducing referential mediation through contextual sentences. To disentangle contextual effects from name-specific associations, we further compare performance using synthetic names and real names across languages. Evaluating multiple model families in five languages, we find that contextual mediation consistently degrades factual recall, with substantial variation across relations. Larger models are more robust to contextual mediation, exhibiting a reduced performance gap relative to direct queries, while the effect of real names and name origin is mixed and unsystematic. These findings highlight a gap between isolated factual recall and context-dependent language understanding in multilingual LLMs.
https://arxiv.org/abs/2601.12555
Academic Papers
svg
6ef5d05e909b596bb3c7477f37feb3e6fa64ffec593173848c77669534c43c55
2026-01-21T00:00:00-05:00
Life, Machine Learning, and the Search for Habitability: Predicting Biosignature Fluxes for the Habitable Worlds Observatory
arXiv:2601.12557v1 Announce Type: new Abstract: Future direct-imaging flagship missions, such as NASA's Habitable Worlds Observatory (HWO), face critical decisions in prioritizing observations due to extremely stringent time and resource constraints. In this paper, we introduce two advanced machine-learning architectures tailored for predicting biosignature species fluxes from exoplanetary reflected-light spectra: a Bayesian Convolutional Neural Network (BCNN) and our novel model architecture, the Spectral Query Adaptive Transformer (SQuAT). The BCNN robustly quantifies both epistemic and aleatoric uncertainties, offering reliable predictions under diverse observational conditions, whereas SQuAT employs query-driven attention mechanisms to enhance interpretability by explicitly associating spectral features with specific biosignature species. We demonstrate that both models achieve comparably high predictive accuracy on an augmented dataset spanning a wide range of exoplanetary conditions, while highlighting their distinct advantages in uncertainty quantification and spectral interpretability. These capabilities position our methods as promising tools for accelerating target triage, optimizing observation schedules, and maximizing scientific return for upcoming flagship missions such as HWO.
https://arxiv.org/abs/2601.12557
Academic Papers
svg
aa0b82203f68a07d700fd9ac0adbcc49e3510a01a0c405a0f6a1d156b1703e02
2026-01-21T00:00:00-05:00
Automated Tool Support for Category-Partition Testing: Design Decisions, UI and Examples of Use
arXiv:2601.12559v1 Announce Type: new Abstract: Category-Partition is a functional testing technique that is based on the idea that the input domain of the system under test can be divided into sub-domains, with the assumption that inputs that belong to the same sub-domain trigger a similar behaviour and that therefore it is sufficient to select one input from each sub-domain. Category-Partition proceeds in several steps, from the identification of so-called categories and choices, possibly constrained, which are subsequently used to form test frames, i.e., combinations of choices, and eventually test cases. This paper reports on an ongoing attempt to automate as many of those steps as possible, with graphical-user interface tool support. Specifically, the user interface allows the user to specify parameters as well as so-called environment variables, further specify categories and choices with optional constraints. Choices are provided with precise specifications with operations specific to their types (e.g., Boolean, Integer, Real, String). Then, the tool automates the construction of test frames, which are combinations of choices, according to alternative selection criteria, and the identification of input values for parameters and environment variables for these test frames, thereby producing test cases. The paper illustrates the capabilities of the tool with the use of nine different case studies.
https://arxiv.org/abs/2601.12559
Academic Papers
svg
ddb7b597fdfbf7780a6e968714c1162fde6ef7cab1f8160414c50da5bf27a374
2026-01-21T00:00:00-05:00
Agentic Artificial Intelligence (AI): Architectures, Taxonomies, and Evaluation of Large Language Model Agents
arXiv:2601.12560v1 Announce Type: new Abstract: Artificial Intelligence is moving from models that only generate text to Agentic AI, where systems behave as autonomous entities that can perceive, reason, plan, and act. Large Language Models (LLMs) are no longer used only as passive knowledge engines but as cognitive controllers that combine memory, tool use, and feedback from their environment to pursue extended goals. This shift already supports the automation of complex workflows in software engineering, scientific discovery, and web navigation, yet the variety of emerging designs, from simple single loop agents to hierarchical multi agent systems, makes the landscape hard to navigate. In this paper, we investigate architectures and propose a unified taxonomy that breaks agents into Perception, Brain, Planning, Action, Tool Use, and Collaboration. We use this lens to describe the move from linear reasoning procedures to native inference time reasoning models, and the transition from fixed API calls to open standards like the Model Context Protocol (MCP) and Native Computer Use. We also group the environments in which these agents operate, including digital operating systems, embodied robotics, and other specialized domains, and we review current evaluation practices. Finally, we highlight open challenges, such as hallucination in action, infinite loops, and prompt injection, and outline future research directions toward more robust and reliable autonomous systems.
https://arxiv.org/abs/2601.12560
Academic Papers
svg
1d05a9e2ba6d4cde32cc1235fe950063155eb42717973af380183488585edf10
2026-01-21T00:00:00-05:00
VR ProfiLens: User Profiling Risks in Consumer Virtual Reality Apps
arXiv:2601.12563v1 Announce Type: new Abstract: Virtual reality (VR) platforms and apps collect user sensor data, including motion, facial, eye, and hand data, in abstracted form. These data may expose users to unique privacy risks without their knowledge or meaningful awareness, yet the extent of these risks remains understudied. To address this gap, we propose VR ProfiLens, a framework to study user profiling based on VR sensor data and the resulting privacy risks across consumer VR apps. To systematically study this problem, we first develop a taxonomy rooted in the CCPA definition of personal information and expand it by sensor, app, and threat contexts to identify user attributes at risk. Then, we conduct a user study in which we collect VR sensor data from four sensor groups from real users interacting with 10 popular consumer VR apps, followed by a survey. We design and apply an analysis pipeline to demonstrate the feasibility of inferring user attributes using these data. Our results show that sensitive personal information can be inferred with moderately high to high risk (up to 90% F1 score) from abstracted sensor data. Through feature analysis, we further identify correlations among app groups and sensor groups in inferring user attributes. Our findings highlight risks to users, including privacy loss, tracking, targeted advertising, and safety threats. Finally, we discuss design implications and regulatory recommendations to enhance transparency and better protect users' privacy in VR.
https://arxiv.org/abs/2601.12563
Academic Papers
svg
6e1d3eaf6243a23b9653ca67d35db0bb01e209f75305a3262e02fa056d69365a
2026-01-21T00:00:00-05:00
Camera Pose Revisited
arXiv:2601.12567v1 Announce Type: new Abstract: Estimating the position and orientation of a camera with respect to an observed scene is one of the central problems in computer vision, particularly in the context of camera calibration and multi-sensor systems. This paper addresses the planar Perspective--$n$--Point problem, with special emphasis on the initial estimation of the pose of a calibration object. As a solution, we propose the \texttt{PnP-ProCay78} algorithm, which combines the classical quadratic formulation of the reconstruction error with a Cayley parameterization of rotations and least-squares optimization. The key component of the method is a deterministic selection of starting points based on an analysis of the reconstruction error for two canonical vectors, allowing costly solution-space search procedures to be avoided. Experimental validation is performed using data acquired also from high-resolution RGB cameras and very low-resolution thermal cameras in an integrated RGB--IR setup. The results demonstrate that the proposed algorithm achieves practically the same projection accuracy as optimal \texttt{SQPnP} and slightly higher than \texttt{IPPE}, both prominent \texttt{PnP-OpenCV} procedures. However, \texttt{PnP-ProCay78} maintains a significantly simpler algorithmic structure. Moreover, the analysis of optimization trajectories in Cayley space provides an intuitive insight into the convergence process, making the method attractive also from a didactic perspective. Unlike existing PnP solvers, the proposed \texttt{PnP-ProCay78} algorithm combines projection error minimization with an analytically eliminated reconstruction-error surrogate for translation, yielding a hybrid cost formulation that is both geometrically transparent and computationally efficient.
https://arxiv.org/abs/2601.12567
Academic Papers
svg
ee95c338b8b9b0a66d7ed7b4eb8cddc6c962edfd7d5b479fd50a4f067beba1b8
2026-01-21T00:00:00-05:00
The Origin of the Inaccessible Game
arXiv:2601.12576v1 Announce Type: new Abstract: The inaccessible game is an information-geometric framework where dynamics of information loss emerge from maximum entropy production under marginal-entropy conservation. We study the game's starting state, the origin. Classical Shannon entropy forbids a representation with zero joint entropy and positive marginal entropies: non-negativity of conditional entropy rules this out. Replacing Shannon with von Neumann entropy within the Baez Fritz Leinster Parzygnat categorical framework removes this obstruction and admits a well-defined origin: a globally pure state with maximally mixed marginals, selected up to local-unitary equivalence. At this LME origin, marginal-entropy conservation becomes a second-order geometric condition. Because the marginal-entropy sum is saturated termwise, the constraint gradient vanishes and first-order tangency is vacuous; admissible directions are selected by the kernel of the constraint Hessian, characterised by the marginal-preserving tangent space. We derive the constrained gradient flow in the matrix exponential family and show that, as the origin is approached, the affine time parameter degenerates. This motivates an axiomatically distinguished reparametrisation, entropy time $t$, defined by $dH/dt = c$ for fixed constant $c>0$. In this parametrisation, the infinite affine-time approach to the boundary maps to a finite entropy-time interval. The constrained dynamics split into a symmetric dissipative component realising SEA and a reversible component represented as unitary evolution. As in the classical game, marginal-entropy conservation is equivalent to conservation of a sum of local modular Hamiltonian expectations, a state-dependent "modular energy"; in Gibbs regimes where local modular generators become approximately parameter-invariant, this reduces to familiar fixed-energy constraints from nonequilibrium thermodynamics.
https://arxiv.org/abs/2601.12576
Academic Papers
svg
397efaebd43d2b85b42e7a10e50e25186dfcdf76386de5d9cc2e7f69abbdfc1f
2026-01-21T00:00:00-05:00
Semantic Fusion: Verifiable Alignment in Decentralized Multi-Agent Systems
arXiv:2601.12580v1 Announce Type: new Abstract: We present Semantic Fusion (SF), a formal framework for decentralized semantic coordination in multi-agent systems. SF allows agents to operate over scoped views of shared memory, propose structured updates, and maintain global coherence through local ontology-based validation and refresh without centralized control or explicit message passing. The central theoretical result is a bisimulation theorem showing that each agent's local execution is behaviorally equivalent to its projection of the global semantics, in both deterministic and probabilistic settings. This enables safety, liveness, and temporal properties to be verified locally and soundly lifted to the full system. SF supports agents whose update proposals vary across invocations, including those generated by learned or heuristic components, provided updates pass semantic validation before integration. We establish deterministic and probabilistic guarantees ensuring semantic alignment under asynchronous or degraded communication. To validate the model operationally, we implement a lightweight reference architecture that instantiates its core mechanisms. A 250-agent simulation evaluates these properties across over 11,000 validated updates, demonstrating convergence under probabilistic refresh, bounded communication, and resilience to agent failure. Together, these results show that Semantic Fusion can provide a formal and scalable basis for verifiable autonomy in decentralized systems.
https://arxiv.org/abs/2601.12580
Academic Papers
svg
79e5a57c731dd28fe40ccbf48cbe36cc63affe48344f1d7bd5e9f52b44c172a4
2026-01-21T00:00:00-05:00
Do MLLMs See What We See? Analyzing Visualization Literacy Barriers in AI Systems
arXiv:2601.12585v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) are increasingly used to interpret visualizations, yet little is known about why they fail. We present the first systematic analysis of barriers to visualization literacy in MLLMs. Using the regenerated Visualization Literacy Assessment Test (reVLAT) benchmark with synthetic data, we open-coded 309 erroneous responses from four state-of-the-art models with a barrier-centric strategy adapted from human visualization literacy research. Our analysis yields a taxonomy of MLLM failures, revealing two machine-specific barriers that extend prior human-participation frameworks. Results show that models perform well on simple charts but struggle with color-intensive, segment-based visualizations, often failing to form consistent comparative reasoning. Our findings inform future evaluation and design of reliable AI-driven visualization assistants.
https://arxiv.org/abs/2601.12585
Academic Papers
svg
ee9c0a1ae2892ce122325a8a7b9749477c08dc70a181ff0404a8bf13e0623f47
2026-01-21T00:00:00-05:00
Conversing with Objects toward Fluid Human and Artificial Identities during Life Transitions
arXiv:2601.12589v1 Announce Type: new Abstract: People's identities change during life transitions, e.g., studying abroad. They bring everyday objects that embody memories and reflect their identities during such moves. To assist in these transitions, we ask how people's human identities could be influenced by their objects through an artificial agent. This paper presents an exploratory research-through-design study around how people undergoing life transitions experience conversing with their everyday objects through a chatbot. Drawing on a two-week field deployment and interviews with 12 participants, we contribute (1) a conceptualization of 'trans-embodiment' describing the asynchronous imagination of object and human identities on the chatbot, (2) empirical evidence of the resulting emotional and reflective experiences, and (3) three types of object identities for designing conversational agents that role-play objects. Our contributions sum up to triangulating human-agent-object identity as trans-embodiment in supporting life transitions.
https://arxiv.org/abs/2601.12589
Academic Papers
svg
47ddaf2b5510321234bf4634f4005150b732f371c7d1f81eed94509d250d9b46
2026-01-21T00:00:00-05:00
SmoothCLAP: Soft-Target Enhanced Contrastive Language\--Audio Pretraining for Affective Computing
arXiv:2601.12591v1 Announce Type: new Abstract: The ambiguity of human emotions poses several challenges for machine learning models, as they often overlap and lack clear delineating boundaries. Contrastive language-audio pretraining (CLAP) has emerged as a key technique for generalisable emotion recognition. However, as conventional CLAP enforces a strict one-to-one alignment between paired audio-text samples, it overlooks intra-modal similarity and treats all non-matching pairs as equally negative. This conflicts with the fuzzy boundaries between different emotions. To address this limitation, we propose SmoothCLAP, which introduces softened targets derived from intra-modal similarity and paralinguistic features. By combining these softened targets with conventional contrastive supervision, SmoothCLAP learns embeddings that respect graded emotional relationships, while retaining the same inference pipeline as CLAP. Experiments on eight affective computing tasks across English and German demonstrate that SmoothCLAP is consistently achieving superior performance. Our results highlight that leveraging soft supervision is a promising strategy for building emotion-aware audio-text models.
https://arxiv.org/abs/2601.12591
Academic Papers
svg
cbcafd1b20b58146bc9e606c230210cef17983e257f2d893f95524bf174e49c1
2026-01-21T00:00:00-05:00
Blurred Drinker Paradoxes and Blurred Choice Axioms: Constructive Reverse Mathematics of the Downward L\"owenheim-Skolem Theorem
arXiv:2601.12592v1 Announce Type: new Abstract: In the setting of constructive reverse mathematics, we analyse the downward L\"owenheim-Skolem (DLS) theorem of first-order logic, stating that every infinite model has a countable elementary submodel. Refining the well-known equivalence of the DLS theorem to the axiom of dependent choice (DC) over classical base theories, our constructive approach allows for several finer logical decompositions: Just assuming countable choice (CC), the DLS theorem is equivalent to the conjunction of DC with a newly identified fragment of the excluded middle (LEM) that we call the blurred drinker paradox (BDP). Further without CC, the DLS theorem is equivalent to the conjunction of BDP with similarly blurred weakenings of DC and CC. Independently of their connection with the DLS theorem, we also study BDP and the blurred choice axioms on their own, for instance by showing that BDP is LEM without a contribution of Markov's principle and that blurred DC is DC without a contribution of CC. The paper is hyperlinked with an accompanying Coq development.
https://arxiv.org/abs/2601.12592
Academic Papers
svg
bbed39804f2e862e33c1803654a033daebe195fe636aa1042471cdd492ef2ffb
2026-01-21T00:00:00-05:00
Abusing the Internet of Medical Things: Evaluating Threat Models and Forensic Readiness for Multi-Vector Attacks on Connected Healthcare Devices
arXiv:2601.12593v1 Announce Type: new Abstract: Individuals experiencing interpersonal violence (IPV), who depend on medical devices, represent a uniquely vulnerable population as healthcare technologies become increasingly connected. Despite rapid growth in MedTech innovation and "health-at-home" ecosystems, the intersection of MedTech cybersecurity and technology-facilitated abuse remains critically under-examined. IPV survivors who rely on therapeutic devices encounter a qualitatively different threat environment from the external, technically sophisticated adversaries typically modeled in MedTech cybersecurity research. We address this gap through two complementary methods: (1) the development of hazard-integrated threat models that fuse Cyber physical system security modeling with tech-abuse frameworks, and (2) an immersive simulation with practitioners, deploying a live version of our model, identifying gaps in digital forensic practice. Our hazard-integrated CIA threat models map exploits to acute and chronic biological effects, uncovering (i) Integrity attack pathways that facilitate "Medical gaslighting" and "Munchausen-by-IoMT", (ii) Availability attacks that create life-critical and sub-acute harms (glycaemic emergencies, blindness, mood destabilization), and (iii) Confidentiality threats arising from MedTech advertisements (geolocation tracking from BLE broadcasts). Our simulation demonstrates that these attack surfaces are unlikely to be detected in practice: participants overlooked MedTech, misclassified reproductive and assistive technologies, and lacked awareness of BLE broadcast artifacts. Our findings show that MedTech cybersecurity in IPV contexts requires integrated threat modeling and improved forensic capabilities for detecting, preserving and interpreting harms arising from compromised patient-technology ecosystems.
https://arxiv.org/abs/2601.12593
Academic Papers
svg
1dde33618b6314b8bface96ce616196d125f74aabc00dc675cdda84a08b5f81f
2026-01-21T00:00:00-05:00
Dissecting Linear Recurrent Models: How Different Gating Strategies Drive Selectivity and Generalization
arXiv:2601.12598v1 Announce Type: new Abstract: Linear recurrent neural networks have emerged as efficient alternatives to the original Transformer's softmax attention mechanism, thanks to their highly parallelizable training and constant memory and computation requirements at inference. Iterative refinements of these models have introduced an increasing number of architectural mechanisms, leading to increased complexity and computational costs. Nevertheless, systematic direct comparisons among these models remain limited. Existing benchmark tasks are either too simplistic to reveal substantial differences or excessively resource-intensive for experimentation. In this work, we propose a refined taxonomy of linear recurrent models and introduce SelectivBench, a set of lightweight and customizable synthetic benchmark tasks for systematically evaluating sequence models. SelectivBench specifically evaluates selectivity in sequence models at small to medium scale, such as the capacity to focus on relevant inputs while ignoring context-based distractors. It employs rule-based grammars to generate sequences with adjustable complexity, incorporating irregular gaps that intentionally violate transition rules. Evaluations of linear recurrent models on SelectivBench reveal performance patterns consistent with results from large-scale language tasks. Our analysis clarifies the roles of essential architectural features: gating and rapid forgetting mechanisms facilitate recall, in-state channel mixing is unnecessary for selectivity, but critical for generalization, and softmax attention remains dominant due to its memory capacity scaling with sequence length. Our benchmark enables targeted, efficient exploration of linear recurrent models and provides a controlled setting for studying behaviors observed in large-scale evaluations. Code is available at https://github.com/symseqbench/selectivbench
https://arxiv.org/abs/2601.12598
Academic Papers
svg
dcb8b24e49551d01e0048bbc4e97922e3bdc8f196c1deb6dc747d7bc9d5332eb
2026-01-21T00:00:00-05:00
SSVD-O: Parameter-Efficient Fine-Tuning with Structured SVD for Speech Recognition
arXiv:2601.12600v1 Announce Type: new Abstract: Parameter-efficient fine-tuning (PEFT) is a scalable approach for adapting large speech foundation models to new domains. While methods such as LoRA and its state-of-the-art variants reduce adaptation costs, they typically allocate parameters uniformly across model subspaces, which limits their efficiency and scalability in speech applications. Building on our prior work, this paper introduces SSVD-Outer (SSVD-O), an extension of the structured SVD-guided (SSVD) fine-tuning method. SSVD-O combines input acoustic feature space-associated inner transformations with output semantic feature space-associated outer transformations to enable scalable and balanced adaptation. We conduct the first systematic analysis of parameter budget allocation across model subspaces in PEFT for automatic speech recognition (ASR), and investigate the trade-off between learning and forgetting under constrained resources. SSVD-O is benchmarked against LoRA, DoRA, PiSSA, and SSVD on domain-shifted ASR tasks, including child speech and regional accents, across model scales from 0.1B to 2B within the ESPnet framework. Experimental results show that SSVD-O consistently narrows the performance gap to full fine-tuning while improving generalization and mitigating catastrophic forgetting.
https://arxiv.org/abs/2601.12600
Academic Papers
svg
8d16f445b4a42d7e6dceaf3f2ec4bde0fa799203bdb8f421c304521ab6a2f879
2026-01-21T00:00:00-05:00
Beyond Softmax and Entropy: Improving Convergence Guarantees of Policy Gradients by f-SoftArgmax Parameterization with Coupled Regularization
arXiv:2601.12604v1 Announce Type: new Abstract: Policy gradient methods are known to be highly sensitive to the choice of policy parameterization. In particular, the widely used softmax parameterization can induce ill-conditioned optimization landscapes and lead to exponentially slow convergence. Although this can be mitigated by preconditioning, this solution is often computationally expensive. Instead, we propose replacing the softmax with an alternative family of policy parameterizations based on the generalized f-softargmax. We further advocate coupling this parameterization with a regularizer induced by the same f-divergence, which improves the optimization landscape and ensures that the resulting regularized objective satisfies a Polyak-Lojasiewicz inequality. Leveraging this structure, we establish the first explicit non-asymptotic last-iterate convergence guarantees for stochastic policy gradient methods for finite MDPs without any form of preconditioning. We also derive sample-complexity bounds for the unregularized problem and show that f-PG, with Tsallis divergences achieves polynomial sample complexity in contrast to the exponential complexity incurred by the standard softmax parameterization.
https://arxiv.org/abs/2601.12604
Academic Papers
svg
27518db631ef2440cf720b9f9936dcabb0bfef9d0527d5b8a007a99431801435
2026-01-21T00:00:00-05:00
Explicit Almost-Optimal $\varepsilon$-Balanced Codes via Free Expander Walks
arXiv:2601.12606v1 Announce Type: new Abstract: We study the problem of constructing explicit codes whose rate and distance match the Gilbert-Varshamov bound in the low-rate, high-distance regime. In 2017, Ta-Shma gave an explicit family of codes where every pair of codewords has relative distance $\frac{1-\varepsilon}{2}$, with rate $\Omega(\varepsilon^{2+o(1)})$, matching the Gilbert-Varshamov bound up to a factor of $\varepsilon^{o(1)}$. Ta-Shma's construction was based on starting with a good code and amplifying its bias with walks arising from the $s$-wide-replacement product. In this work, we give an arguably simpler almost-optimal construction, based on what we call free expander walks: ordinary expander walks where each step is taken on a distinct expander from a carefully chosen sequence. This sequence of expanders is derived from the construction of near-$X$-Ramanujan graphs due to O'Donnell and Wu.
https://arxiv.org/abs/2601.12606
Academic Papers
svg
db84f7d7da4f2be1ebacd56aa0004331255fdf27e4d0b8884cfdf4e49be2d2a4
2026-01-21T00:00:00-05:00
A Cloud-based Multi-Agentic Workflow for Science
arXiv:2601.12607v1 Announce Type: new Abstract: As Large Language Models (LLMs) become ubiquitous across various scientific domains, their lack of ability to perform complex tasks like running simulations or to make complex decisions limits their utility. LLM-based agents bridge this gap due to their ability to call external resources and tools and thus are now rapidly gaining popularity. However, coming up with a workflow that can balance the models, cloud providers, and external resources is very challenging, making implementing an agentic system more of a hindrance than a help. In this work, we present a domain-agnostic, model-independent workflow for an agentic framework that can act as a scientific assistant while being run entirely on cloud. Built with a supervisor agent marshaling an array of agents with individual capabilities, our framework brings together straightforward tasks like literature review and data analysis with more complex ones like simulation runs. We describe the framework here in full, including a proof-of-concept system we built to accelerate the study of Catalysts, which is highly important in the field of Chemistry and Material Science. We report the cost to operate and use this framework, including the breakdown of the cost by services use. We also evaluate our system on a custom-curated synthetic benchmark and a popular Chemistry benchmark, and also perform expert validation of the system. The results show that our system is able to route the task to the correct agent 90% of the time and successfully complete the assigned task 97.5% of the time for the synthetic tasks and 91% of the time for real-world tasks, while still achieving better or comparable accuracy to most frontier models, showing that this is a viable framework for other scientific domains to replicate.
https://arxiv.org/abs/2601.12607
Academic Papers
svg
76f40613860b01ff83f2b6a65cf58b0963369b80bfbda2ce0e156dfc8be50535
2026-01-21T00:00:00-05:00
HERMES: A Unified Open-Source Framework for Realtime Multimodal Physiological Sensing, Edge AI, and Intervention in Closed-Loop Smart Healthcare Applications
arXiv:2601.12610v1 Announce Type: new Abstract: Intelligent assistive technologies are increasingly recognized as critical daily-use enablers for people with disabilities and age-related functional decline. Longitudinal studies, curation of quality datasets, live monitoring in activities of daily living, and intelligent intervention devices, share the largely unsolved need in reliable high-throughput multimodal sensing and processing. Streaming large heterogeneous data from distributed sensors, historically closed-source environments, and limited prior works on realtime closed-loop AI methodologies, inhibit such applications. To accelerate the emergence of clinical deployments, we deliver HERMES - an open-source high-performance Python framework for continuous multimodal sensing and AI processing at the edge. It enables synchronized data collection, and realtime streaming inference with user PyTorch models, on commodity computing devices. HERMES is applicable to fixed-lab and free-living environments, of distributed commercial and custom sensors. It is the first work to offer a holistic methodology that bridges cross-disciplinary gaps in real-world implementation strategies, and guides downstream AI model development. Its application on the closed-loop intelligent prosthesis use case illustrates the process of suitable AI model development from the generated constraints and trade-offs. Validation on the use case, with 4 synchronized hosts cooperatively capturing 18 wearable and off-body modalities, demonstrates performance and relevance of HERMES to the trajectory of the intelligent healthcare domain.
https://arxiv.org/abs/2601.12610
Academic Papers
svg
251f4f2f74b1394da0abec744b271b7e30c3b7be1de2e5a42d1e302bdba82030
2026-01-21T00:00:00-05:00
What Trace Powers Reveal About Log-Determinants: Closed-Form Estimators, Certificates, and Failure Modes
arXiv:2601.12612v1 Announce Type: new Abstract: Computing $\log\det(A)$ for large symmetric positive definite matrices arises in Gaussian process inference and Bayesian model comparison. Standard methods combine matrix-vector products with polynomial approximations. We study a different model: access to trace powers $p_k = \tr(A^k)$, natural when matrix powers are available. Classical moment-based approximations Taylor-expand $\log(\lambda)$ around the arithmetic mean. This requires $|\lambda - \AM| 4$. We work instead with the moment-generating function $M(t) = \E[X^t]$ for normalized eigenvalues $X = \lambda/\AM$. Since $M'(0) = \E[\log X]$, the log-determinant becomes $\log\det(A) = n(\log \AM + M'(0))$ -- the problem reduces to estimating a derivative at $t = 0$. Trace powers give $M(k)$ at positive integers, but interpolating $M(t)$ directly is ill-conditioned due to exponential growth. The transform $K(t) = \log M(t)$ compresses this range. Normalization by $\AM$ ensures $K(0) = K(1) = 0$. With these anchors fixed, we interpolate $K$ through $m+1$ consecutive integers and differentiate to estimate $K'(0)$. However, this local interpolation cannot capture arbitrary spectral features. We prove a fundamental limit: no continuous estimator using finitely many positive moments can be uniformly accurate over unbounded conditioning. Positive moments downweight the spectral tail; $K'(0) = \E[\log X]$ is tail-sensitive. This motivates guaranteed bounds. From the same traces we derive upper bounds on $(\det A)^{1/n}$. Given a spectral floor $r \leq \lambda_{\min}$, we obtain moment-constrained lower bounds, yielding a provable interval for $\log\det(A)$. A gap diagnostic indicates when to trust the point estimate and when to report bounds. All estimators and bounds cost $O(m)$, independent of $n$. For $m \in \{4, \ldots, 8\}$, this is effectively constant time.
https://arxiv.org/abs/2601.12612
Academic Papers
svg
99d408edd76e7cc897460490ea401998e340feae99217725703e16185e2526ca
2026-01-21T00:00:00-05:00
Allocating Corrective Control to Mitigate Multi-agent Safety Violations Under Private Preferences
arXiv:2601.12616v1 Announce Type: new Abstract: We propose a novel framework that computes the corrective control efforts to ensure joint safety in multi-agent dynamical systems. This framework efficiently distributes the required corrective effort without revealing individual agents' private preferences. Our framework integrates high-order control barrier functions (HOCBFs), which enforce safety constraints with formal guarantees of safety for complex dynamical systems, with a privacy-preserving resource allocation mechanism based on the progressive second price (PSP) auction. When a joint safety constraint is violated, agents iteratively bid on new corrective efforts via 'avoidance credits' rather than explicitly solving for feasible corrective efforts that remove the safety violation. The resulting correction, determined via a second price payment rule, coincides with the socially optimal safe distribution of corrective actions. Critically, the bidding process achieves this optimal allocation efficiently and without revealing private preferences of individual agents. We demonstrate this method through multi-robot hardware experiments on the Robotarium platform.
https://arxiv.org/abs/2601.12616
Academic Papers
svg
59edb868f6f7606e56b2e0d778393a2d73f66f72821494054774197c9a9fdfe3
2026-01-21T00:00:00-05:00
Creating Disability Story Videos with Generative AI: Motivation, Expression, and Sharing
arXiv:2601.12617v1 Announce Type: new Abstract: Generative AI (GenAI) is both promising and challenging in supporting people with disabilities (PwDs) in creating stories about disability. GenAI can reduce barriers to media production and inspire the creativity of PwDs, but it may also introduce biases and imperfections that hinder its adoption for personal expression. In this research, we examine how nine PwD from a disability advocacy group used GenAI to create videos sharing their disability experiences. Grounded in digital storytelling theory, we explore the motivations, expression, and sharing of PwD-created GenAI story videos. We conclude with a framework of momentous depiction, which highlights four core affordances of GenAI that either facilitate or require improvements to better support disability storytelling: non-capturable depiction, identity concealment and representation, contextual realism and consistency, and emotional articulation. Based on this framework, we further discuss design implications for GenAI in relation to story completion, media formats, and corrective mechanisms.
https://arxiv.org/abs/2601.12617
Academic Papers
svg
424234f598b852d1f322febcc17619832550ada804e2a762f6038096c4d6152f
2026-01-21T00:00:00-05:00
Disagreement as Data: Reasoning Trace Analytics in Multi-Agent Systems
arXiv:2601.12618v1 Announce Type: new Abstract: Learning analytics researchers often analyze qualitative student data such as coded annotations or interview transcripts to understand learning processes. With the rise of generative AI, fully automated and human-AI workflows have emerged as promising methods for analysis. However, methodological standards to guide such workflows remain limited. In this study, we propose that reasoning traces generated by large language model (LLM) agents, especially within multi-agent systems, constitute a novel and rich form of process data to enhance interpretive practices in qualitative coding. We apply cosine similarity to LLM reasoning traces to systematically detect, quantify, and interpret disagreements among agents, reframing disagreement as a meaningful analytic signal. Analyzing nearly 10,000 instances of agent pairs coding human tutoring dialog segments, we show that LLM agents' semantic reasoning similarity robustly differentiates consensus from disagreement and correlates with human coding reliability. Qualitative analysis guided by this metric reveals nuanced instructional sub-functions within codes and opportunities for conceptual codebook refinement. By integrating quantitative similarity metrics with qualitative review, our method has the potential to improve and accelerate establishing inter-rater reliability during coding by surfacing interpretive ambiguity, especially when LLMs collaborate with humans. We discuss how reasoning-trace disagreements represent a valuable new class of analytic signals advancing methodological rigor and interpretive depth in educational research.
https://arxiv.org/abs/2601.12618
Academic Papers
svg
8d01f5c8152030f56743405e923b903f1f6a83cf6f9521e6c4963c13c4e62ccd
2026-01-21T00:00:00-05:00
Learning Deterministic Finite-State Machines from the Prefixes of a Single String is NP-Complete
arXiv:2601.12621v1 Announce Type: new Abstract: It is well known that computing a minimum DFA consistent with a given set of positive and negative examples is NP-hard. Previous work has identified conditions on the input sample under which the problem becomes tractable or remains hard. In this paper, we study the computational complexity of the case where the input sample is prefix-closed. This formulation is equivalent to computing a minimum Moore machine consistent with observations along its runs. We show that the problem is NP-hard to approximate when the sample set consists of all prefixes of binary strings. Furthermore, we show that the problem remains NP-hard as a decision problem even when the sample set consists of the prefixes of a single binary string. Our argument also extends to the corresponding problem for Mealy machines.
https://arxiv.org/abs/2601.12621
Academic Papers
svg
e0fe15574d02260ddde947d31887ff9352ad13c7e971c4763bdbe81adcf32f3f
2026-01-21T00:00:00-05:00
Towards Robust Universal Perturbation Attacks: A Float-Coded, Penalty-Driven Evolutionary Approach
arXiv:2601.12624v1 Announce Type: new Abstract: Universal adversarial perturbations (UAPs) have garnered significant attention due to their ability to undermine deep neural networks across multiple inputs using a single noise pattern. Evolutionary algorithms offer a promising approach to generating such perturbations due to their ability to navigate non-convex, gradient-free landscapes. In this work, we introduce a float-coded, penalty-driven single-objective evolutionary framework for UAP generation that achieves lower visibility perturbations while enhancing attack success rates. Our approach leverages continuous gene representations aligned with contemporary deep learning scales, incorporates dynamic evolutionary operators with adaptive scheduling, and utilizes a modular PyTorch implementation for seamless integration with modern architectures. Additionally, we ensure the universality of the generated perturbations by testing across diverse models and by periodically switching batches to prevent overfitting. Experimental results on the ImageNet dataset demonstrate that our framework consistently produces perturbations with smaller norms, higher misclassification effectiveness, and faster convergence compared to existing evolutionary-based methods. These findings highlight the robustness and scalability of our approach for universal adversarial attacks across various deep learning architectures.
https://arxiv.org/abs/2601.12624
Academic Papers
svg
d0ff85fd7ef248148b4a348a6cbeb0132c843e60c422f111e87163fe9398246d
2026-01-21T00:00:00-05:00
Resilient Interval Observer-Based Control for Cooperative Adaptive Cruise Control under FDI Attack
arXiv:2601.12625v1 Announce Type: new Abstract: Connectivity in connected and autonomous vehicles (CAVs) introduces vulnerability to cyber threats such as false data injection (FDI) attacks, which can compromise system reliability and safety. To ensure resilience, this paper proposes a control framework combining a nonlinear controller with an interval observer for robust state estimation under measurement noise. The observer bounds leader's states, while a neural network-based estimator estimates the unknown FDI attacks in real time. These estimates are then used to mitigate FDI attack effects maintaining safe inter-vehicle spacing. The proposed approach leverages an idea of interval observer-based estimation and merges model-based and learning-based methods to achieve accurate estimations and real-time performance. MATLAB/Simulink results confirm resilient tracking, precise FDI attack estimation, and robustness to noise, demonstrating potential for real-world CACC applications under cyberattacks, disturbance, and bounded measurement noise.
https://arxiv.org/abs/2601.12625
Academic Papers
svg
06a8de7eccb8511c2d845e717d7bdc6ff2800dbefd0ccf430419f2bf8d9a4e8e
2026-01-21T00:00:00-05:00
Linear Mechanisms for Spatiotemporal Reasoning in Vision Language Models
arXiv:2601.12626v1 Announce Type: new Abstract: Spatio-temporal reasoning is a remarkable capability of Vision Language Models (VLMs), but the underlying mechanisms of such abilities remain largely opaque. We postulate that visual/geometrical and textual representations of spatial structure must be combined at some point in VLM computations. We search for such confluence, and ask whether the identified representation can causally explain aspects of input-output model behavior through a linear model. We show empirically that VLMs encode object locations by linearly binding \textit{spatial IDs} to textual activations, then perform reasoning via language tokens. Through rigorous causal interventions we demonstrate that these IDs, which are ubiquitous across the model, can systematically mediate model beliefs at intermediate VLM layers. Additionally, we find that spatial IDs serve as a diagnostic tool for identifying limitations in existing VLMs, and as a valuable learning signal. We extend our analysis to video VLMs and identify an analogous linear temporal ID mechanism. By characterizing our proposed spatiotemporal ID mechanism, we elucidate a previously underexplored internal reasoning process in VLMs, toward improved interpretability and the principled design of more aligned and capable models. We release our code for reproducibility: https://github.com/Raphoo/linear-mech-vlms.
https://arxiv.org/abs/2601.12626
Academic Papers
svg
8d3454b3cee8a69b4ff7e7ced1c0d366d42337fd6bcc81a06381aac721a5fcf6
2026-01-21T00:00:00-05:00
Constructing a Dataset to Support Agent-Based Modeling of Online Interactions: Users, Topics, and Interaction Networks
arXiv:2601.12628v1 Announce Type: new Abstract: Agent-based modeling (ABM) provides a powerful framework for exploring how individual behaviors and interactions give rise to collective social dynamics. However, most ABMs rely on handcrafted or parameterized agent rules that are not empirically grounded, thereby limiting their realism and validation against observed data. To address this gap, we constructed a large-scale, empirically grounded dataset from Reddit to support the development and evaluation of agent-based social simulations. The dataset includes 33 technology-focused, 14 climate-focused, and 7 COVID-related aggregated agents, encompassing around one million posts and comments. Using publicly available posts and comments, we define agent categories based on content and interaction patterns, derive inter-agent relationships from temporal commenting behaviors, and build a directed, weighted network that reflects empirically observed user connections. The resulting dataset enables researchers to calibrate and benchmark agent behavior, network structure, and information diffusion processes against real social dynamics. Our quantitative analysis reveals clear topic-dependent differences in how users interact. Climate discussions show dense, highly connected networks with sustained engagement, COVID-related interactions are sparse and mostly one-directional, and technology discussions are organized around a small number of central hubs. Manual qualitative analysis further shows that agent interactions follow realistic patterns of timing, similarity between users, and sentiment change.
https://arxiv.org/abs/2601.12628
Academic Papers
svg
52bb7a5bc9e7ff3ff867766eb6a1f640e67060a2b1d0ebf3a80d88fe9ef93d99
2026-01-21T00:00:00-05:00
BioPulse-QA: A Dynamic Biomedical Question-Answering Benchmark for Evaluating Factuality, Robustness, and Bias in Large Language Models
arXiv:2601.12632v1 Announce Type: new Abstract: Objective: Large language models (LLMs) are increasingly applied in biomedical settings, and existing benchmark datasets have played an important role in supporting model development and evaluation. However, these benchmarks often have limitations. Many rely on static or outdated datasets that fail to capture the dynamic, context-rich, and high-stakes nature of biomedical knowledge. They also carry increasing risk of data leakage due to overlap with model pretraining corpora and often overlook critical dimensions such as robustness to linguistic variation and potential demographic biases. Materials and Methods: To address these gaps, we introduce BioPulse-QA, a benchmark that evaluates LLMs on answering questions from newly published biomedical documents including drug labels, trial protocols, and clinical guidelines. BioPulse-QA includes 2,280 expert-verified question answering (QA) pairs and perturbed variants, covering both extractive and abstractive formats. We evaluate four LLMs - GPT-4o, GPT-o1, Gemini-2.0-Flash, and LLaMA-3.1 8B Instruct - released prior to the publication dates of the benchmark documents. Results: GPT-o1 achieves the highest relaxed F1 score (0.92), followed by Gemini-2.0-Flash (0.90) on drug labels. Clinical trials are the most challenging source, with extractive F1 scores as low as 0.36. Discussion and Conclusion: Performance differences are larger for paraphrasing than for typographical errors, while bias testing shows negligible differences. BioPulse-QA provides a scalable and clinically relevant framework for evaluating biomedical LLMs.
https://arxiv.org/abs/2601.12632
Academic Papers
svg
26b39d54b6f2bedaae870b6657358b253ea7a6f7c82e9f115e5e85d665a57a6b
2026-01-21T00:00:00-05:00
The Cost of Convenience: Identifying, Analyzing, and Mitigating Predatory Loan Applications on Android
arXiv:2601.12634v1 Announce Type: new Abstract: Digital lending applications, commonly referred to as loan apps, have become a primary channel for microcredit in emerging markets. However, many of these apps demand excessive permissions and misuse sensitive user data for coercive debt-recovery practices, including harassment, blackmail, and public shaming that affect both borrowers and their contacts. This paper presents the first cross-country measurement of loan app compliance against both national regulations and Google's Financial Services Policy. We analyze 434 apps drawn from official registries and app markets from Indonesia, Kenya, Nigeria, Pakistan, and the Philippines. To operationalize policy requirements at scale, we translate policy text into testable permission checks using LLM-assisted policy-to-permission mapping and combine this with static and dynamic analyses of loan apps' code and runtime behavior. Our findings reveal pervasive non-compliance among approved apps: 141 violate national regulatory policy and 147 violate Google policy. Dynamic analysis further shows that several apps transmit sensitive data (contacts, SMS, location, media) before user signup or registration, undermining informed consent and enabling downstream harassment of borrowers and third parties. Following our disclosures, Google removed 93 flagged apps from Google Play, representing over 300M cumulative installs. We advocate for adopting our methodology as a proactive compliance-monitoring tool and offer targeted recommendations for regulators, platforms, and developers to strengthen privacy protections. Overall, our results highlight the need for coordinated enforcement and robust technical safeguards to ensure that digital lending supports financial inclusion without compromising user privacy or safety.
https://arxiv.org/abs/2601.12634
Academic Papers
svg
1ede5facc97b08cb1dca031119d3fe4bfbd22aaa08c848aaaee25aaf65aaa57b
2026-01-21T00:00:00-05:00
From Bands to Depth: Understanding Bathymetry Decisions on Sentinel-2
arXiv:2601.12636v1 Announce Type: new Abstract: Deploying Sentinel-2 satellite derived bathymetry (SDB) robustly across sites remains challenging. We analyze a Swin-Transformer based U-Net model (Swin-BathyUNet) to understand how it infers depth and when its predictions are trustworthy. A leave-one-band out study ranks spectral importance to the different bands consistent with shallow water optics. We adapt ablation-based CAM to regression (A-CAM-R) and validate the reliability via a performance retention test: keeping only the top-p% salient pixels while neutralizing the rest causes large, monotonic RMSE increase, indicating explanations localize on evidence the model relies on. Attention ablations show decoder conditioned cross attention on skips is an effective upgrade, improving robustness to glint/foam. Cross-region inference (train on one site, test on another) reveals depth-dependent degradation: MAE rises nearly linearly with depth, and bimodal depth distributions exacerbate mid/deep errors. Practical guidance follows: maintain wide receptive fields, preserve radiometric fidelity in green/blue channels, pre-filter bright high variance near shore, and pair light target site fine tuning with depth aware calibration to transfer across regions.
https://arxiv.org/abs/2601.12636
Academic Papers
svg
dd1b7dd333aded3a51e588666a513ec7b901955653139a971ad208e0b6dec902
2026-01-21T00:00:00-05:00
Topology-Aware Multiscale Mixture of Experts for Efficient Molecular Property Prediction
arXiv:2601.12637v1 Announce Type: new Abstract: Many molecular properties depend on 3D geometry, where non-covalent interactions, stereochemical effects, and medium- to long-range forces are determined by spatial distances and angles that cannot be uniquely captured by a 2D bond graph. Yet most 3D molecular graph neural networks still rely on globally fixed neighborhood heuristics, typically defined by distance cutoffs and maximum neighbor limits, to define local message-passing neighborhoods, leading to rigid, data-agnostic interaction budgets. We propose Multiscale Interaction Mixture of Experts (MI-MoE) to adapt interaction modeling across geometric regimes. Our contributions are threefold: (1) we introduce a distance-cutoff expert ensemble that explicitly captures short-, mid-, and long-range interactions without committing to a single cutoff; (2) we design a topological gating encoder that routes inputs to experts using filtration-based descriptors, including persistent homology features, summarizing how connectivity evolves across radii; and (3) we show that MI-MoE is a plug-in module that consistently improves multiple strong 3D molecular backbones across diverse molecular and polymer property prediction benchmark datasets, covering both regression and classification tasks. These results highlight topology-aware multiscale routing as an effective principle for 3D molecular graph learning.
https://arxiv.org/abs/2601.12637
Academic Papers
svg
25639cb3df417af8c3c782bfc2f717b9f2bb58a25b161fce624ad3e8265b6ec3
2026-01-21T00:00:00-05:00
Mixed Precision PointPillars for Efficient 3D Object Detection with TensorRT
arXiv:2601.12638v1 Announce Type: new Abstract: LIDAR 3D object detection is one of the important tasks for autonomous vehicles. Ensuring that this task operates in real-time is crucial. Toward this, model quantization can be used to accelerate the runtime. However, directly applying model quantization often leads to performance degradation due to LIDAR's wide numerical distributions and extreme outliers. To address the wide numerical distribution, we proposed a mixed precision framework designed for PointPillars. Our framework first searches for sensitive layers with post-training quantization (PTQ) by quantizing one layer at a time to 8-bit integer (INT8) and evaluating each model for average precision (AP). The top-k most sensitive layers are assigned as floating point (FP). Combinations of these layers are greedily searched to produce candidate mixed precision models, which are finalized with either PTQ or quantization-aware training (QAT). Furthermore, to handle outliers, we observe that using a very small number of calibration data reduces the likelihood of encountering outliers, thereby improving PTQ performance. Our methods provides mixed precision models without training in the PTQ pipeline, while our QAT pipeline achieves the performance competitive to FP models. With TensorRT deployment, our models offer less latency and sizes by up to 2.35 and 2.26 times, respectively.
https://arxiv.org/abs/2601.12638
Academic Papers
svg
aadb6563490a7ba8a659cb8eab29e215914eb67c28549abdf0ccd9021a49344c
2026-01-21T00:00:00-05:00
Objective Matters: Fine-Tuning Objectives Shape Safety, Robustness, and Persona Drift
arXiv:2601.12639v1 Announce Type: new Abstract: Fine-tuning LLMs on benign data can still degrade alignment and adversarial robustness, yet direct analysis of the role of fine-tuning objectives in shaping these safety outcomes remain limited. We present a controlled comparison of six fine-tuning objectives -- Supervised Fine-Tuning, Direct Preference Optimization, Conditional Fine-Tuning, Inoculation Prompting, Odds Ratio Preference Optimization, and KL-regularized fine-tuning -- holding data, domain, architecture, and optimization fixed. Across closed-form reasoning and open-ended generation tasks, we find that objective choice induces systematic, scale-dependent shifts along the safety-capability frontier. At small training budgets, robustness is similar across objectives but capability differs. At larger budgets, objectives diverge sharply: supervised and preference-based tuning tightly couple capability gains to increased adversarial vulnerability and persona drift, while objectives that constrain learning signals -- especially ORPO and KL-regularization -- substantially mitigate both. Fine-tuning objectives therefore matter little for safety at small scales but become a primary driver of adversarial robustness and latent persona stability as training scale increases.
https://arxiv.org/abs/2601.12639
Academic Papers
svg
9efa9f8c1b3c791b2febed00b6ceb93c6ea49183726ac7cb1772e2d7e10fadc4
2026-01-21T00:00:00-05:00
Beyond Identification: Computing Boolean Functions via Channels
arXiv:2601.12640v1 Announce Type: new Abstract: Consider a point-to-point communication system in which the transmitter holds a binary message of length $m$ and transmits a corresponding codeword of length $n$. The receiver's goal is to recover a Boolean function of that message, where the function is unknown to the transmitter, but chosen from a known class $F$. We are interested in the asymptotic relationship of $m$ and $n$: given $n$, how large can $m$ be (asymptotically), such that the value of the Boolean function can be recovered reliably? This problem generalizes the identification-via-channels framework introduced by Ahlswede and Dueck. We formulate the notion of computation capacity, and derive achievability and converse results for selected classes of functions $F$, characterized by the Hamming weight of functions. Our obtained results are tight in the sense of the scaling behavior for all cases of $F$ considered in the paper.
https://arxiv.org/abs/2601.12640
Academic Papers
svg
49ab3b35a6adf17eab6acb195105bd44f8d257732b03906fc9f4bd91b17584e1
2026-01-21T00:00:00-05:00
STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models
arXiv:2601.12641v1 Announce Type: new Abstract: Computer-aided design (CAD) is vital to modern manufacturing, yet model creation remains labor-intensive and expertise-heavy. To enable non-experts to translate intuitive design intent into manufacturable artifacts, recent large language models-based text-to-CAD efforts focus on command sequences or script-based formats like CadQuery. However, these formats are kernel-dependent and lack universality for manufacturing. In contrast, the Standard for the Exchange of Product Data (STEP, ISO 10303) file is a widely adopted, neutral boundary representation (B-rep) format directly compatible with manufacturing, but its graph-structured, cross-referenced nature poses unique challenges for auto-regressive LLMs. To address this, we curate a dataset of ~40K STEP-caption pairs and introduce novel preprocessing tailored for the graph-structured format of STEP, including a depth-first search-based reserialization that linearizes cross-references while preserving locality and chain-of-thought(CoT)-style structural annotations that guide global coherence. We integrate retrieval-augmented generation to ground predictions in relevant examples for supervised fine-tuning, and refine generation quality through reinforcement learning with a specific Chamfer Distance-based geometric reward. Experiments demonstrate consistent gains of our STEP-LLM in geometric fidelity over the Text2CAD baseline, with improvements arising from multiple stages of our framework: the RAG module substantially enhances completeness and renderability, the DFS-based reserialization strengthens overall accuracy, and the RL further reduces geometric discrepancy. Both metrics and visual comparisons confirm that STEP-LLM generates shapes with higher fidelity than Text2CAD. These results show the feasibility of LLM-driven STEP model generation from natural language, showing its potential to democratize CAD design for manufacturing.
https://arxiv.org/abs/2601.12641
Academic Papers
svg
8b4135bcbc0628e5482b4a0bdac90f2207f9b522bb5e31772ae7ef80a28a0535
2026-01-21T00:00:00-05:00
Unbounded Harms, Bounded Law: Liability in the Age of Borderless AI
arXiv:2601.12646v1 Announce Type: new Abstract: The rapid proliferation of artificial intelligence (AI) has exposed significant deficiencies in risk governance. While ex-ante harm identification and prevention have advanced, Responsible AI scholarship remains underdeveloped in addressing ex-post liability. Core legal questions regarding liability allocation, responsibility attribution, and remedial effectiveness remain insufficiently theorized and institutionalized, particularly for transboundary harms and risks that transcend national jurisdictions. Drawing on contemporary AI risk analyses, we argue that such harms are structurally embedded in global AI supply chains and are likely to escalate in frequency and severity due to cross-border deployment, data infrastructures, and uneven national oversight capacities. Consequently, territorially bounded liability regimes are increasingly inadequate. Using a comparative and interdisciplinary approach, this paper examines compensation and liability frameworks from high-risk transnational domains - including vaccine injury schemes, systemic financial risk governance, commercial nuclear liability, and international environmental regimes - to distill transferable legal design principles such as strict liability, risk pooling, collective risk-sharing, and liability channelling, while highlighting potential structural constraints on their application to AI-related harms. Situated within an international order shaped more by AI arms race dynamics than cooperative governance, the paper outlines the contours of a global AI accountability and compensation architecture, emphasizing the tension between geopolitical rivalry and the collective action required to govern transboundary AI risks effectively.
https://arxiv.org/abs/2601.12646
Academic Papers
svg
bfd14138edbb0a49fe66d0c92b11258cb3862a2a651ed301c50cf7b03f99c71d
2026-01-21T00:00:00-05:00
Intelligent Documentation in Medical Education: Can AI Replace Manual Case Logging?
arXiv:2601.12648v1 Announce Type: new Abstract: Procedural case logs are a core requirement in radiology training, yet they are time-consuming to complete and prone to inconsistency when authored manually. This study investigates whether large language models (LLMs) can automate procedural case log documentation directly from free-text radiology reports. We evaluate multiple local and commercial LLMs under instruction-based and chain-of-thought prompting to extract structured procedural information from 414 curated interventional radiology reports authored by nine residents between 2018 and 2024. Model performance is assessed using sensitivity, specificity, and F1-score, alongside inference latency and token efficiency to estimate operational cost. Results show that both local and commercial models achieve strong extraction performance, with best F1-scores approaching 0.87, while exhibiting different trade-offs between speed and cost. Automation using LLMs has the potential to substantially reduce clerical burden for trainees and improve consistency in case logging. These findings demonstrate the feasibility of AI-assisted documentation in medical education and highlight the need for further validation across institutions and clinical workflows.
https://arxiv.org/abs/2601.12648
Academic Papers
svg
d786f32f895a244997d8a515f211f18744a3f285674c1d4485b51d648147f40e
2026-01-21T00:00:00-05:00
Ethical Risks in Deploying Large Language Models: An Evaluation of Medical Ethics Jailbreaking
arXiv:2601.12652v1 Announce Type: new Abstract: Background: While Large Language Models (LLMs) have achieved widespread adoption, malicious prompt engineering specifically "jailbreak attacks" poses severe security risks by inducing models to bypass internal safety mechanisms. Current benchmarks predominantly focus on public safety and Western cultural norms, leaving a critical gap in evaluating the niche, high-risk domain of medical ethics within the Chinese context. Objective: To establish a specialized jailbreak evaluation framework for Chinese medical ethics and to systematically assess the defensive resilience and ethical alignment of seven prominent LLMs when subjected to sophisticated adversarial simulations. Methodology: We evaluated seven prominent models (e.g., GPT-5, Claude-Sonnet-4-Reasoning, DeepSeek-R1) using a "role-playing + scenario simulation + multi-turn dialogue" vector within the DeepInception framework. The testing focused on eight high-risk themes, including commercial surrogacy and organ trading, utilizing a hierarchical scoring matrix to quantify the Attack Success Rate (ASR) and ASR Gain. Results: A systemic collapse of defenses was observed, whereas models demonstrated high baseline compliance, the jailbreak ASR reached 82.1%, representing an ASR Gain of over 80 percentage points. Claude-Sonnet-4-Reasoning emerged as the most robust model, while five models including Gemini-2.5-Pro and GPT-4.1 exhibited near-total failure with ASRs between 96% and 100%. Conclusions: Current LLMs are highly vulnerable to contextual manipulation in medical ethics, often prioritizing "helpfulness" over safety constraints. To enhance security, we recommend a transition from outcome to process supervision, the implementation of multi-factor identity verification, and the establishment of cross-model "joint defense" mechanisms.
https://arxiv.org/abs/2601.12652
Academic Papers
svg
f32227cb5e761e503785a680a960df7340280fdb47539d650cca0ec96e552e75
2026-01-21T00:00:00-05:00
Explanation Multiplicity in SHAP: Characterization and Assessment
arXiv:2601.12654v1 Announce Type: new Abstract: Post-hoc explanations are widely used to justify, contest, and audit automated decisions in high-stakes domains. SHAP, in particular, is often treated as a reliable account of which features drove an individual prediction. Yet SHAP explanations can vary substantially across repeated runs even when the input, task, and trained model are held fixed. We term this phenomenon explanation multiplicity: multiple internally valid but substantively different explanations for the same decision. We present a methodology to characterize multiplicity in feature-attribution explanations and to disentangle sources due to model training/selection from stochasticity intrinsic to the explanation pipeline. We further show that apparent stability depends on the metric: magnitude-based distances can remain near zero while rank-based measures reveal substantial churn in the identity and ordering of top features. To contextualize observed disagreement, we derive randomized baseline values under plausible null models. Across datasets, model classes, and confidence regimes, we find explanation multiplicity is pervasive and persists even for high-confidence predictions, highlighting the need for metrics and baselines that match the intended use of explanations.
https://arxiv.org/abs/2601.12654
Academic Papers
svg
302090d67290ddb1b99911a33c3d5361fd2c89943e839d963379df283bc36c84
2026-01-21T00:00:00-05:00
Multiagent Reinforcement Learning in Enhancing Resilience of Microgrids under Extreme Weather Events
arXiv:2601.12657v1 Announce Type: new Abstract: Grid resilience is crucial in light of power interruptions caused by increasingly frequent extreme weather events. Well-designed energy management systems (EMS) have made progress in improving microgrid resilience through the coordination of distributed energy resources (DERs), but still face significant challenges in addressing the uncertainty of load demand caused by extreme weather. The integration of deep reinforcement learning (DRL) into EMS design enables optimized microgrid control strategies for coordinating DERs. Building on this, we proposed a cooperative multi-agent deep reinforcement learning (MADRL)-based EMS framework to provide flexible scalability for microgrids, enhance resilience and reduce operational costs during power outages. Specifically, the gated recurrent unit with a gating mechanism was introduced to extract features from temporal data, which enables the EMS to coordinate DERs more efficiently. Next, the proposed MADRL method incorporating action masking techniques was evaluated in the IEEE 33-Bus system using real-world data on renewable generation and power load. Finally, the numerical results demonstrated the superiority of the proposed method in reducing operating costs as well as the effectiveness in enhancing microgrid resilience during power interruptions.
https://arxiv.org/abs/2601.12657
Academic Papers
svg
bdd4e0874b54ae6b841d5e625c224cfada2ba45da2e4af7d09b956680c386ce1
2026-01-21T00:00:00-05:00
Augmenting Question Answering with A Hybrid RAG Approach
arXiv:2601.12658v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) has emerged as a powerful technique for enhancing the quality of responses in Question-Answering (QA) tasks. However, existing approaches often struggle with retrieving contextually relevant information, leading to incomplete or suboptimal answers. In this paper, we introduce Structured-Semantic RAG (SSRAG), a hybrid architecture that enhances QA quality by integrating query augmentation, agentic routing, and a structured retrieval mechanism combining vector and graph based techniques with context unification. By refining retrieval processes and improving contextual grounding, our approach improves both answer accuracy and informativeness. We conduct extensive evaluations on three popular QA datasets, TruthfulQA, SQuAD and WikiQA, across five Large Language Models (LLMs), demonstrating that our proposed approach consistently improves response quality over standard RAG implementations.
https://arxiv.org/abs/2601.12658
Academic Papers
svg
401615b5fc831cfa5866021875b5688021fcbd869bb596fa10a7a5d181e5f5a5
2026-01-21T00:00:00-05:00
Toward Faithful Explanations in Acoustic Anomaly Detection
arXiv:2601.12660v1 Announce Type: new Abstract: Interpretability is essential for user trust in real-world anomaly detection applications. However, deep learning models, despite their strong performance, often lack transparency. In this work, we study the interpretability of autoencoder-based models for audio anomaly detection, by comparing a standard autoencoder (AE) with a mask autoencoder (MAE) in terms of detection performance and interpretability. We applied several attribution methods, including error maps, saliency maps, SmoothGrad, Integrated Gradients, GradSHAP, and Grad-CAM. Although MAE shows a slightly lower detection, it consistently provides more faithful and temporally precise explanations, suggesting a better alignment with true anomalies. To assess the relevance of the regions highlighted by the explanation method, we propose a perturbation-based faithfulness metric that replaces them with their reconstructions to simulate normal input. Our findings, based on experiments in a real industrial scenario, highlight the importance of incorporating interpretability into anomaly detection pipelines and show that masked training improves explanation quality without compromising performance.
https://arxiv.org/abs/2601.12660
Academic Papers
svg
1d007c860327784dd1eefe2365279b5107627f0247ed47ac5d2e719ddb695e94
2026-01-21T00:00:00-05:00
MedConsultBench: A Full-Cycle, Fine-Grained, Process-Aware Benchmark for Medical Consultation Agents
arXiv:2601.12661v1 Announce Type: new Abstract: Current evaluations of medical consultation agents often prioritize outcome-oriented tasks, frequently overlooking the end-to-end process integrity and clinical safety essential for real-world practice. While recent interactive benchmarks have introduced dynamic scenarios, they often remain fragmented and coarse-grained, failing to capture the structured inquiry logic and diagnostic rigor required in professional consultations. To bridge this gap, we propose MedConsultBench, a comprehensive framework designed to evaluate the complete online consultation cycle by covering the entire clinical workflow from history taking and diagnosis to treatment planning and follow-up Q\&amp;A. Our methodology introduces Atomic Information Units (AIUs) to track clinical information acquisition at a sub-turn level, enabling precise monitoring of how key facts are elicited through 22 fine-grained metrics. By addressing the underspecification and ambiguity inherent in online consultations, the benchmark evaluates uncertainty-aware yet concise inquiry while emphasizing medication regimen compatibility and the ability to handle realistic post-prescription follow-up Q\&amp;A via constraint-respecting plan revisions. Systematic evaluation of 19 large language models reveals that high diagnostic accuracy often masks significant deficiencies in information-gathering efficiency and medication safety. These results underscore a critical gap between theoretical medical knowledge and clinical practice ability, establishing MedConsultBench as a rigorous foundation for aligning medical AI with the nuanced requirements of real-world clinical care.
https://arxiv.org/abs/2601.12661
Academic Papers
svg
d14daa3f49fab1e8d28e4b556fff0e4e928f537bfbc508a6fbfd13c7644f4902
2026-01-21T00:00:00-05:00
Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks
arXiv:2601.12662v1 Announce Type: new Abstract: We address real-time sampling and estimation of autoregressive Markovian sources in dynamic yet structurally similar multi-hop wireless networks. Each node caches samples from others and communicates over wireless collision channels, aiming to minimize time-average estimation error via decentralized policies. Due to the high dimensionality of action spaces and complexity of network topologies, deriving optimal policies analytically is intractable. To address this, we propose a graphical multi-agent reinforcement learning framework for policy optimization. Theoretically, we demonstrate that our proposed policies are transferable, allowing a policy trained on one graph to be effectively applied to structurally similar graphs. Numerical experiments demonstrate that (i) our proposed policy outperforms state-of-the-art baselines; (ii) the trained policies are transferable to larger networks, with performance gains increasing with the number of agents; (iii) the graphical training procedure withstands non-stationarity, even when using independent learning techniques; and (iv) recurrence is pivotal in both independent learning and centralized training and decentralized execution, and improves the resilience to non-stationarity.
https://arxiv.org/abs/2601.12662
Academic Papers
svg
36948ddc50b54916375e02ea6ed8ded2997f4ccdb63affe9e35d9b592aa2ba58
2026-01-21T00:00:00-05:00
Generalizable Hyperparameter Optimization for Federated Learning on Non-IID Cancer Images
arXiv:2601.12664v1 Announce Type: new Abstract: Deep learning for cancer histopathology training conflicts with privacy constraints in clinical settings. Federated Learning (FL) mitigates this by keeping data local; however, its performance depends on hyperparameter choices under non-independent and identically distributed (non-IID) client datasets. This paper examined whether hyperparameters optimized on one cancer imaging dataset generalized across non-IID federated scenarios. We considered binary histopathology tasks for ovarian and colorectal cancers. We perform centralized Bayesian hyperparameter optimization and transfer dataset-specific optima to the non-IID FL setup. The main contribution of this study is the introduction of a simple cross-dataset aggregation heuristic by combining configurations by averaging the learning rates and considering the modal optimizers and batch sizes. This combined configuration achieves a competitive classification performance.
https://arxiv.org/abs/2601.12664
Academic Papers
svg
bf70d7b6327856dbe6f0be33419086e2dad7a7fbabb5572b669834c1257a2982
2026-01-21T00:00:00-05:00
Near-Light Color Photometric Stereo for mono-Chromaticity non-lambertian surface
arXiv:2601.12666v1 Announce Type: new Abstract: Color photometric stereo enables single-shot surface reconstruction, extending conventional photometric stereo that requires multiple images of a static scene under varying illumination to dynamic scenarios. However, most existing approaches assume ideal distant lighting and Lambertian reflectance, leaving more practical near-light conditions and non-Lambertian surfaces underexplored. To overcome this limitation, we propose a framework that leverages neural implicit representations for depth and BRDF modeling under the assumption of mono-chromaticity (uniform chromaticity and homogeneous material), which alleviates the inherent ill-posedness of color photometric stereo and allows for detailed surface recovery from just one image. Furthermore, we design a compact optical tactile sensor to validate our approach. Experiments on both synthetic and real-world datasets demonstrate that our method achieves accurate and robust surface reconstruction.
https://arxiv.org/abs/2601.12666
Academic Papers
svg
e1878f5b0cd2810d3d7d0a46bdfa8feb6b2adfbf35fa04c6100a878a337747a6
2026-01-21T00:00:00-05:00
Empowering All-in-Loop Health Management of Spacecraft Power System in the Mega-Constellation Era via Human-AI Collaboration
arXiv:2601.12667v1 Announce Type: new Abstract: It is foreseeable that the number of spacecraft will increase exponentially, ushering in an era dominated by satellite mega-constellations (SMC). This necessitates a focus on energy in space: spacecraft power systems (SPS), especially their health management (HM), given their role in power supply and high failure rates. Providing health management for dozens of SPS and for thousands of SPS represents two fundamentally different paradigms. Therefore, to adapt the health management in the SMC era, this work proposes a principle of aligning underlying capabilities (AUC principle) and develops SpaceHMchat, an open-source Human-AI collaboration (HAIC) framework for all-in-loop health management (AIL HM). SpaceHMchat serves across the entire loop of work condition recognition, anomaly detection, fault localization, and maintenance decision making, achieving goals such as conversational task completion, adaptive human-in-the-loop learning, personnel structure optimization, knowledge sharing, efficiency enhancement, as well as transparent reasoning and improved interpretability. Meanwhile, to validate this exploration, a hardware-realistic fault injection experimental platform is established, and its simulation model is built and open-sourced, both fully replicating the real SPS. The corresponding experimental results demonstrate that SpaceHMchat achieves excellent performance across 23 quantitative metrics, such as 100% conclusion accuracy in logical reasoning of work condition recognition, over 99% success rate in anomaly detection tool invocation, over 90% precision in fault localization, and knowledge base search time under 3 minutes in maintenance decision-making. Another contribution of this work is the release of the first-ever AIL HM dataset of SPS. This dataset contains four sub-datasets, involving 4 types of AIL HM sub-tasks, 17 types of faults, and over 700,000 timestamps.
https://arxiv.org/abs/2601.12667
Academic Papers
svg
3c2432cb1bc0ab8c553ea9f1e7748ebafcff7d2e14132ae2f147522456fbed88
2026-01-21T00:00:00-05:00
Exploiting Test-Time Augmentation in Federated Learning for Brain Tumor MRI Classification
arXiv:2601.12671v1 Announce Type: new Abstract: Efficient brain tumor diagnosis is crucial for early treatment; however, it is challenging because of lesion variability and image complexity. We evaluated convolutional neural networks (CNNs) in a federated learning (FL) setting, comparing models trained on original versus preprocessed MRI images (resizing, grayscale conversion, normalization, filtering, and histogram equalization). Preprocessing alone yielded negligible gains; combined with test-time augmentation (TTA), it delivered consistent, statistically significant improvements in federated MRI classification (p<0.001). In practice, TTA should be the default inference strategy in FL-based medical imaging; when the computational budget permits, pairing TTA with light preprocessing provides additional reliable gains.
https://arxiv.org/abs/2601.12671
Academic Papers
svg
e2e721a1a370de4cb3f0d646cd6b53497f9c083f542bd428f77eb21776868e24
2026-01-21T00:00:00-05:00
VILTA: A VLM-in-the-Loop Adversary for Enhancing Driving Policy Robustness
arXiv:2601.12672v1 Announce Type: new Abstract: The safe deployment of autonomous driving (AD) systems is fundamentally hindered by the long-tail problem, where rare yet critical driving scenarios are severely underrepresented in real-world data. Existing solutions including safety-critical scenario generation and closed-loop learning often rely on rule-based heuristics, resampling methods and generative models learned from offline datasets, limiting their ability to produce diverse and novel challenges. While recent works leverage Vision Language Models (VLMs) to produce scene descriptions that guide a separate, downstream model in generating hazardous trajectories for agents, such two-stage framework constrains the generative potential of VLMs, as the diversity of the final trajectories is ultimately limited by the generalization ceiling of the downstream algorithm. To overcome these limitations, we introduce VILTA (VLM-In-the-Loop Trajectory Adversary), a novel framework that integrates a VLM into the closed-loop training of AD agents. Unlike prior works, VILTA actively participates in the training loop by comprehending the dynamic driving environment and strategically generating challenging scenarios through direct, fine-grained editing of surrounding agents' future trajectories. This direct-editing approach fully leverages the VLM's powerful generalization capabilities to create a diverse curriculum of plausible yet challenging scenarios that extend beyond the scope of traditional methods. We demonstrate that our approach substantially enhances the safety and robustness of the resulting AD policy, particularly in its ability to navigate critical long-tail events.
https://arxiv.org/abs/2601.12672
Academic Papers
svg
71f54d682f9974ccff8c5e170775020670dfa12daaa2b0bb4f2505ce4626999c
2026-01-21T00:00:00-05:00
Physics-informed machine learning for reconstruction of dynamical systems with invariant measure score matching
arXiv:2601.12675v1 Announce Type: new Abstract: In this paper, we develop a novel mesh-free framework, termed physics-informed neural networks with invariant measure score matching (PINN-IMSM), for reconstructing dynamical systems from unlabeled point-cloud data that capture the system's invariant measure. The invariant density satisfies the steady-state Fokker-Planck (FP) equation. We reformulate this equation in terms of its score function (the gradient of the log-density), which is estimated directly from data via denoising score matching, thereby bypassing explicit density estimation. This learned score is then embedded into a physics-informed neural network (PINN) to reconstruct the drift velocity field under the resulting score-based FP equation. The mesh-free nature of PINNs allows the framework to scale to higher dimensions, avoiding the curse of dimensionality inherent in mesh-based methods. To address the ill-posedness of high-dimensional inverse problems, we recast the problem as a PDE-constrained optimization that seeks the minimal-energy velocity field. Under suitable conditions, we prove that this problem admits a unique solution that depends continuously on the score function. The constrained formulation is solved using a stochastic augmented Lagrangian method. Numerical experiments on representative dynamical systems, including the Van der Pol oscillator, an active swimmer in an anharmonic trap, and the chaotic Lorenz-63 and Lorenz-96 systems, demonstrate that PINN-IMSM accurately recovers invariant measures and reconstructs faithful dynamical behavior for problems in up to five dimensions.
https://arxiv.org/abs/2601.12675
Academic Papers
svg
be91582a3e1bd7ee2bd37b63fb7a2b2372a43f9304ff3a99292fbe44b70756d6
2026-01-21T00:00:00-05:00
MetaToolAgent: Towards Generalizable Tool Usage in LLMs through Meta-Learning
arXiv:2601.12680v1 Announce Type: new Abstract: Tool learning is increasingly important for large language models (LLMs) to effectively coordinate and utilize a diverse set of tools in order to solve complex real-world tasks. By selecting and integrating appropriate tools, LLMs extend their capabilities beyond pure language understanding to perform specialized functions. However, existing methods for tool selection often focus on limited tool sets and struggle to generalize to novel tools encountered in practical deployments. To address these challenges, we introduce a comprehensive dataset spanning 7 domains, containing 155 tools and 9,377 question-answer pairs, which simulates realistic integration scenarios. Additionally, we propose MetaToolAgent (MTA), a meta-learning approach designed to improve cross-tool generalization. Experimental results show that MTA significantly outperforms baseline methods on unseen tools, demonstrating its promise for building flexible and scalable systems that require dynamic tool coordination.
https://arxiv.org/abs/2601.12680
Academic Papers
svg
d43addb2d3025ac28350e44fef149fdb9d4a312f1ad385ffed1466043b981257
2026-01-21T00:00:00-05:00
HyFormer: Revisiting the Roles of Sequence Modeling and Feature Interaction in CTR Prediction
arXiv:2601.12681v1 Announce Type: new Abstract: Industrial large-scale recommendation models (LRMs) face the challenge of jointly modeling long-range user behavior sequences and heterogeneous non-sequential features under strict efficiency constraints. However, most existing architectures employ a decoupled pipeline: long sequences are first compressed with a query-token based sequence compressor like LONGER, followed by fusion with dense features through token-mixing modules like RankMixer, which thereby limits both the representation capacity and the interaction flexibility. This paper presents HyFormer, a unified hybrid transformer architecture that tightly integrates long-sequence modeling and feature interaction into a single backbone. From the perspective of sequence modeling, we revisit and redesign query tokens in LRMs, and frame the LRM modeling task as an alternating optimization process that integrates two core components: Query Decoding which expands non-sequential features into Global Tokens and performs long sequence decoding over layer-wise key-value representations of long behavioral sequences; and Query Boosting which enhances cross-query and cross-sequence heterogeneous interactions via efficient token mixing. The two complementary mechanisms are performed iteratively to refine semantic representations across layers. Extensive experiments on billion-scale industrial datasets demonstrate that HyFormer consistently outperforms strong LONGER and RankMixer baselines under comparable parameter and FLOPs budgets, while exhibiting superior scaling behavior with increasing parameters and FLOPs. Large-scale online A/B tests in high-traffic production systems further validate its effectiveness, showing significant gains over deployed state-of-the-art models. These results highlight the practicality and scalability of HyFormer as a unified modeling framework for industrial LRMs.
https://arxiv.org/abs/2601.12681
Academic Papers
svg
d50b3dca1d476c1a604c4ad419d3377cdabfc7eda1c37a737408d2d3ed956353
2026-01-21T00:00:00-05:00
Fusion-Restoration Image Processing Algorithm to Improve the High-Temperature Deformation Measurement
arXiv:2601.12682v1 Announce Type: new Abstract: In the deformation measurement of high-temperature structures, image degradation caused by thermal radiation and random errors introduced by heat haze restrict the accuracy and effectiveness of deformation measurement. To suppress thermal radiation and heat haze using fusion-restoration image processing methods, thereby improving the accuracy and effectiveness of DIC in the measurement of high-temperature deformation. For image degradation caused by thermal radiation, based on the image layered representation, the image is decomposed into positive and negative channels for parallel processing, and then optimized for quality by multi-exposure image fusion. To counteract the high-frequency, random errors introduced by heat haze, we adopt the FSIM as the objective function to guide the iterative optimization of model parameters, and the grayscale average algorithm is applied to equalize anomalous gray values, thereby reducing measurement error. The proposed multi-exposure image fusion algorithm effectively suppresses image degradation caused by complex illumination conditions, boosting the effective computation area from 26% to 50% for under-exposed images and from 32% to 40% for over-exposed images without degrading measurement accuracy in the experiment. Meanwhile, the image restoration combined with the grayscale average algorithm reduces static thermal deformation measurement errors. The error in {\epsilon}_xx is reduced by 85.3%, while the errors in {\epsilon}_yy and {\gamma}_xy are reduced by 36.0% and 36.4%, respectively. We present image processing methods to suppress the interference of thermal radiation and heat haze in high-temperature deformation measurement using DIC. The experimental results verify that the proposed method can effectively improve image quality, reduce deformation measurement errors, and has potential application value in thermal deformation measurement.
https://arxiv.org/abs/2601.12682
Academic Papers
svg
e3e2661457c286928d9705ecc455bbbe7dd1de17bf8652310f7e51f648b0015a
2026-01-21T00:00:00-05:00
GaussianTrimmer: Online Trimming Boundaries for 3DGS Segmentation
arXiv:2601.12683v1 Announce Type: new Abstract: With the widespread application of 3D Gaussians in 3D scene representation, 3D scene segmentation methods based on 3D Gaussians have also gradually emerged. However, existing 3D Gaussian segmentation methods basically segment on the basis of Gaussian primitives. Due to the large variation range of the scale of 3D Gaussians, large-sized Gaussians that often span the foreground and background lead to jagged boundaries of segmented objects. To this end, we propose an online boundary trimming method, GaussianTrimmer, which is an efficient and plug-and-play post-processing method capable of trimming coarse boundaries for existing 3D Gaussian segmentation methods. Our method consists of two core steps: 1. Generating uniformly and well-covered virtual cameras; 2. Trimming Gaussian at the primitive level based on 2D segmentation results on virtual cameras. Extensive quantitative and qualitative experiments demonstrate that our method can improve the segmentation quality of existing 3D Gaussian segmentation methods as a plug-and-play method.
https://arxiv.org/abs/2601.12683
Academic Papers
svg
e94376cbec528a393dc9c9bb766e766bf289ce0004aba87353f23960bf5a28a0
2026-01-21T00:00:00-05:00
A Model Fusion Approach for Enhancing Credit Approval Decision Making
arXiv:2601.12684v1 Announce Type: new Abstract: Credit default poses significant challenges to financial institutions and consumers, resulting in substantial financial losses and diminished trust. As such, credit default risk management has been a critical topic in the financial industry. In this paper, we present Combinatorial Fusion Analysis (CFA), a model fusion framework, that combines multiple machine learning algorithms to detect and predict credit card approval with high accuracy. We present the design methodology and implementation using five pre-trained models. The CFA results show an accuracy of 89.13% which is better than conventional machine learning and ensemble methods.
https://arxiv.org/abs/2601.12684
Academic Papers
svg
86328840ef14bebb4a27127116116ccfc7b5629cd2256a399c9380cb75559aa9
2026-01-21T00:00:00-05:00
Persuasion in Online Conversations Is Associated with Alignment in Expressed Human Values
arXiv:2601.12685v1 Announce Type: new Abstract: Online disagreements often fail to produce understanding, instead reinforcing existing positions or escalating conflict. Prior work on predictors of successful persuasion in online discourse has largely focused on surface features such as linguistic style or conversational structure, leaving open the role of underlying principles or concerns that participants bring to an interaction. In this paper, we investigate how the expression and alignment of human values in back-and-forth online discussions relate to persuasion. Using data from Reddit's ChangeMyView subreddit, where successful persuasion is explicitly signaled through the awarding of deltas, we analyze one-on-one exchanges and characterize participants' value expression by drawing from Schwartz's Refined Theory of Basic Human Values. We find that successful persuasion is associated with two complementary processes: pre-existing compatibility between participants' value priorities even before the exchange happens, and the emergence of value alignment over the course of a conversation. At the same time, successful persuasion does not depend on commenters making large departures from their typical value expression patterns. We discuss implications of our findings for the design of online social platforms that aim to support constructive engagement across disagreement.
https://arxiv.org/abs/2601.12685
Academic Papers
svg
7dc6c06fdbe23347a1b192bc1d6e3c11784ac0185f04484b84c0b8332be1822b
2026-01-21T00:00:00-05:00
Best Practices for Large Load Interconnections: A North American Perspective on Data Centers
arXiv:2601.12686v1 Announce Type: new Abstract: Large loads are expanding rapidly across North America, led by data centers, cryptocurrency mining, hydrogen production facilities, and heavy-duty charging stations. Each class presents distinct electrical characteristics, but data centers are drawing particular attention as AI deployment drives unprecedented capacity growth. Their scale, duty cycles, and converter-dominated interfaces introduce new challenges for transmission interconnections, especially regarding disturbance behavior, steady-state performance, and operational visibility. This paper reviews best practices for large-load interconnections across North America, synthesizing utility and system operator guidelines into a coherent set of technical requirements. The approach combines handbook and manual analysis with cross-utility comparisons and an outlook on European directions. The review highlights requirements on power quality, telemetry, commissioning tests, and protection coordination, while noting gaps in ride-through specifications, load-variation management, and post-disturbance recovery targets. Building on these findings, the paper proposes practical guidance for developers and utilities.
https://arxiv.org/abs/2601.12686
Academic Papers
svg
08e6b14564f4bf46112713d0c71aa47d0691c4ef603b9f3943328ee5c4cfa46d
2026-01-21T00:00:00-05:00
Network Slicing Resource Management in Uplink User-Centric Cell-Free Massive MIMO Systems
arXiv:2601.12687v1 Announce Type: new Abstract: This paper addresses the joint optimization of per-user equipment (UE) bandwidth allocation and UE-access point (AP) association to maximize weighted sum-rate while satisfying heterogeneous quality-of-service (QoS) requirements across enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) slices in the uplink of a network slicing-enabled user-centric cell-free (CF) massive multiple-input multiple-output (mMIMO) system. The formulated problem is NP-hard, rendering global optimality computationally intractable. To address this challenge, it is decomposed into two sub-problems, each solved by a computationally efficient heuristic scheme, and jointly optimized through an alternating optimization framework. We then propose (i) a bandwidth allocation scheme that balances UE priority, spectral efficiency, and minimum bandwidth demand under limited resources to ensure fair QoS distribution, and (ii) a priority-based UE-AP association assignment approach that balances UE service quality with system capacity constraints. Together, these approaches provide a practical and computationally efficient solution for resource-constrained network slicing scenarios, where QoS feasibility is often violated under dense deployments and limited bandwidth, necessitating graceful degradation and fair QoS preservation rather than solely maximizing the aggregate sum-rate. Simulation results demonstrate that the proposed scheme achieves up to 52% higher weighted sum-rate, 140% and 58% higher QoS success rates for eMBB and URLLC slices, respectively, while reducing runtime by up to 97% compared to considered benchmarks.
https://arxiv.org/abs/2601.12687
Academic Papers
svg
cd254fa221d0d057d16b22f1d925dc92d9c7e85621d2ef7ba4411e76bd85e696
2026-01-21T00:00:00-05:00
Logic-Guided Multistage Inference for Explainable Multidefendant Judgment Prediction
arXiv:2601.12688v1 Announce Type: new Abstract: Crime disrupts societal stability, making law essential for balance. In multidefendant cases, assigning responsibility is complex and challenges fairness, requiring precise role differentiation. However, judicial phrasing often obscures the roles of the defendants, hindering effective AI-driven analyses. To address this issue, we incorporate sentencing logic into a pretrained Transformer encoder framework to enhance the intelligent assistance in multidefendant cases while ensuring legal interpretability. Within this framework an oriented masking mechanism clarifies roles and a comparative data construction strategy improves the model's sensitivity to culpability distinctions between principals and accomplices. Predicted guilt labels are further incorporated into a regression model through broadcasting, consolidating crime descriptions and court views. Our proposed masked multistage inference (MMSI) framework, evaluated on the custom IMLJP dataset for intentional injury cases, achieves significant accuracy improvements, outperforming baselines in role-based culpability differentiation. This work offers a robust solution for enhancing intelligent judicial systems, with publicly code available.
https://arxiv.org/abs/2601.12688
Academic Papers
svg
7b035a16cf76ab2b4da5f85b2a5a1ab94a9bf98c203f47590f1d67b25cfc3e13
2026-01-21T00:00:00-05:00
Priority-Based Bandwidth Allocation in Network Slicing-Enabled Cell-Free Massive MIMO Systems
arXiv:2601.12689v1 Announce Type: new Abstract: This paper addresses joint admission control and per-user equipment (UE) bandwidth allocation to maximize weighted sum-rate in network slicing-enabled user-centric cell-free (CF) massive multiple-input multiple-output (mMIMO) systems when aggregate quality-of-service (QoS) demand may exceed available bandwidth. Specifically, we optimize bandwidth allocation while satisfying heterogeneous QoS requirements across enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) slices in the uplink. The formulated problem is NP-hard, rendering global optimality computationally intractable. We decompose it into two sub-problems and solve them via computationally efficient heuristics within a sequential framework. We propose (i) a hierarchical admission control scheme that selectively admits UEs under bandwidth scarcity, prioritizing URLLC to ensure latency-sensitive QoS compliance, and (ii) an iterative gradient-based bandwidth allocation scheme that transfers bandwidth across slices guided by marginal utility and reallocates resources within slices. Simulation results demonstrate that the proposed scheme achieves near-optimal performance, deviating from a CVX-based benchmark by at most 2.2% in weighted sum-rate while reducing runtime by 99.7%, thereby enabling practical real-time deployment. Compared to a baseline round-robin scheme without admission control, the proposed approach achieves up to 1085% and 7% higher success rates for eMBB and URLLC slices, respectively, by intentionally sacrificing sum-rate to guarantee QoS. Sensitivity analysis further reveals that the proposed solution adapts effectively to diverse eMBB/URLLC traffic compositions, maintaining 47-51% eMBB and 93-94% URLLC success rates across varying load scenarios, confirming its robustness for resource-constrained large-scale deployments.
https://arxiv.org/abs/2601.12689
Academic Papers
svg
947ccc41858a0de720dc5ad248e986ae68980ca78347280526d355f88eed6a1e
2026-01-21T00:00:00-05:00
"Are we writing an advice column for Spock here?" Understanding Stereotypes in AI Advice for Autistic Users
arXiv:2601.12690v1 Announce Type: new Abstract: Autistic individuals sometimes disclose autism when asking LLMs for social advice, hoping for more personalized responses. However, they also recognize that these systems may reproduce stereotypes, raising uncertainty about the risks and benefits of disclosure. We conducted a mixed-methods study combining a large-scale LLM audit experiment with interviews involving 11 autistic participants. We developed a six-step pipeline operationalizing 12 documented autism stereotypes into decision-making scenarios framed as users requesting advice (e.g., "Should I do A or B?"). We generated 345,000 responses from six LLMs and measured how advice shifted when prompts disclosed autism versus when they did not. When autism was disclosed, LLMs disproportionately recommended avoiding stereotypically stressful situations, including social events, confrontations, new experiences, and romantic relationships. While some participants viewed this as affirming, others criticized it as infantilizing or undermining opportunities for growth. Our study illuminates how the intermingling of affirmation and stereotyping complicates the personalization of LLMs.
https://arxiv.org/abs/2601.12690
Academic Papers
svg