| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 47b825f3095991c214d726926a67dbb4b3aa70fb07f97bdcfa552e5e5694abaa | 2026-01-20T00:00:00 | Fossil-fuel phase out is not enough: countries must remove atmospheric carbon | Nature, Published online: 20 January 2026; doi:10.1038/d41586-026-00211-w | https://www.nature.com/articles/d41586-026-00211-w | Academic Papers | svg |
| 6614c58cfb6f9a729c38367f91211998bb64974458a44d2c828588b4c0b2b574 | 2026-01-20T00:00:00 | Mistaken identity and the psychology of human recognition | Nature, Published online: 20 January 2026; doi:10.1038/d41586-026-00190-y | https://www.nature.com/articles/d41586-026-00190-y | Academic Papers | svg |
| 72d21de2b5da9929d424ca5930f5072091257e0ee731dda7236cecb3aba3b05a | 2026-01-20T00:00:00 | US Congress set to reject Trump’s sweeping science budget cuts | Nature, Published online: 20 January 2026; doi:10.1038/d41586-026-00163-1 | https://www.nature.com/articles/d41586-026-00163-1 | Academic Papers | svg |
| dec092ffbced0ffa9725ad60f18a6fed65a01b7180da0d41d5f945db4422b8e7 | 2026-01-20T00:00:00 | Trump one year on: How six US researchers plan to protect science amid chaos and cuts | Nature, Published online: 20 January 2026; doi:10.1038/d41586-026-00090-1 | https://www.nature.com/articles/d41586-026-00090-1 | Academic Papers | svg |
| cbf00e37a398f88631a1b82a9bd82c342d3c0495222be567f7307dd5de0be2bb | 2026-01-20T00:00:00 | The US is quitting 66 global agencies: what does it mean for science? | Nature, Published online: 20 January 2026; doi:10.1038/d41586-026-00102-0 | https://www.nature.com/articles/d41586-026-00102-0 | Academic Papers | svg |
| bbf571eb2955791480733dcc2990cf17c4f2b7833f49adc3bf9af15ec69966aa | 2026-01-20T00:00:00 | US science after a year of Trump: what has been lost and what remains | Nature, Published online: 20 January 2026; doi:10.1038/d41586-026-00088-9 | https://www.nature.com/articles/d41586-026-00088-9 | Academic Papers | svg |
| 1b996701cd20e0dd2a32aea31458806f7e8d332e2b0ecfe100604f590e831c3d | 2026-01-20T00:00:00 | ‘Shattered’: US scientists speak out about how Trump policies disrupted their careers | Nature, Published online: 20 January 2026; doi:10.1038/d41586-026-00091-0 | https://www.nature.com/articles/d41586-026-00091-0 | Academic Papers | svg |
| d26e680d6bdbce6a47b98b98597205d4b9cecc35ffb48a0ff758262a919789c8 | 2026-01-19T00:00:00 | Collective intelligence for AI-assisted chemical synthesis | Nature, Published online: 19 January 2026; doi:10.1038/s41586-026-10131-4 | https://www.nature.com/articles/s41586-026-10131-4 | Academic Papers | svg |
| 51a83b2e9258f2f9853640f89c5736413d46ae7e62c9d3063039a96aaa63a7f1 | 2026-01-19T00:00:00 | Publisher Correction: A fault-tolerant neutral-atom architecture for universal quantum computation | Nature, Published online: 19 January 2026; doi:10.1038/s41586-026-10108-3 | https://www.nature.com/articles/s41586-026-10108-3 | Academic Papers | svg |
| 4c618c7e6cd9f0c845e76a77cdfd7fe8569e2328735aefc75df8053778da4930 | 2026-01-19T00:00:00 | Editorial Expression of Concern: *En passant* neurotrophic action of an intermediate axonal target in the developing mammalian CNS | Nature, Published online: 19 January 2026; doi:10.1038/s41586-025-10080-4 | https://www.nature.com/articles/s41586-025-10080-4 | Academic Papers | svg |
| f2d7b271b37ce3291f7dd358662471d9070fde0eb21077953525621707a7f5ea | 2026-01-19T00:00:00 | Author Correction: An autonomous laboratory for the accelerated synthesis of inorganic materials | Nature, Published online: 19 January 2026; doi:10.1038/s41586-025-09992-y | https://www.nature.com/articles/s41586-025-09992-y | Academic Papers | svg |
| 4071db67aa41b354aa1fba6a883db5a68603c63257831451b962c42a6166855c | 2026-01-19T00:00:00 | Floating science stations: my month on a research vessel looking after buoys | Nature, Published online: 19 January 2026; doi:10.1038/d41586-026-00189-5 | https://www.nature.com/articles/d41586-026-00189-5 | Academic Papers | svg |
| bccfb78b8b98e662ae52d8da00ad2e24eb252264ec23111d22db564dc7aec528 | 2026-01-19T00:00:00 | ‘Greed is the iron cage of our times’ — why nationalism is here to stay | Nature, Published online: 19 January 2026; doi:10.1038/d41586-026-00186-8 | https://www.nature.com/articles/d41586-026-00186-8 | Academic Papers | svg |
| cfcba1d5f9b7eded7571f0a795316c783277204f49ed4512047e9e8e0ebb4935 | 2026-01-19T00:00:00 | Can ‘toxic masculinity’ be measured? Scientists try to quantify controversial term | Nature, Published online: 19 January 2026; doi:10.1038/d41586-026-00144-4 | https://www.nature.com/articles/d41586-026-00144-4 | Academic Papers | svg |
| b6ea0522030107eb4bf672f056503945e761a54af63226b845e8045e5c350ac7 | 2026-01-19T00:00:00 | Forget formalism: mathematics was built on infighting and emotional turmoil | Nature, Published online: 19 January 2026; doi:10.1038/d41586-026-00187-7 | https://www.nature.com/articles/d41586-026-00187-7 | Academic Papers | svg |
| 6cc57c4f5dc05bee74e1734ca20b8dd0b47b5a8b54b1aacaec29468f27f17740 | 2026-01-19T00:00:00 | Daily briefing: Gifted dogs have word-learning skills on a par with human toddlers | Nature, Published online: 19 January 2026; doi:10.1038/d41586-026-00213-8 | https://www.nature.com/articles/d41586-026-00213-8 | Academic Papers | svg |
| 0233e0f2f2290c1fde0c6b187e39daec9ff2f7d31923e9a739f1ca5d0078fd1b | 2026-01-19T00:00:00 | I’m going to halve my publication output. You should consider slow science, too | Nature, Published online: 19 January 2026; doi:10.1038/d41586-025-04061-w | https://www.nature.com/articles/d41586-025-04061-w | Academic Papers | svg |
| 495f54e274c4b7b3f02ac405d8b04b990a78d98074626c5e268198362659a22b | 2026-01-16T00:00:00 | Daily briefing: Symbols on ancient pottery could be earliest evidence of mathematics | Nature, Published online: 16 January 2026; doi:10.1038/d41586-026-00201-y | https://www.nature.com/articles/d41586-026-00201-y | Academic Papers | svg |
| 2e0c502be812b88dbe9cdf4bbfca1a274afa28e70920ee6a06ef85ce5e060d7c | 2026-01-21T00:00:00-05:00 | SynQP: A Framework and Metrics for Evaluating the Quality and Privacy Risk of Synthetic Data | arXiv:2601.12124v1 Announce Type: new Abstract: The use of synthetic data in health applications raises privacy concerns, yet the lack of open frameworks for privacy evaluations has slowed its adoption. A major challenge is the absence of accessible benchmark datasets for evaluating privacy risks, due to difficulties in acquiring sensitive data. To address this, we introduce SynQP, an open framework for benchmarking privacy in synthetic data generation (SDG) using simulated sensitive data, ensuring that original data remains confidential. We also highlight the need for privacy metrics that fairly account for the probabilistic nature of machine learning models. As a demonstration, we use SynQP to benchmark CTGAN and propose a new identity disclosure risk metric that offers a more accurate estimation of privacy risks compared to existing approaches. Our work provides a critical tool for improving the transparency and reliability of privacy evaluations, enabling safer use of synthetic data in health-related applications. In our quality evaluations, non-private models achieved near-perfect machine-learning efficacy \(\ge0.97\). Our privacy assessments (Table II) reveal that DP consistently lowers both identity disclosure risk (SD-IDR) and membership-inference attack risk (SD-MIA), with all DP-augmented models staying below the 0.09 regulatory threshold. Code available at https://github.com/CAN-SYNH/SynQP | https://arxiv.org/abs/2601.12124 | Academic Papers | svg |
| 5f5a59a3a6931e72aebe1598b66edc3eb8f6c24cb1bbe2a35de472a35d529ab9 | 2026-01-21T00:00:00-05:00 | UniMo: Unified Motion Generation and Understanding with Chain of Thought | arXiv:2601.12126v1 Announce Type: new Abstract: Existing 3D human motion generation and understanding methods often exhibit limited interpretability, restricting effective mutual enhancement between these inherently related tasks. While current unified frameworks based on large language models (LLMs) leverage linguistic priors, they frequently encounter challenges in semantic alignment and task coherence. Moreover, the next-token prediction paradigm in LLMs is ill-suited for motion sequences, causing cumulative prediction errors. To address these limitations, we propose UniMo, a novel framework that integrates motion-language information and interpretable chain of thought (CoT) reasoning into the LLM via supervised fine-tuning (SFT). We further introduce reinforcement learning with Group Relative Policy Optimization (GRPO) as a post-training strategy that optimizes over groups of tokens to enforce structural correctness and semantic alignment, mitigating cumulative errors in motion token prediction. Extensive experiments demonstrate that UniMo significantly outperforms existing unified and task-specific models, achieving state-of-the-art performance in both motion generation and understanding. | https://arxiv.org/abs/2601.12126 | Academic Papers | svg |
| 86ab09de93d08e481199ebf3a17cfb5fcf968d9afe4f3be55c69e93e7d7f0aaf | 2026-01-21T00:00:00-05:00 | SolarGPT-QA: A Domain-Adaptive Large Language Model for Educational Question Answering in Space Weather and Heliophysics | arXiv:2601.12131v1 Announce Type: new Abstract: Solar activity, including solar flares, coronal mass ejections (CMEs), and geomagnetic storms, can significantly impact satellites, aviation, power grids, data centers, and space missions. Extreme solar events can cause substantial economic damage if not predicted in advance, highlighting the importance of accurate forecasting and effective education in space science. Although large language models (LLMs) perform well on general tasks, they often lack domain-specific knowledge and pedagogical capability to clearly explain complex space science concepts. We introduce SolarGPT-QA, a question answering system based on a domain-adapted large language model built on the LLaMA-3 base model. The model is trained using scientific literature and large-scale question-answer data generated with GPT-4 and refined using Grok-3 in a student-friendly storytelling style. Human pairwise evaluations show that SolarGPT-QA outperforms general-purpose models in zero-shot settings and achieves competitive performance compared to instruction-tuned models for educational explanations in space weather and heliophysics. A small pilot student comprehension study further suggests improved clarity and accessibility of the generated explanations. Ablation experiments indicate that combining domain-adaptive pretraining with pedagogical fine-tuning is important for balancing scientific accuracy and educational effectiveness. This work represents an initial step toward a broader SolarGPT framework for space science education and forecasting. | https://arxiv.org/abs/2601.12131 | Academic Papers | svg |
| 83a5c8281d1a1065843d61a2b195bfc715c103fee3bb741a8971753cbdaba195 | 2026-01-21T00:00:00-05:00 | Bengali Text Classification: An Evaluation of Large Language Model Approaches | arXiv:2601.12132v1 Announce Type: new Abstract: Bengali text classification is a significant task in natural language processing (NLP), where text is categorized into predefined labels. Unlike English, Bengali faces challenges due to the lack of extensive annotated datasets and pre-trained language models. This study explores the effectiveness of large language models (LLMs) in classifying Bengali newspaper articles. The dataset used, obtained from Kaggle, consists of articles from Prothom Alo, a major Bangladeshi newspaper. Three instruction-tuned LLMs, LLaMA 3.1 8B Instruct, LLaMA 3.2 3B Instruct, and Qwen 2.5 7B Instruct, were evaluated for this task under the same classification framework. Among the evaluated models, Qwen 2.5 achieved the highest classification accuracy of 72%, showing particular strength in the "Sports" category. In comparison, LLaMA 3.1 and LLaMA 3.2 attained accuracies of 53% and 56%, respectively. The findings highlight the effectiveness of LLMs in Bengali text classification, despite the scarcity of resources for Bengali NLP. Future research will focus on exploring additional models, addressing class imbalance issues, and refining fine-tuning approaches to improve classification performance. | https://arxiv.org/abs/2601.12132 | Academic Papers | svg |
| fc85b2ab0b3edecb3b49912de190ecd173fef1725f0569a5b2d60f182c86abba | 2026-01-21T00:00:00-05:00 | Human-Human-AI Triadic Programming: Uncovering the Role of AI Agent and the Value of Human Partner in Collaborative Learning | arXiv:2601.12134v1 Announce Type: new Abstract: As AI assistance becomes embedded in programming practice, researchers have increasingly examined how these systems help learners generate code and work more efficiently. However, these studies often position AI as a replacement for human collaboration and overlook the social and learning-oriented aspects that emerge in collaborative programming. Our work introduces human-human-AI (HHAI) triadic programming, where an AI agent serves as an additional collaborator rather than a substitute for a human partner. Through a within-subjects study with 20 participants, we show that triadic collaboration enhances collaborative learning and social presence compared to the dyadic human-AI (HAI) baseline. In the triadic HHAI conditions, participants relied significantly less on AI-generated code in their work. This effect was strongest in the HHAI-shared condition, where participants had an increased sense of responsibility to understand AI suggestions before applying them. These findings demonstrate how triadic settings activate socially shared regulation of learning by making AI use visible and accountable to a human peer, suggesting that AI systems that augment rather than automate peer collaboration can better preserve the learning processes that collaborative programming relies on. | https://arxiv.org/abs/2601.12134 | Academic Papers | svg |
| 368a72f476f145ab0eb6b3be0d8060cbb37f69c86dba86770229cd003c8c62be | 2026-01-21T00:00:00-05:00 | CoSMeTIC: Zero-Knowledge Computational Sparse Merkle Trees with Inclusion-Exclusion Proofs for Clinical Research | arXiv:2601.12136v1 Announce Type: new Abstract: Analysis of clinical data is a cornerstone of biomedical research with applications in areas such as genomic testing and response characterization of therapeutic drugs. Maintaining strict privacy controls is essential because such data typically contains personally identifiable health information of patients. At the same time, regulatory compliance often requires study managers to demonstrate the integrity and authenticity of participant data used in analyses. Balancing these competing requirements, privacy preservation and verifiable accountability, remains a critical challenge. In this paper, we present CoSMeTIC, a zero-knowledge computational framework that proposes computational Sparse Merkle Trees (SMTs) as a means to generate verifiable inclusion and exclusion proofs for individual participants' data in clinical studies. We formally analyze the zero-knowledge properties of CoSMeTIC and evaluate its computational efficiency through extensive experiments. Using the Kolmogorov-Smirnov and likelihood-ratio hypothesis tests, along with logistic-regression-based genomic analyses on real-world Huntington's disease datasets, we demonstrate that CoSMeTIC achieves strong privacy guarantees while maintaining statistical fidelity. Our results suggest that CoSMeTIC provides a scalable and practical alternative for achieving regulatory compliance with rigorous privacy protection in large-scale clinical research. | https://arxiv.org/abs/2601.12136 | Academic Papers | svg |
| 038244204fdb9c38f96010975de4caeb57079e6cdedca2dc6e5dbba81717b732 | 2026-01-21T00:00:00-05:00 | EMoE: Eigenbasis-Guided Routing for Mixture-of-Experts | arXiv:2601.12137v1 Announce Type: new Abstract: The relentless scaling of deep learning models has led to unsustainable computational demands, positioning Mixture-of-Experts (MoE) architectures as a promising path towards greater efficiency. However, MoE models are plagued by two fundamental challenges: 1) a load imbalance problem known as the "rich get richer" phenomenon, where a few experts are over-utilized, and 2) an expert homogeneity problem, where experts learn redundant representations, negating their purpose. Current solutions typically employ an auxiliary load-balancing loss that, while mitigating imbalance, often exacerbates homogeneity by enforcing uniform routing at the expense of specialization. To resolve this, we introduce the Eigen-Mixture-of-Experts (EMoE), a novel architecture that leverages a routing mechanism based on a learned orthonormal eigenbasis. EMoE projects input tokens onto this shared eigenbasis and routes them based on their alignment with the principal components of the feature space. This principled, geometric partitioning of data intrinsically promotes both balanced expert utilization and the development of diverse, specialized experts, all without the need for a conflicting auxiliary loss function. Our code is publicly available at https://github.com/Belis0811/EMoE. | https://arxiv.org/abs/2601.12137 | Academic Papers | svg |
| 445724db67b79726eb21dedb3604b033ffed8e72a815676c50d959075ac50863 | 2026-01-21T00:00:00-05:00 | DriveSafe: A Hierarchical Risk Taxonomy for Safety-Critical LLM-Based Driving Assistants | arXiv:2601.12138v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly integrated into vehicle-based digital assistants, where unsafe, ambiguous, or legally incorrect responses can lead to serious safety, ethical, and regulatory consequences. Despite growing interest in LLM safety, existing taxonomies and evaluation frameworks remain largely general-purpose and fail to capture the domain-specific risks inherent to real-world driving scenarios. In this paper, we introduce DriveSafe, a hierarchical, four-level risk taxonomy designed to systematically characterize safety-critical failure modes of LLM-based driving assistants. The taxonomy comprises 129 fine-grained atomic risk categories spanning technical, legal, societal, and ethical dimensions, grounded in real-world driving regulations and safety principles and reviewed by domain experts. To validate the safety relevance and realism of the constructed prompts, we evaluate their refusal behavior across six widely deployed LLMs. Our analysis shows that the evaluated models often fail to appropriately refuse unsafe or non-compliant driving-related queries, underscoring the limitations of general-purpose safety alignment in driving contexts. | https://arxiv.org/abs/2601.12138 | Academic Papers | svg |
| 839a7c93c091d27c5fa6399610f14b6fb8e541f6a08a7617613618e133aea437 | 2026-01-21T00:00:00-05:00 | TIDE: A Trace-Informed Depth-First Exploration for Planning with Temporally Extended Goals | arXiv:2601.12141v1 Announce Type: new Abstract: Task planning with temporally extended goals (TEGs) is a critical challenge in AI and robotics, enabling agents to achieve complex sequences of objectives over time rather than addressing isolated, immediate tasks. Linear Temporal Logic on finite traces (LTLf) provides a robust formalism for encoding these temporal goals. Traditional LTLf task planning approaches often transform the temporal planning problem into a classical planning problem with reachability goals, which are then solved using off-the-shelf planners. However, these methods often lack informed heuristics to provide a guided search for temporal goals. We introduce TIDE (Trace-Informed Depth-first Exploration), a novel approach that addresses this limitation by decomposing a temporal problem into a sequence of smaller, manageable reach-avoid sub-problems, each solvable using an off-the-shelf planner. TIDE identifies and prioritizes promising automaton traces within the domain graph, using cost-driven heuristics to guide exploration. Its adaptive backtracking mechanism systematically recovers from failed plans by recalculating costs and penalizing infeasible transitions, ensuring completeness and efficiency. Experimental results demonstrate that TIDE achieves promising performance and is a valuable addition to the portfolio of planning methods for temporally extended goals. | https://arxiv.org/abs/2601.12141 | Academic Papers | svg |
| a6f99938c6084692e29805f8b80973ab8f0c271582c481460f154a1df5ad856b | 2026-01-21T00:00:00-05:00 | Neural Process-Based Reactive Controller for Autonomous Racing | arXiv:2601.12143v1 Announce Type: new Abstract: Attention-based neural architectures have become central to state-of-the-art methods in real-time nonlinear control. As these data-driven models continue to be integrated into increasingly safety-critical domains, ensuring statistically grounded and provably safe decision-making becomes essential. This paper introduces a novel reactive control framework for gap-based navigation using the Attentive Neural Process (AttNP) and a physics-informed extension, the PI-AttNP. Both models are evaluated in a simulated F1TENTH-style Ackermann steering racecar environment, chosen as a fast-paced proxy for safety-critical autonomous driving scenarios. The PI-AttNP augments the AttNP architecture with approximate model-based priors to inject physical inductive bias, enabling faster convergence and improved prediction accuracy suited for real-time control. To further ensure safety, we derive and implement a control barrier function (CBF)-based filtering mechanism that analytically enforces collision avoidance constraints. This CBF formulation is fully compatible with the learned AttNP controller and generalizes across a wide range of racing scenarios, providing a lightweight and certifiable safety layer. Our results demonstrate competitive closed-loop performance while ensuring real-time constraint satisfaction. | https://arxiv.org/abs/2601.12143 | Academic Papers | svg |
| 12135c426ccb4655c478f61fee33eb237d7c89a82d05126b21664af14b99e912 | 2026-01-21T00:00:00-05:00 | Threshold Differential Attention for Sink-Free, Ultra-Sparse, and Non-Dispersive Language Modeling | arXiv:2601.12145v1 Announce Type: new Abstract: Softmax attention struggles with long contexts due to structural limitations: the strict sum-to-one constraint forces attention sinks on irrelevant tokens, and probability mass disperses as sequence lengths increase. We tackle these problems with Threshold Differential Attention (TDA), a sink-free attention mechanism that achieves ultra-sparsity and improved robustness at longer sequence lengths without the computational overhead of projection methods or the performance degradation caused by noise accumulation of standard rectified attention. TDA applies row-wise extreme-value thresholding with a length-dependent gate, retaining only exceedances. Inspired by the differential transformer, TDA also subtracts an inhibitory view to enhance expressivity. Theoretically, we prove that TDA controls the expected number of spurious survivors per row to $O(1)$ and that consensus spurious matches across independent views vanish as context grows. Empirically, TDA produces $>99\%$ exact zeros and eliminates attention sinks while maintaining competitive performance on standard and long-context benchmarks. | https://arxiv.org/abs/2601.12145 | Academic Papers | svg |
| 7d57a69b1bafa609c01b8ec45e71d07cfacf2552176ce8e7b22f69cafbaa0e49 | 2026-01-21T00:00:00-05:00 | From LLMs to Agents in Programming: The Impact of Providing an LLM with a Compiler | arXiv:2601.12146v1 Announce Type: new Abstract: Large Language Models have demonstrated a remarkable capability in natural language and program generation and software development. However, the source code generated by the LLMs does not always meet quality requirements and may fail to compile. Therefore, many studies evolve into agents that can reason about the problem before generating the source code for the solution. The goal of this paper is to study the degree to which such agents benefit from access to software development tools, in our case, a `gcc` compiler. We conduct a computational experiment on the RosettaCode dataset, on 699 programming tasks in C. We evaluate how the integration with a compiler shifts the role of the language model from a passive generator to an active agent capable of iteratively developing runnable programs based on feedback from the compiler. We evaluated 16 language models with sizes ranging from small (135 million) to medium (3 billion) and large (70 billion). Our results show that access to a compiler improved the compilation success by 5.3 to 79.4 percentage points without affecting the semantics of the generated program. Syntax errors dropped by 75%, and errors related to undefined references dropped by 87% for the tasks where the agents outperformed the baselines. We also observed that in some cases, smaller models with a compiler outperform larger models with a compiler. We conclude that it is essential for LLMs to have access to software engineering tools to enhance their performance and reduce the need for large models in software engineering, such as reducing our energy footprint. | https://arxiv.org/abs/2601.12146 | Academic Papers | svg |
| 0e4ce11b1caf84b9c8dcdbedc0f4ceea9354f31bcd28f6d1dc732086d72247d7 | 2026-01-21T00:00:00-05:00 | Segment and Matte Anything in a Unified Model | arXiv:2601.12147v1 Announce Type: new Abstract: Segment Anything (SAM) has recently pushed the boundaries of segmentation by demonstrating zero-shot generalization and flexible prompting after training on over one billion masks. Despite this, its mask prediction accuracy often falls short of the precision required in real-world applications. While several refinement modules have been proposed to boost SAM's segmentation quality, achieving highly accurate object delineation within a single, unified framework remains an open challenge. Furthermore, interactive image matting, which aims to generate fine-grained alpha mattes guided by diverse user hints, has not yet been explored in the context of SAM. Insights from recent studies highlight strong correlations between segmentation and matting, suggesting the feasibility of a unified model capable of both tasks. In this paper, we introduce Segment And Matte Anything (SAMA), a lightweight extension of SAM that delivers high-quality interactive image segmentation and matting with minimal extra parameters. Our Multi-View Localization Encoder (MVLE) captures detailed features from local views, while the Localization Adapter (Local-Adapter) refines mask outputs by recovering subtle boundary details. We also incorporate two prediction heads for each task into the architecture to generate segmentation and matting masks, simultaneously. Trained on a diverse dataset aggregated from publicly available sources, SAMA achieves state-of-the-art performance across multiple segmentation and matting benchmarks, showcasing its adaptability and effectiveness in a wide range of downstream tasks. | https://arxiv.org/abs/2601.12147 | Academic Papers | svg |
| 8aa3c3b10851fba335ad4b61affe39a3d57f856a31e522e6fed20e0edee52f06 | 2026-01-21T00:00:00-05:00 | Many Hands Make Light Work: An LLM-based Multi-Agent System for Detecting Malicious PyPI Packages | arXiv:2601.12148v1 Announce Type: new Abstract: Malicious code in open-source repositories such as PyPI poses a growing threat to software supply chains. Traditional rule-based tools often overlook the semantic patterns in source code that are crucial for identifying adversarial components. Large language models (LLMs) show promise for software analysis, yet their use in interpretable and modular security pipelines remains limited. This paper presents LAMPS, a multi-agent system that employs collaborative LLMs to detect malicious PyPI packages. The system consists of four role-specific agents for package retrieval, file extraction, classification, and verdict aggregation, coordinated through the CrewAI framework. A prototype combines a fine-tuned CodeBERT model for classification with LLaMA-3 agents for contextual reasoning. LAMPS has been evaluated on two complementary datasets: D1, a balanced collection of 6,000 setup.py files, and D2, a realistic multi-file dataset with 1,296 files and natural class imbalance. On D1, LAMPS achieves 97.7% accuracy, surpassing MPHunter--one of the state-of-the-art approaches. On D2, it reaches 99.5% accuracy and 99.5% balanced accuracy, outperforming RAG-based approaches and fine-tuned single-agent baselines. McNemar's test confirmed these improvements as highly significant. The results demonstrate the feasibility of distributed LLM reasoning for malicious code detection and highlight the benefits of modular multi-agent designs in software supply chain security. | https://arxiv.org/abs/2601.12148 | Academic Papers | svg |
| 3d93ce93e5a236b3c8d6b04fed9b63fce88fbdc339936084e4f5d764da0371c5 | 2026-01-21T00:00:00-05:00 | Principal Component Analysis-Based Terahertz Self-Supervised Denoising and Deblurring Deep Neural Networks | arXiv:2601.12149v1 Announce Type: new Abstract: Terahertz (THz) systems inherently introduce frequency-dependent degradation effects, resulting in low-frequency blurring and high-frequency noise in amplitude images. Conventional image processing techniques cannot simultaneously address both issues, and manual intervention is often required due to the unknown boundary between denoising and deblurring. To tackle this challenge, we propose a principal component analysis (PCA)-based THz self-supervised denoising and deblurring network (THz-SSDD). The network employs a Recorrupted-to-Recorrupted self-supervised learning strategy to capture the intrinsic features of noise by exploiting invariance under repeated corruption. PCA decomposition and reconstruction are then applied to restore images across both low and high frequencies. The performance of the THz-SSDD network was evaluated on four types of samples. Training requires only a small set of unlabeled noisy images, and testing across samples with different material properties and measurement modes demonstrates effective denoising and deblurring. Quantitative analysis further validates the network feasibility, showing improvements in image quality while preserving the physical characteristics of the original signals. | https://arxiv.org/abs/2601.12149 | Academic Papers | svg |
5e4a91a52ed4f5392a5fb4be686654ce822ee9c51de6b5045c223585c8b0fdc6
|
2026-01-21T00:00:00-05:00
|
Enhanced Diagnostic Performance via Large-Resolution Inference Optimization for Pathology Foundation Models
|
arXiv:2601.12150v1 Announce Type: new Abstract: Despite their prominent performance on tasks such as ROI classification and segmentation, many pathology foundation models remain constrained by a specific input size e.g. 224 x 224, creating substantial inefficiencies when applied to whole-slide images (WSIs), which span thousands of resolutions. A naive strategy is to either enlarge inputs or downsample the WSIs. However, enlarging inputs results in prohibitive GPU memory consumption, while downsampling alters the microns-per-pixel resolution and obscures critical morphological details. To overcome these limitations, we propose an space- and time- efficient inference strategy that sparsifies attention using spatially aware neighboring blocks and filters out non-informative tokens through global attention scores. This design substantially reduces GPU memory and runtime during high-resolution WSI inference while preserving and even improving the downstream performance, enabling inference at higher resolutions under the same GPU budget. The experimental results show that our method can achieves up to an 7.67% improvement in the ROI classification and compatible results in segmentation.
|
https://arxiv.org/abs/2601.12150
|
Academic Papers
|
svg
|
43022866ca9e3f62e08cbc23971162006d4a7ffb4b77395cddddcba304c2aca4
|
2026-01-21T00:00:00-05:00
|
Who Owns Creativity and Who Does the Work? Trade-offs in LLM-Supported Research Ideation
|
arXiv:2601.12152v1 Announce Type: new Abstract: LLM-based agents offer new potential to accelerate science and reshape research work. However, the quality of researcher contributions can vary significantly depending on human ability to steer agent behaviors. How can we best use these tools to augment scientific creativity without undermining aspects of contribution and ownership that drive research? To investigate this, we developed an agentic research ideation system integrating three roles -- Ideator, Writer, and Evaluator -- across three control levels -- Low, Medium, and Intensive. Our mixed-methods study with 54 researchers suggests three key findings in how LLM-based agents reshape scientific creativity: 1) perceived creativity support does not simply increase linearly with greater control; 2) human effort shifts from ideating to verifying ideas; and 3) ownership becomes a negotiated outcome between human and AI. Our findings suggest that LLM agent design should emphasize researcher empowerment, fostering a sense of ownership over strong ideas rather than reducing researchers to operating an automated AI-driven process.
|
https://arxiv.org/abs/2601.12152
|
Academic Papers
|
svg
|
7c5a1b3042f3715f9d29ec854b253e6ae0527aee46912a49249fbb1b60e8a410
|
2026-01-21T00:00:00-05:00
|
Analyzing Cancer Patients' Experiences with Embedding-based Topic Modeling and LLMs
|
arXiv:2601.12154v1 Announce Type: new Abstract: This study investigates the use of neural topic modeling and LLMs to uncover meaningful themes from patient storytelling data, offering insights that could contribute to more patient-oriented healthcare practices. We analyze a collection of transcribed interviews with cancer patients (132,722 words in 13 interviews). We first evaluate BERTopic and Top2Vec for individual interview summarization, using similar preprocessing, chunking, and clustering configurations to ensure a fair comparison on keyword extraction. LLMs (GPT-4) are then used for the subsequent topic-labeling step. Their outputs for a single interview (I0) are rated through a small-scale human evaluation focusing on coherence, clarity, and relevance. Based on the preliminary results and evaluation, BERTopic shows stronger performance and is selected for further experimentation using three clinically oriented embedding models. We then analyzed the full interview collection with the best model setting. Results show that domain-specific embeddings improved topic precision and interpretability, with BioClinicalBERT producing the most consistent results across transcripts. The global analysis of the full dataset of 13 interviews, using the BioClinicalBERT embedding model, reveals the most dominant topics throughout all 13 interviews, namely "Coordination and Communication in Cancer Care Management" and "Patient Decision-Making in Cancer Treatment Journey". Although the interviews are machine translations from Dutch to English, and clinical professionals were not involved in this evaluation, the findings suggest that neural topic modeling, particularly BERTopic, can help provide useful feedback to clinicians from patient interviews. This pipeline could support more efficient document navigation and strengthen the role of patients' voices in healthcare workflows.
|
https://arxiv.org/abs/2601.12154
|
Academic Papers
|
svg
|
98d7877819bfbc164bf259320abf0755946e799c581f310c2e0343617e6fa24a
|
2026-01-21T00:00:00-05:00
|
Inverse Rendering for High-Genus 3D Surface Meshes from Multi-view Images with Persistent Homology Priors
|
arXiv:2601.12155v1 Announce Type: new Abstract: Reconstructing 3D objects from images is inherently an ill-posed problem due to ambiguities in geometry, appearance, and topology. This paper introduces collaborative inverse rendering with persistent homology priors, a novel strategy that leverages topological constraints to resolve these ambiguities. By incorporating priors that capture critical features such as tunnel loops and handle loops, our approach directly addresses the difficulty of reconstructing high-genus surfaces. The collaboration between photometric consistency from multi-view images and homology-based guidance enables recovery of complex high-genus geometry while circumventing catastrophic failures such as collapsing tunnels or losing high-genus structure. Instead of neural networks, our method relies on gradient-based optimization within a mesh-based inverse rendering framework to highlight the role of topological priors. Experimental results show that incorporating persistent homology priors leads to lower Chamfer Distance (CD) and higher Volume IoU compared to state-of-the-art mesh-based methods, demonstrating improved geometric accuracy and robustness against topological failure.
|
https://arxiv.org/abs/2601.12155
|
Academic Papers
|
svg
|
aceb93f7090c675aed026f0b85af612efc34d86a6b3616549933533e0c2c9a83
|
2026-01-21T00:00:00-05:00
|
Biological Intuition on Digital Hardware: An RTL Implementation of Poisson-Encoded SNNs for Static Image Classification
|
arXiv:2601.12156v1 Announce Type: new Abstract: The deployment of Artificial Intelligence on edge devices (TinyML) is often constrained by the high power consumption and latency associated with traditional Artificial Neural Networks (ANNs) and their reliance on intensive Multiply-Accumulate (MAC) operations. Neuromorphic computing offers a compelling alternative by mimicking biological efficiency through event-driven processing. This paper presents the design and implementation of a cycle-accurate, hardware-oriented Spiking Neural Network (SNN) core implemented in SystemVerilog. Unlike conventional accelerators, this design utilizes a Leaky Integrate-and-Fire (LIF) neuron model powered by fixed-point arithmetic and bit-wise primitives (shifts and additions) to eliminate the need for complex floating-point hardware. The architecture features an on-chip Poisson encoder for stochastic spike generation and a novel active pruning mechanism that dynamically disables neurons post-classification to minimize dynamic power consumption. We demonstrate the hardware's efficacy through a fully connected layer implementation targeting digit classification. Simulation results indicate that the design achieves rapid convergence (89% accuracy) within limited timesteps while maintaining a significantly reduced computational footprint compared to traditional dense architectures. This work serves as a foundational building block for scalable, energy-efficient neuromorphic hardware on FPGA and ASIC platforms.
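The shift-and-add arithmetic described above can be pictured in a few lines: the leak becomes a right shift, integration an addition, and firing a compare-and-reset. A hypothetical software model of one timestep (the leak shift and threshold values are illustrative, not the paper's RTL configuration):

```python
def lif_step(v, spike_in, weight, leak_shift=4, threshold=1 << 12):
    """One timestep of a fixed-point Leaky Integrate-and-Fire neuron
    using only shifts, adds, and compares (toy sketch)."""
    v = v - (v >> leak_shift)        # leaky decay: v *= (1 - 2**-leak_shift)
    if spike_in:
        v += weight                  # integrate the weighted input spike
    if v >= threshold:
        return 0, True               # fire and reset the membrane potential
    return v, False
```

Because every operation is an integer shift, add, or compare, the same update maps directly onto MAC-free digital logic.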
|
https://arxiv.org/abs/2601.12156
|
Academic Papers
|
svg
|
eeb1cea37645a5214756b3f106f794da13a3e1db39401cfbe7c7538506be41f7
|
2026-01-21T00:00:00-05:00
|
Streaming Operator Inference for Model Reduction of Large-Scale Dynamical Systems
|
arXiv:2601.12161v1 Announce Type: new Abstract: Projection-based model reduction enables efficient simulation of complex dynamical systems by constructing low-dimensional surrogate models from high-dimensional data. The Operator Inference (OpInf) approach learns such reduced surrogate models through a two-step process: constructing a low-dimensional basis via Singular Value Decomposition (SVD) to compress the data, then solving a linear least-squares (LS) problem to infer reduced operators that govern the dynamics in this compressed space, all without access to the underlying code or full model operators, i.e., non-intrusively. Traditional OpInf operates as a batch learning method, where both the SVD and LS steps process all data simultaneously. This poses a barrier to deployment of the approach on large-scale applications where dataset sizes prevent the loading of all data into memory at once. Additionally, the traditional batch approach does not naturally allow model updates using new data acquired during online computation. To address these limitations, we propose Streaming OpInf, which learns reduced models from sequentially arriving data streams. Our approach employs incremental SVD for adaptive basis construction and recursive LS for streaming operator updates, eliminating the need to store complete data sets while enabling online model adaptation. The approach can flexibly combine different choices of streaming algorithms for numerical linear algebra: we systematically explore the impact of these choices both analytically and numerically to identify effective combinations for accurate reduced model learning. Numerical experiments on benchmark problems and a large-scale turbulent channel flow demonstrate that Streaming OpInf achieves accuracy comparable to batch OpInf while reducing memory requirements by over 99% and enabling dimension reductions exceeding 31,000x, resulting in orders-of-magnitude faster predictions.
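The recursive least-squares half of the streaming recipe can be sketched directly: each newly arriving snapshot updates the operator estimate and an inverse-Gram matrix in place, with no stored history. A generic rank-one RLS update (notation ours; the paper pairs this with incremental SVD for the adaptive basis, which is not shown):

```python
import numpy as np

def rls_update(theta, P, x, y):
    """Fold one sample (x, y) into the least-squares estimate theta.
    P tracks the inverse Gram matrix via the Sherman-Morrison formula."""
    x = x.reshape(-1, 1)
    Px = P @ x
    k = Px / (1.0 + x.T @ Px)              # gain vector
    theta = theta + k @ (y - x.T @ theta)  # correct by the prediction error
    P = P - k @ Px.T                       # rank-one downdate of P
    return theta, P
```

Starting from a large initial P (weak regularization), repeated calls converge to the batch least-squares solution while storing only the current estimate and P.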
|
https://arxiv.org/abs/2601.12161
|
Academic Papers
|
svg
|
086022e362e01357a2332be6ea71f6abcd5d242bf0d6bcea1bb5011741dd19b2
|
2026-01-21T00:00:00-05:00
|
The Language You Ask In: Language-Conditioned Ideological Divergence in LLM Analysis of Contested Political Documents
|
arXiv:2601.12164v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed as analytical tools across multilingual contexts, yet their outputs may carry systematic biases conditioned by the language of the prompt. This study presents an experimental comparison of LLM-generated political analyses of a Ukrainian civil society document, using semantically equivalent prompts in Russian and Ukrainian. Despite identical source material and parallel query structures, the resulting analyses varied substantially in rhetorical positioning, ideological orientation, and interpretive conclusions. The Russian-language output echoed narratives common in Russian state discourse, characterizing civil society actors as illegitimate elites undermining democratic mandates. The Ukrainian-language output adopted vocabulary characteristic of Western liberal-democratic political science, treating the same actors as legitimate stakeholders within democratic contestation. These findings demonstrate that prompt language alone can produce systematically different ideological orientations from identical models analyzing identical content, with significant implications for AI deployment in polarized information environments, cross-lingual research applications, and the governance of AI systems in multilingual societies.
|
https://arxiv.org/abs/2601.12164
|
Academic Papers
|
svg
|
d523d87f0685534888f6b3c97173f07377cca943437222a00ec5115b523d11c9
|
2026-01-21T00:00:00-05:00
|
Learning Legged MPC with Smooth Neural Surrogates
|
arXiv:2601.12169v1 Announce Type: new Abstract: Deep learning and model predictive control (MPC) can play complementary roles in legged robotics. However, integrating learned models with online planning remains challenging. When dynamics are learned with neural networks, three key difficulties arise: (1) stiff transitions from contact events may be inherited from the data; (2) additional non-physical local nonsmoothness can occur; and (3) training datasets can induce non-Gaussian model errors due to rapid state changes. We address (1) and (2) by introducing the smooth neural surrogate, a neural network with tunable smoothness designed to provide informative predictions and derivatives for trajectory optimization through contact. To address (3), we train these models using a heavy-tailed likelihood that better matches the empirical error distributions observed in legged-robot dynamics. Together, these design choices substantially improve the reliability, scalability, and generalizability of learned legged MPC. Across zero-shot locomotion tasks of increasing difficulty, smooth neural surrogates with robust learning yield consistent reductions in cumulative cost on simple, well-conditioned behaviors (typically 10-50%), while providing substantially larger gains in regimes where standard neural dynamics often fail outright. In these regimes, smoothing enables reliable execution (from 0/5 to 5/5 success) and produces about 2-50x lower cumulative cost, reflecting orders-of-magnitude absolute improvements in robustness rather than incremental performance gains.
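The idea of "tunable smoothness" can be pictured with a scalar toy: replace a hard contact nonlinearity max(0, -z) with a softplus whose temperature controls how sharply the force turns on, giving a trajectory optimizer informative gradients through the transition. This is our illustrative analogy, not the paper's network architecture:

```python
import numpy as np

def smooth_contact(z, beta=10.0):
    """Softplus relaxation of the hard contact force max(0, -z);
    larger beta recovers the stiff nonsmooth limit (toy illustration)."""
    return np.log1p(np.exp(-beta * z)) / beta
```

Sweeping beta interpolates between a fully smooth surrogate (easy to differentiate, less faithful) and the stiff contact limit, which is the trade-off the surrogate's tunable smoothness exposes.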
|
https://arxiv.org/abs/2601.12169
|
Academic Papers
|
svg
|
35bfe1ff893daa4d0cfbc6e0070bc68a00e92a6a9c7377c1d2438462e591e789
|
2026-01-21T00:00:00-05:00
|
Federated Learning for the Design of Parametric Insurance Indices under Heterogeneous Renewable Production Losses
|
arXiv:2601.12178v1 Announce Type: new Abstract: We propose a federated learning framework for the calibration of parametric insurance indices under heterogeneous renewable energy production losses. Producers locally model their losses using Tweedie generalized linear models and private data, while a common index is learned through federated optimization without sharing raw observations. The approach accommodates heterogeneity in variance and link functions and directly minimizes a global deviance objective in a distributed setting. We implement and compare FedAvg, FedProx and FedOpt, and benchmark them against an existing approximation-based aggregation method. An empirical application to solar power production in Germany shows that federated learning recovers comparable index coefficients under moderate heterogeneity, while providing a more general and scalable framework.
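Of the three aggregation schemes compared, FedAvg is the simplest to sketch: each round, clients' locally updated coefficient vectors are averaged with weights proportional to their sample counts, so raw observations never leave the producer. A minimal version (names ours; the local Tweedie-deviance fits are not shown):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """One FedAvg aggregation round: sample-size-weighted average of
    the clients' locally updated parameter vectors (toy sketch)."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_params, client_sizes))
```

FedProx and FedOpt modify this loop (a proximal term in the local objective, a server-side optimizer on the aggregated update) but share the same communication pattern.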
|
https://arxiv.org/abs/2601.12178
|
Academic Papers
|
svg
|
261556517a374fb4e989282cf8c2a7ccc87688bf03b867f3b02e6ecc361bccb7
|
2026-01-21T00:00:00-05:00
|
Tolerance Principle and Small Language Model Learning
|
arXiv:2601.12179v1 Announce Type: new Abstract: Modern language models like GPT-3, BERT, and LLaMA require massive training data, yet with sufficient training they reliably learn to distinguish grammatical from ungrammatical sentences. Children as young as 14 months already have the capacity to learn abstract grammar rules from very few exemplars, even in the presence of non-rule-following exceptions. Yang's (2016) Tolerance Principle defines a precise threshold for how many exceptions a rule can tolerate and still be learnable. The present study explored the minimal amount and quality of training data necessary for rules to be generalized by a transformer-based language model, to test the predictions of the Tolerance Principle. We trained BabyBERTa (Huebner et al. 2021), a transformer model optimized for small datasets, on artificial grammars. The training sets varied in size, number of unique sentence types, and proportion of rule-following versus exception exemplars. We found that, unlike human infants, BabyBERTa's learning dynamics do not align with the Tolerance Principle.
|
https://arxiv.org/abs/2601.12179
|
Academic Papers
|
svg
|
fa118e54a35aa7692d2c06455c977b21cd4ba2f270ce69cbe509cb81d198a86b
|
2026-01-21T00:00:00-05:00
|
VidTune: Creating Video Soundtracks with Generative Music and Contextual Thumbnails
|
arXiv:2601.12180v1 Announce Type: new Abstract: Music shapes the tone of videos, yet creators often struggle to find soundtracks that match their video's mood and narrative. Recent text-to-music models let creators generate music from text prompts, but our formative study (N=8) shows creators struggle to construct diverse prompts, quickly review and compare tracks, and understand their impact on the video. We present VidTune, a system that supports soundtrack creation by generating diverse music options from a creator's prompt and producing contextual thumbnails for rapid review. VidTune extracts representative video subjects to ground thumbnails in context, maps each track's valence and energy onto visual cues like color and brightness, and depicts prominent genres and instruments. Creators can refine tracks through natural language edits, which VidTune expands into new generations. In a controlled user study (N=12) and an exploratory case study (N=6), participants found VidTune helpful for efficiently reviewing and comparing music options and described the process as playful and enriching.
|
https://arxiv.org/abs/2601.12180
|
Academic Papers
|
svg
|
be6d1530ecf26768fd190cd198e77950eadc35a8786062c87329c7bf9fb3bdaa
|
2026-01-21T00:00:00-05:00
|
Negotiating Digital Identities with AI Companions: Motivations, Strategies, and Emotional Outcomes
|
arXiv:2601.12181v1 Announce Type: new Abstract: AI companions enable deep emotional relationships by engaging a user's sense of identity, but they also pose risks like unhealthy emotional dependence. Mitigating these risks requires first understanding the underlying process of identity construction and negotiation with AI companions. Focusing on Character.AI (C.AI), a popular AI companion, we conducted an LLM-assisted thematic analysis of 22,374 online discussions on its subreddit. Using Identity Negotiation Theory as an analytical lens, we identified a three-stage process: 1) five user motivations; 2) an identity negotiation process involving three communication expectations and four identity co-construction strategies; and 3) three emotional outcomes. Our findings surface the identity work users perform as both performers and directors to co-construct identities in negotiation with C.AI. This process takes place within a socio-emotional sandbox where users can experiment with social roles and express emotions without non-human partners. Finally, we offer design implications for emotionally supporting users while mitigating the risks.
|
https://arxiv.org/abs/2601.12181
|
Academic Papers
|
svg
|
2132431e39cdc3e79ef09e063632b44f03db17b55b545048fcff2fa9b4b532ad
|
2026-01-21T00:00:00-05:00
|
Aletheia: What Makes RLVR For Code Verifiers Tick?
|
arXiv:2601.12186v1 Announce Type: new Abstract: Multi-domain thinking verifiers trained via Reinforcement Learning from Verifiable Rewards (RLVR) are a prominent fixture of the Large Language Model (LLM) post-training pipeline, owing to their ability to robustly rate and rerank model outputs. However, the adoption of such verifiers towards code generation has been comparatively sparse, with execution feedback constituting the dominant signal. Nonetheless, code verifiers remain valuable toward judging model outputs in scenarios where execution feedback is hard to obtain and are a potentially powerful addition to the code generation post-training toolbox. To this end, we create and open-source Aletheia, a controlled testbed that enables execution-grounded evaluation of code verifiers' robustness across disparate policy models and covariate shifts. We examine components of the RLVR-based verifier training recipe widely credited for its success: (1) intermediate thinking traces, (2) learning from negative samples, and (3) on-policy training. While experiments show the optimality of RLVR, we uncover important opportunities to simplify the recipe. Particularly, despite code verification exhibiting positive training- and inference-time scaling, on-policy learning stands out as the key component at small verifier sizes, and thinking-based training emerges as the most important component at larger scales.
|
https://arxiv.org/abs/2601.12186
|
Academic Papers
|
svg
|
fe4a4cee132c648203bc2ebf3c9eccea00b105f7533f1153a018ff51cc7c7606
|
2026-01-21T00:00:00-05:00
|
VIRTUE: Versatile Video Retrieval Through Unified Embeddings
|
arXiv:2601.12193v1 Announce Type: new Abstract: Modern video retrieval systems are expected to handle diverse tasks ranging from corpus-level retrieval and fine-grained moment localization to flexible multimodal querying. Specialized architectures achieve strong retrieval performance by training modality-specific encoders on massive datasets, but they lack the ability to process composed multimodal queries. In contrast, multimodal LLM (MLLM)-based methods support rich multimodal search but their retrieval performance remains well below that of specialized systems. We present VIRTUE, an MLLM-based versatile video retrieval framework that integrates corpus and moment-level retrieval capabilities while accommodating composed multimodal queries within a single architecture. We use contrastive alignment of visual and textual embeddings generated using a shared MLLM backbone to facilitate efficient embedding-based candidate search. Our embedding model, trained efficiently using low-rank adaptation (LoRA) on 700K paired visual-text data samples, surpasses other MLLM-based methods on zero-shot video retrieval tasks. Additionally, we demonstrate that the same model can be adapted without further training to achieve competitive results on zero-shot moment retrieval, and state of the art results for zero-shot composed video retrieval. With additional training for reranking candidates identified in the embedding-based search, our model substantially outperforms existing MLLM-based retrieval systems and achieves retrieval performance comparable to state of the art specialized models which are trained on orders of magnitude larger data.
|
https://arxiv.org/abs/2601.12193
|
Academic Papers
|
svg
|
9ee3b806a07fca7e5df441c5f193054f4ada3bad246721020f222319192972d7
|
2026-01-21T00:00:00-05:00
|
Coherent Comparison as Information Cost: A Cost-First Ledger Framework for Discrete Dynamics
|
arXiv:2601.12194v1 Announce Type: new Abstract: We develop an information-theoretic framework for discrete dynamics grounded in a comparison-cost functional on ratios. Given two quantities compared via their ratio \(x=a/b\), we assign a cost \(F(x)\) measuring deviation from equilibrium (\(x=1\)). Requiring coherent composition under multiplicative chaining imposes a d'Alembert functional equation; together with normalization (\(F(1)=0\)) and quadratic calibration at unity, this yields a unique reciprocal cost functional (proved in a companion paper): \[ J(x) = \tfrac{1}{2}\bigl(x + x^{-1}\bigr) - 1. \] This cost exhibits reciprocity \(J(x)=J(x^{-1})\), vanishes only at \(x=1\), and diverges at boundary regimes \(x\to 0^+\) and \(x\to\infty\), excluding ``nothingness'' configurations. Using \(J\) as input, we introduce a discrete ledger as a minimal lossless encoding of recognition events on directed graphs. Under deterministic update semantics and minimality (no intra-tick ordering metadata), we derive atomic ticks (at most one event per tick). Explicit structural assumptions (conservation, no sources/sinks, pairwise locality, quantization in \(\delta\mathbb{Z}\)) force balanced double-entry postings and discrete ledger units. To obtain scalar potentials on graphs with cycles while retaining single-edge impulses per tick, we impose time-aggregated cycle closure (no-arbitrage/clearing over finite windows). Under this hypothesis, cycle closure is equivalent to path-independence, and the cleared cumulative flow admits a unique scalar potential on each connected component (up to additive constant), via a discrete Poincar\'e lemma. On hypercube graphs \(Q_d\), atomicity imposes a \(2^d\)-tick minimal period, with explicit Gray-code realization at \(d=3\). The framework connects ratio-based divergences, conservative graph flows, and discrete potential theory through a coherence-forced cost structure.
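The unique cost functional stated in the abstract is simple enough to evaluate directly, and its claimed properties (normalization at equilibrium, reciprocity, boundary divergence) can be checked numerically:

```python
def J(x):
    """Reciprocal comparison cost J(x) = (x + 1/x)/2 - 1: zero only at
    x = 1, symmetric under x -> 1/x, divergent as x -> 0+ or x -> inf."""
    return 0.5 * (x + 1.0 / x) - 1.0
```

For example, J(2) == J(1/2), reflecting that a 2:1 and a 1:2 comparison carry the same cost, and the blow-up near x = 0 is what excludes the "nothingness" configurations the abstract mentions.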
|
https://arxiv.org/abs/2601.12194
|
Academic Papers
|
svg
|
c00fb859d044b68ada873b148f6998eff38351194060869175163320160d2d87
|
2026-01-21T00:00:00-05:00
|
Understanding Partial Reachability in the Internet Core
|
arXiv:2601.12196v1 Announce Type: new Abstract: Routing strives to connect all of the Internet, but competing forces pull it apart: political pressure threatens routing fragmentation; architectural changes such as private clouds, carrier-grade NAT, and firewalls make connectivity conditional; and commercial disputes create partial reachability for days or years. This paper suggests *persistent, partial reachability is fundamental to the Internet* and an underexplored problem. We first *derive a conceptual definition of the Internet core* based on connectivity, not authority. We identify *peninsulas*: persistent, partial connectivity; and *islands*: when computers are partitioned from the Internet core. Second, we develop algorithms to observe each across the Internet, and apply them to two existing measurement systems: Trinocular, where 6 locations observe 5M networks frequently, and RIPE Atlas, where 13k locations scan the DNS roots frequently. Cross-validation shows our findings are stable over *three years of data*, and consistent with as few as 3 geographically-distributed observers. We validate peninsulas and islands against CAIDA Ark, showing good recall (0.94) and bounding precision between 0.42 and 0.82. Finally, our work has broad practical impact: we show that *peninsulas are more common than Internet outages*. Factoring out peninsulas and islands as noise can *improve existing measurement systems*; their "noise" is $5\times$ to $9.7\times$ larger than the operational events in RIPE's DNSmon. We show that most peninsula events are routing transients (45%), but most peninsula-time (90%) is due to a few (7%) long-lived events. Our work helps inform Internet policy and governance, with our neutral definition showing no single country or organization can unilaterally control the Internet core.
|
https://arxiv.org/abs/2601.12196
|
Academic Papers
|
svg
|
fd98ea25225ec974ee049e97f528bb4d937a0c198cd9cf4e9f15fec6a94278e3
|
2026-01-21T00:00:00-05:00
|
CTC-DID: CTC-Based Arabic dialect identification for streaming applications
|
arXiv:2601.12199v1 Announce Type: new Abstract: This paper proposes a Dialect Identification (DID) approach inspired by the Connectionist Temporal Classification (CTC) loss function as used in Automatic Speech Recognition (ASR). CTC-DID frames the dialect identification task as a limited-vocabulary ASR system, where dialect tags are treated as a sequence of labels for a given utterance. For training, the repetition of dialect tags in transcriptions is estimated either using a proposed Language-Agnostic Heuristic (LAH) approach or a pre-trained ASR model. The method is evaluated on the low-resource Arabic Dialect Identification (ADI) task, with experimental results demonstrating that an SSL-based CTC-DID model, trained on a limited dataset, outperforms both fine-tuned Whisper and ECAPA-TDNN models. Notably, CTC-DID also surpasses these models in zero-shot evaluation on the Casablanca dataset. The proposed approach is found to be more robust to shorter utterances and is shown to be easily adaptable for streaming, real-time applications, with minimal performance degradation.
|
https://arxiv.org/abs/2601.12199
|
Academic Papers
|
svg
|
ae7bee5e48248879ea3053824da951d0549fca4e017a6899b92796a842e2691b
|
2026-01-21T00:00:00-05:00
|
Computing Maximal Repeating Subsequences in a String
|
arXiv:2601.12200v1 Announce Type: new Abstract: In this paper we initiate the study of computing a maximal (not necessarily maximum) repeating pattern in a single input string; the corresponding problems (e.g., a maximal common subsequence) have previously been studied only for two or more input strings, by Hirota and Sakai starting in 2019. Given an input string $S$ of length $n$, we can compute a maximal square subsequence of $S$ in $O(n\log n)$ time, greatly improving the $O(n^2)$ bound for computing the longest square subsequence of $S$. For a maximal $k$-repeating subsequence, our bound is $O(f(k)n\log n)$, where $f(k)$ is a computable function such that $f(k) < k\cdot 4^k$. This greatly improves the $O(n^{2k-1})$ bound for computing a longest $k$-repeating subsequence of $S$, for $k\geq 3$. Both results hold for the constrained case, i.e., when the solution must contain a subsequence $X$ of $S$, though with higher running times.
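For contrast with the near-linear bounds above, the textbook baseline for the longest square subsequence is worth spelling out: in any square subsequence ww, the first copy of w ends before the second begins, so one can try every split point of the string and take twice the LCS of the two halves, an O(n^3) procedure. This is our illustrative baseline, not the paper's algorithm:

```python
def longest_square_subsequence(s):
    """Length of a longest square subsequence (one of the form ww) by
    trying every split point and doubling the LCS of the halves."""
    def lcs(a, b):
        # classic O(len(a) * len(b)) longest-common-subsequence table
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m):
            for j in range(n):
                dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                    else max(dp[i][j + 1], dp[i + 1][j]))
        return dp[m][n]
    return max((2 * lcs(s[:p], s[p:]) for p in range(1, len(s))), default=0)
```

For instance, "abcabc" yields 6 (the whole string is a square), while "abc" yields 0; the maximal-pattern algorithms in the paper avoid this cubic cost entirely.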
|
https://arxiv.org/abs/2601.12200
|
Academic Papers
|
svg
|
aed7f4d3b2a02261b0cd236350724a2862770749ec457630c76b49eec98336ab
|
2026-01-21T00:00:00-05:00
|
Embryonic Exposure to VPA Influences Chick Vocalisations: A Computational Study
|
arXiv:2601.12203v1 Announce Type: new Abstract: In young animals like poultry chicks (Gallus gallus), vocalisations convey information about affective and behavioural states. Traditional approaches to vocalisation analysis, relying on manual annotation and predefined categories, introduce biases, limit scalability, and fail to capture the full complexity of vocal repertoires. We introduce a computational framework for the automated detection, acoustic feature extraction, and unsupervised learning of chick vocalisations. Applying this framework to a dataset of newly hatched chicks, we identified two primary vocal clusters. We then tested our computational framework on an independent dataset of chicks exposed during embryonic development to vehicle or Valproic Acid (VPA), a compound that disrupts neural development and is linked to autistic-like symptoms. Clustering analysis on the experimental dataset confirmed two primary vocal clusters and revealed systematic differences between groups. VPA-exposed chicks showed an altered repertoire, with a relative increase in softer calls. VPA differentially affected call clusters, modulating temporal, frequency, and energy domain features. Overall, VPA-exposed chicks produced vocalisations with shorter duration, reduced pitch variability, and modified energy profiles, with the strongest alterations observed in louder calls. This study provides a computational framework for analysing animal vocalisations, advancing knowledge of early-life communication in typical and atypical vocal development.
|
https://arxiv.org/abs/2601.12203
|
Academic Papers
|
svg
|
d7d06e9d5af6ebaefbfec42726494e0d562c7b9c95c7e4ac1576a8b9487e0216
|
2026-01-21T00:00:00-05:00
|
Do Neural Codecs Generalize? A Controlled Study Across Unseen Languages and Non-Speech Tasks
|
arXiv:2601.12205v1 Announce Type: new Abstract: This paper investigates three crucial yet underexplored aspects of the generalization capabilities of neural audio codecs (NACs): (i) whether NACs can generalize to unseen languages during pre-training, (ii) whether speech-only pre-trained NACs can effectively generalize to non-speech applications such as environmental sounds, music, and animal vocalizations, and (iii) whether incorporating non-speech data during pre-training can improve performance on both speech and non-speech tasks. Existing studies typically rely on off-the-shelf NACs for comparison, which limits insight due to variations in implementation. In this work, we train NACs from scratch using strictly controlled configurations and carefully curated pre-training data to enable fair comparisons. We conduct a comprehensive evaluation of NAC performance on both signal reconstruction quality and downstream applications using 11 metrics. Our results show that NACs can generalize to unseen languages during pre-training, speech-only pre-trained NACs exhibit degraded performance on non-speech tasks, and incorporating non-speech data during pre-training improves performance on non-speech tasks while maintaining comparable performance on speech tasks.
|
https://arxiv.org/abs/2601.12205
|
Academic Papers
|
svg
|
27147cd5f8ee17265eb4a0ba09fb3ff0fb89ebd69290d4f166480559dc72708d
|
2026-01-21T00:00:00-05:00
|
CoReflect: Conversational Evaluation via Co-Evolutionary Simulation and Reflective Rubric Refinement
|
arXiv:2601.12208v1 Announce Type: new Abstract: Evaluating conversational systems in multi-turn settings remains a fundamental challenge. Conventional pipelines typically rely on manually defined rubrics and fixed conversational context -- a static approach that limits coverage and fails to capture the diverse, emergent behaviors of dialogue models. To address this, we introduce CoReflect (Conversational Evaluation via Co-Evolutionary Simulation and Reflective Rubric Refinement), which unifies dialogue simulation and evaluation into an adaptive, iterative process. CoReflect employs a conversation planner that generates structured templates to guide a user simulator through diverse, goal-directed dialogues. Subsequently, a reflective analyzer processes these dialogues to identify systematic behavioral patterns and automatically refine the evaluation rubrics. Crucially, the insights from the conversation analysis are fed back into the planner to update conversation templates for subsequent iterations. This co-evolution loop ensures that the complexity of test cases and the diagnostic precision of rubrics improve in tandem. By minimizing human intervention, CoReflect provides a scalable and self-refining methodology that allows evaluation protocols to adapt alongside the rapidly advancing capabilities of dialogue models.
|
https://arxiv.org/abs/2601.12208
|
Academic Papers
|
svg
|
c0001da266176612ff4d195e6c6750be7c45113d7e115e797829e2991955a558
|
2026-01-21T00:00:00-05:00
|
DaggerFFT: A Distributed FFT Framework Using Task Scheduling in Julia
|
arXiv:2601.12209v1 Announce Type: new Abstract: The Fast Fourier Transform (FFT) is a fundamental numerical technique with widespread application in a range of scientific problems. As scientific simulations attempt to exploit exascale systems, there has been a growing demand for distributed FFT algorithms that can effectively utilize modern heterogeneous high-performance computing (HPC) systems. Conventional FFT algorithms commonly encounter performance bottlenecks, especially when run on heterogeneous platforms. Most distributed FFT approaches rely on static task distribution and require synchronization barriers, limiting scalability and impacting overall resource utilization. In this paper, we present DaggerFFT, a distributed FFT framework developed in Julia that treats highly parallel FFT computations as a dynamically scheduled task graph. FFT operations are expressed as DTasks operating on pencil- or slab-partitioned DArrays; each FFT stage owns its own DArray, and the runtime assigns DTasks across devices using Dagger's dynamic, work-stealing scheduler. We demonstrate how DaggerFFT's dynamic scheduler can outperform state-of-the-art distributed FFT libraries on both CPU and GPU backends, achieving up to a 2.6x speedup on CPU clusters and up to a 1.35x speedup on GPU clusters. We have integrated DaggerFFT into Oceananigans.jl, a geophysical fluid dynamics framework, demonstrating that high-level, task-based runtimes can deliver both superior performance and modularity in large-scale, real-world simulations.
|
https://arxiv.org/abs/2601.12209
|
Academic Papers
|
svg
|
4386ce11ff49a2ef1bfb69a6eabac8a6b1cbe3f5f89a716b6a1030c80ed0536f
|
2026-01-21T00:00:00-05:00
|
Solvability of The Output Corridor Control Problem by Pulse-Modulated Feedback
|
arXiv:2601.12210v1 Announce Type: new Abstract: The problem of maintaining the output of a positive time-invariant single-input single-output system within a predefined corridor of values is treated. For third-order plants possessing a certain structure, it is proven that the problem is always solvable under stationary conditions by means of pulse-modulated feedback. The obtained result is utilized to assess the feasibility of patient-specific pharmacokinetic-pharmacodynamic models with respect to patient safety. A population of Wiener models capturing the dynamics of a neuromuscular blockade agent is studied to investigate whether they can be driven into the desired output corridor by clinically acceptable sequential drug doses (boluses). It is demonstrated that low values of a parameter in the nonlinear pharmacodynamic part underlie the detected model infeasibility.
|
https://arxiv.org/abs/2601.12210
|
Academic Papers
|
svg
|
5d7ca72a5364c9d8d859e14828e912074de8abd6021d430d4b410b5cb0646d5f
|
2026-01-21T00:00:00-05:00
|
Speculative Sampling with Reinforcement Learning
|
arXiv:2601.12212v1 Announce Type: new Abstract: Inference time latency has remained an open challenge for real world applications of large language models (LLMs). State-of-the-art (SOTA) speculative sampling (SpS) methods for LLMs, like EAGLE-3, use tree-based drafting to explore multiple candidate continuations in parallel. However, the hyperparameters controlling the tree structure are static, which limits flexibility and efficiency across diverse contexts and domains. We introduce Reinforcement learning for Speculative Sampling (Re-SpS), the first reinforcement learning (RL)-based framework for draft tree hyperparameter optimization. Re-SpS dynamically adjusts draft tree hyperparameters in real-time, learning context-aware policies that maximize generation speed by balancing speculative aggression with computational overhead. It leverages efficient state representations from target model hidden states and introduces multi-step action persistence for better context modeling. Evaluation across five diverse benchmarks demonstrates consistent improvements over the SOTA method EAGLE-3, achieving up to 5.45$\times$ speedup over the backbone LLM and up to 1.12$\times$ speedup compared to EAGLE-3, with no loss in output fidelity.
|
https://arxiv.org/abs/2601.12212
|
Academic Papers
|
svg
|
7dc1a78e653bbaa088b209533164eede62187d9d4c9018491355f117ab9b8fba
|
2026-01-21T00:00:00-05:00
|
One-Sided Matrix Completion from Ultra-Sparse Samples
|
arXiv:2601.12213v1 Announce Type: new Abstract: Matrix completion is a classical problem that has received recurring interest across a wide range of fields. In this paper, we revisit this problem in an ultra-sparse sampling regime, where each entry of an unknown, $n\times d$ matrix $M$ (with $n \ge d$) is observed independently with probability $p = C / d$, for a fixed integer $C \ge 2$. This setting is motivated by applications involving large, sparse panel datasets, where the number of rows far exceeds the number of columns. When each row contains only $C$ entries -- fewer than the rank of $M$ -- accurate imputation of $M$ is impossible. Instead, we estimate the row span of $M$ or the averaged second-moment matrix $T = M^{\top} M / n$. The empirical second-moment matrix computed from observed entries exhibits non-random and sparse missingness. We propose an unbiased estimator that normalizes each nonzero entry of the second moment by its observed frequency, followed by gradient descent to impute the missing entries of $T$. The normalization divides a weighted sum of $n$ binomial random variables by the total number of ones. We show that the estimator is unbiased for any $p$ and enjoys low variance. When the row vectors of $M$ are drawn uniformly from a rank-$r$ factor model satisfying an incoherence condition, we prove that if $n \ge O({d r^5 \epsilon^{-2} C^{-2} \log d})$, any local minimum of the gradient-descent objective is approximately global and recovers $T$ with error at most $\epsilon^2$. Experiments on both synthetic and real-world data validate our approach. On three MovieLens datasets, our algorithm reduces bias by $88\%$ relative to baseline estimators. We also empirically validate the linear sampling complexity of $n$ relative to $d$ on synthetic data. On an Amazon reviews dataset with sparsity $10^{-7}$, our method reduces the recovery error of $T$ by $59\%$ and $M$ by $38\%$ compared to baseline methods.
|
https://arxiv.org/abs/2601.12213
|
Academic Papers
|
svg
|
15631046b94c1c7db8135baf146064c6420f031dcdef56315dca5a37f0d67075
|
2026-01-21T00:00:00-05:00
|
Wavelet-Driven Masked Multiscale Reconstruction for PPG Foundation Models
|
arXiv:2601.12215v1 Announce Type: new Abstract: Wearable foundation models have the potential to transform digital health by learning transferable representations from large-scale biosignals collected in everyday settings. While recent progress has been made in large-scale pretraining, most approaches overlook the spectral structure of photoplethysmography (PPG) signals, wherein physiological rhythms unfold across multiple frequency bands. Motivated by the insight that many downstream health-related tasks depend on multi-resolution features spanning fine-grained waveform morphology to global rhythmic dynamics, we introduce Masked Multiscale Reconstruction (MMR) for PPG representation learning - a self-supervised pretraining framework that explicitly learns from hierarchical time-frequency scales of PPG data. The pretraining task is designed to reconstruct randomly masked out coefficients obtained from a wavelet-based multiresolution decomposition of PPG signals, forcing the transformer encoder to integrate information across temporal and spectral scales. We pretrain our model with MMR using ~17 million unlabeled 10-second PPG segments from ~32,000 smartwatch users. On 17 of 19 diverse health-related tasks, MMR trained on large-scale wearable PPG data improves over or matches state-of-the-art open-source PPG foundation models, time-series foundation models, and other self-supervised baselines. Extensive analysis of our learned embeddings and systematic ablations underscores the value of wavelet-based representations, showing that they capture robust and physiologically-grounded features. Together, these results highlight the potential of MMR as a step toward generalizable PPG foundation models.
|
https://arxiv.org/abs/2601.12215
|
Academic Papers
|
svg
|
6f7af8d38198f027e7551df327f339bfe3afda8652038d9814424e0fe9a00e2b
|
2026-01-21T00:00:00-05:00
|
Canonicalization of Batched Einstein Summations for Tuning Retrieval
|
arXiv:2601.12220v1 Announce Type: new Abstract: We present an algorithm for normalizing \emph{Batched Einstein Summation} expressions by mapping mathematically equivalent formulations to a unique normal form. Batches of einsums with the same Einstein notation that exhibit substantial data reuse appear frequently in finite element methods (FEM), numerical linear algebra, and computational chemistry. To effectively exploit this temporal locality for high performance, we consider groups of einsums in batched form. Representations of equivalent batched einsums may differ due to index renaming, permutations within the batch, and the commutativity and associativity of the multiplication operation. The lack of a canonical representation hinders the reuse of optimization and tuning knowledge in software systems. To this end, we develop a novel encoding of batched einsums as colored graphs and apply graph canonicalization to derive a normal form. In addition to the canonicalization algorithm, we propose a representation of einsums using functional array operands and provide a strategy to transfer transformations operating on the normal form to \emph{functional batched einsums} that exhibit the same normal form; this is crucial for fusing surrounding computations for memory-bound einsums. We evaluate our approach against JAX, and observe a geomean speedup of $4.7\times$ for einsums from the TCCG benchmark suite and an FEM solver.
|
https://arxiv.org/abs/2601.12220
|
Academic Papers
|
svg
|
a7a9b68a171826a8c00265d2c05a11a36cdd9b2a242c717d1bf5d8085ccfb215
|
2026-01-21T00:00:00-05:00
|
Song Aesthetics Evaluation with Multi-Stem Attention and Hierarchical Uncertainty Modeling
|
arXiv:2601.12222v1 Announce Type: new Abstract: Music generative artificial intelligence (AI) is rapidly expanding music content, necessitating automated song aesthetics evaluation. However, existing studies largely focus on speech, audio, or singing quality, leaving song aesthetics underexplored. Moreover, conventional approaches often predict a precise Mean Opinion Score (MOS) value directly, which struggles to capture the nuances of human perception in song aesthetics evaluation. This paper proposes a song-oriented aesthetics evaluation framework, featuring two novel modules: 1) Multi-Stem Attention Fusion (MSAF) builds bidirectional cross-attention between mixture-vocal and mixture-accompaniment pairs, fusing them to capture complex musical features; 2) Hierarchical Granularity-Aware Interval Aggregation (HiGIA) learns multi-granularity score probability distributions, aggregates them into a score interval, and applies a regression within the interval to produce the final score. We evaluate our method on two datasets of full-length songs: the SongEval dataset (AI-generated) and an internal aesthetics dataset (human-created), and compare it with two state-of-the-art (SOTA) models. Results show that the proposed method achieves stronger performance for multi-dimensional song aesthetics evaluation.
|
https://arxiv.org/abs/2601.12222
|
Academic Papers
|
svg
|
7d9819fe436e08243ba3f0769422402a9e7487e9e5f2d9c10a634f5140e5f233
|
2026-01-21T00:00:00-05:00
|
Where It Moves, It Matters: Referring Surgical Instrument Segmentation via Motion
|
arXiv:2601.12224v1 Announce Type: new Abstract: Enabling intuitive, language-driven interaction with surgical scenes is a critical step toward intelligent operating rooms and autonomous surgical robotic assistance. However, the task of referring segmentation, localizing surgical instruments based on natural language descriptions, remains underexplored in surgical videos, with existing approaches struggling to generalize due to reliance on static visual cues and predefined instrument names. In this work, we introduce SurgRef, a novel motion-guided framework that grounds free-form language expressions in instrument motion, capturing how tools move and interact across time, rather than what they look like. This allows models to understand and segment instruments even under occlusion, ambiguity, or unfamiliar terminology. To train and evaluate SurgRef, we present Ref-IMotion, a diverse, multi-institutional video dataset with dense spatiotemporal masks and rich motion-centric expressions. SurgRef achieves state-of-the-art accuracy and generalization across surgical procedures, setting a new benchmark for robust, language-driven surgical video segmentation.
|
https://arxiv.org/abs/2601.12224
|
Academic Papers
|
svg
|
c28f308373acc9d9056cf720b64d0f8c9e164f6efac1a07db0e2ccc2b737229d
|
2026-01-21T00:00:00-05:00
|
Learning Longitudinal Health Representations from EHR and Wearable Data
|
arXiv:2601.12227v1 Announce Type: new Abstract: Foundation models trained on electronic health records show strong performance on many clinical prediction tasks but are limited by sparse and irregular documentation. Wearable devices provide dense continuous physiological signals but lack semantic grounding. Existing methods usually model these data sources separately or combine them through late fusion. We propose a multimodal foundation model that jointly represents electronic health records and wearable data as a continuous-time latent process. The model uses modality-specific encoders and a shared temporal backbone pretrained with self-supervised and cross-modal objectives. This design produces representations that are temporally coherent and clinically grounded. Across forecasting, physiological, and risk-modeling tasks, the model outperforms strong electronic health record-only and wearable-only baselines, especially at long horizons and under missing data. These results show that joint electronic health record and wearable pretraining yields more faithful representations of longitudinal health.
|
https://arxiv.org/abs/2601.12227
|
Academic Papers
|
svg
|
fe1f0d612a7146e43bbd7cae68174965dcaffc1b08f27644e7376fc3d2ff12c1
|
2026-01-21T00:00:00-05:00
|
Classical-Quantum Channel Resolvability Using Matrix Multiplicative Weight Update Algorithm
|
arXiv:2601.12230v1 Announce Type: new Abstract: We study classical-quantum (C-Q) channel resolvability. In the literature, C-Q channel resolvability has been proved only via random coding. In our previous study, we proved channel resolvability by deterministic coding using the multiplicative weight update algorithm. We extend this approach to C-Q channels and prove C-Q channel resolvability by deterministic coding using the matrix multiplicative weight update algorithm. This is the first approach to C-Q channel resolvability using deterministic coding.
|
https://arxiv.org/abs/2601.12230
|
Academic Papers
|
svg
|
62c60e42cd0a193723a9ad6871152672fcbedbfeca1cf7ebbeedaba294095c8d
|
2026-01-21T00:00:00-05:00
|
Wavelet-Aware Anomaly Detection in Multi-Channel User Logs via Deviation Modulation and Resolution-Adaptive Attention
|
arXiv:2601.12231v1 Announce Type: new Abstract: Insider threat detection is a key challenge in enterprise security, relying on user activity logs that capture rich and complex behavioral patterns. These logs are often multi-channel and non-stationary, and anomalies are rare, making anomaly detection challenging. To address these issues, we propose a novel framework that integrates wavelet-aware modulation, multi-resolution wavelet decomposition, and resolution-adaptive attention for robust anomaly detection. Our approach first applies a deviation-aware modulation scheme to suppress routine behaviors while amplifying anomalous deviations. Next, a discrete wavelet transform (DWT) decomposes the log signals into multi-resolution representations, capturing both long-term trends and short-term anomalies. Finally, a learnable attention mechanism dynamically reweights the most discriminative frequency bands for detection. On the CERT r4.2 benchmark, our approach consistently outperforms existing baselines in precision, recall, and F1 score across various time granularities and scenarios.
|
https://arxiv.org/abs/2601.12231
|
Academic Papers
|
svg
|
a0ebfff3ccb151606d7326a9b56b21e377d2fddc8f753e553b19d044d399c1d8
|
2026-01-21T00:00:00-05:00
|
DiffusionQC: Artifact Detection in Histopathology via Diffusion Model
|
arXiv:2601.12233v1 Announce Type: new Abstract: Digital pathology plays a vital role across modern medicine, offering critical insights for disease diagnosis, prognosis, and treatment. However, histopathology images often contain artifacts introduced during slide preparation and digitization. Detecting and excluding them is essential to ensure reliable downstream analysis. Traditional supervised models typically require large annotated datasets, which is resource-intensive and does not generalize to novel artifact types. To address this, we propose DiffusionQC, which detects artifacts as outliers among clean images using a diffusion model. It requires only a set of clean images for training rather than pixel-level artifact annotations and predefined artifact types. Furthermore, we introduce a contrastive learning module to explicitly enlarge the distribution separation between artifact and clean images, yielding an enhanced version of our method. Empirical results demonstrate performance superior to the state of the art and cross-stain generalization capacity, with significantly less data and annotation.
|
https://arxiv.org/abs/2601.12233
|
Academic Papers
|
svg
|
be3232b9708b15004b6f4067594bd50b6cfcd10efdd24e624ca5baa0ca5c6869
|
2026-01-21T00:00:00-05:00
|
Proc3D: Procedural 3D Generation and Parametric Editing of 3D Shapes with Large Language Models
|
arXiv:2601.12234v1 Announce Type: new Abstract: Generating 3D models has traditionally been a complex task requiring specialized expertise. While recent advances in generative AI have sought to automate this process, existing methods produce non-editable representations, such as meshes or point clouds, limiting their adaptability for iterative design. In this paper, we introduce Proc3D, a system designed to generate editable 3D models while enabling real-time modifications. At its core, Proc3D introduces the procedural compact graph (PCG), a graph representation of 3D models that encodes the algorithmic rules and structures necessary for generating the model. This representation exposes key parameters, allowing intuitive manual adjustments via sliders and checkboxes, as well as real-time, automated modifications through natural language prompts using Large Language Models (LLMs). We demonstrate Proc3D's capabilities using two generative approaches: GPT-4o with in-context learning (ICL) and a fine-tuned LLAMA-3 model. Experimental results show that Proc3D outperforms existing methods in editing efficiency, achieving more than 400x speedup over conventional approaches that require full regeneration for each modification. Additionally, Proc3D improves ULIP scores by 28%, a metric that evaluates the alignment between generated 3D models and text prompts. By enabling text-aligned 3D model generation along with precise, real-time parametric edits, Proc3D facilitates highly accurate text-based image editing applications.
|
https://arxiv.org/abs/2601.12234
|
Academic Papers
|
svg
|
9fc2b5b634f0d9e14496ed8284f612fd571dffc52b8afd515e7e0ea5f7b42769
|
2026-01-21T00:00:00-05:00
|
Analyzing the Impact of EV Battery Charging on the Distribution Network
|
arXiv:2601.12236v1 Announce Type: new Abstract: Many countries are rapidly adopting electric vehicles (EVs) due to their low running costs and environment-friendly nature. EVs are likely to displace internal combustion (IC) engine cars entirely over the next few years. With the rise in popularity of EVs, adverse effects of EV charging loads on the grid system have been observed. Since the distribution system (DS) is not designed to cope with high overloading, the negative impact of EV charging load on the distribution network (DN) cannot be neglected. A high level of EV penetration with uncoordinated charging is the primary cause of voltage instability, increased peak load demand, and reliability issues in the DN. In this paper, a detailed overview of the notable impacts of EV charging on voltage profile, power quality, and DS performance is presented. This work also reviews the different topologies of EV chargers and the issues introduced by power converters on the utility grid. Finally, the strategies proposed in the literature for improving EV charging, which account for the random nature of EV charging, the management of peak loads, and bidirectional power flow, are summarized.
|
https://arxiv.org/abs/2601.12236
|
Academic Papers
|
svg
|
db662957f60180dee6c9c7156aa8e6063058246dbdd14245e0b0a518da4c3c8f
|
2026-01-21T00:00:00-05:00
|
Power Aware Dynamic Reallocation For Inference
|
arXiv:2601.12241v1 Announce Type: new Abstract: Disaggregation has emerged as a powerful strategy for optimizing large language model (LLM) inference by separating compute-intensive prefill and memory-bound decode phases across specialized GPUs. This separation improves utilization and throughput under fixed hardware capacity. However, as model and cluster scales grow, power, rather than compute, has become the dominant limiter of overall performance and cost efficiency. In this paper, we propose RAPID, a power-aware disaggregated inference framework that jointly manages GPU roles and power budgets to sustain goodput within strict power caps. RAPID utilizes static and dynamic power reallocation in addition to GPU reallocation to improve performance under fixed power bounds. RAPID improves overall performance and application consistency beyond what is achievable in current disaggregation solutions, resulting in up to a 2x improvement in SLO attainment at peak load when compared to a static assignment without an increase in complexity or cost.
|
https://arxiv.org/abs/2601.12241
|
Academic Papers
|
svg
|
a6d6e5f3313588b16288506a16f714bf63d680d8498522fdadf6ab60f3f58d22
|
2026-01-21T00:00:00-05:00
|
Optimal Power Allocation and Sub-Optimal Channel Assignment for Downlink NOMA Systems Using Deep Reinforcement Learning
|
arXiv:2601.12242v1 Announce Type: new Abstract: In recent years, the Non-Orthogonal Multiple Access (NOMA) system has emerged as a promising candidate among multiple access frameworks, and the evolution of deep machine learning has motivated active efforts to incorporate it into NOMA systems. The main motivation for such studies is the growing need to optimize the utilization of network resources, as the expansion of the Internet of Things (IoT) has caused a scarcity of network resources. NOMA addresses this need through power multiplexing, allowing multiple users to access the network simultaneously. Nevertheless, the NOMA system has a few limitations. Several works have been proposed to mitigate these, including the optimization of power allocation known as the joint resource allocation (JRA) method, and the integration of JRA with deep reinforcement learning (JRA-DRL). Despite this, the channel assignment problem remains unclear and requires further investigation. In this paper, we propose a deep reinforcement learning framework incorporating replay memory with an on-policy algorithm to allocate network resources in a NOMA system and generalize the learning. We also provide extensive simulations to evaluate the effects of varying the learning rate, batch size, type of model, and the number of features in the state.
|
https://arxiv.org/abs/2601.12242
|
Academic Papers
|
svg
|
a0ec56b5752c7ab1fb9e48192c4666bb9d1a11f778fef10ac7f756f64836f069
|
2026-01-21T00:00:00-05:00
|
Less is More: Label-Guided Summarization of Procedural and Instructional Videos
|
arXiv:2601.12243v1 Announce Type: new Abstract: Video summarization helps turn long videos into clear, concise representations that are easier to review, document, and analyze, especially in high-stakes domains like surgical training. Prior work has progressed from using basic visual features like color, motion, and structural changes to using pre-trained vision-language models that can better understand video semantics and capture temporal flow, resulting in more context-aware video summarization. We propose a three-stage framework, PRISM: Procedural Representation via Integrated Semantic and Multimodal analysis, that produces semantically grounded video summaries. PRISM combines adaptive visual sampling, label-driven keyframe anchoring, and contextual validation using a large language model (LLM). Our method ensures that selected frames reflect meaningful procedural transitions while filtering out generic or hallucinated content, resulting in contextually coherent summaries across both domain-specific and instructional videos. We evaluate our method on instructional and activity datasets, using reference summaries for instructional videos. Despite sampling fewer than 5% of the original frames, our summaries retain 84% semantic content while improving over baselines by as much as 33%. Our approach generalizes across procedural and domain-specific video tasks, achieving strong performance with both semantic alignment and precision.
|
https://arxiv.org/abs/2601.12243
|
Academic Papers
|
svg
|
985c525197ca8f89319b4acd8e1873d0b4953f44652db283b2c7a18c605ba34d
|
2026-01-21T00:00:00-05:00
|
A Comprehensive Review of Bio-Inspired Approaches to Coordination, Communication, and System Architecture in Underwater Swarm Robotics
|
arXiv:2601.12244v1 Announce Type: new Abstract: The increasing complexity of marine operations has intensified the need for intelligent robotic systems to support ocean observation, exploration, and resource management. Underwater swarm robotics offers a promising framework that extends the capabilities of individual autonomous platforms through collective coordination. Inspired by natural systems, such as fish schools and insect colonies, bio-inspired swarm approaches enable distributed decision-making, adaptability, and resilience under challenging marine conditions. Yet research in this field remains fragmented, with limited integration across algorithmic, communication, and hardware design perspectives. This review synthesises bio-inspired coordination mechanisms, communication strategies, and system design considerations for underwater swarm robotics. It examines key marine-specific algorithms, including the Artificial Fish Swarm Algorithm, Whale Optimisation Algorithm, Coral Reef Optimisation, and Marine Predators Algorithm, highlighting their applications in formation control, task allocation, and environmental interaction. The review also analyses communication constraints unique to the underwater domain and emerging acoustic, optical, and hybrid solutions that support cooperative operation. Additionally, it examines hardware and system design advances that enhance system efficiency and scalability. A multi-dimensional classification framework evaluates existing approaches across communication dependency, environmental adaptability, energy efficiency, and swarm scalability. Through this integrated analysis, the review unifies bio-inspired coordination algorithms, communication modalities, and system design approaches. It also identifies converging trends, key challenges, and future research directions for real-world deployment of underwater swarm systems.
|
https://arxiv.org/abs/2601.12244
|
Academic Papers
|
svg
|
f4a8ef5d86aa941e6709cb74a723c971f7df64ad32edaa650c30ba907b822fb7
|
2026-01-21T00:00:00-05:00
|
Sound2Hap: Learning Audio-to-Vibrotactile Haptic Generation from Human Ratings
|
arXiv:2601.12245v1 Announce Type: new Abstract: Environmental sounds like footsteps, keyboard typing, or dog barking carry rich information and emotional context, making them valuable for designing haptics in user applications. Existing audio-to-vibration methods, however, rely on signal-processing rules tuned for music or games and often fail to generalize across diverse sounds. To address this, we first investigated user perception of four existing audio-to-haptic algorithms, then created a data-driven model for environmental sounds. In Study 1, 34 participants rated vibrations generated by the four algorithms for 1,000 sounds, revealing no consistent algorithm preferences. Using this dataset, we trained Sound2Hap, a CNN-based autoencoder, to generate perceptually meaningful vibrations from diverse sounds with low latency. In Study 2, 15 participants rated its output higher than signal-processing baselines on both audio-vibration match and Haptic Experience Index (HXI), finding it more harmonious with diverse sounds. This work demonstrates a perceptually validated approach to audio-haptic translation, broadening the reach of sound-driven haptics.
|
https://arxiv.org/abs/2601.12245
|
Academic Papers
|
svg
|
3e0892457d580be9b50caf20fa4aa3fc4402ff5aca0a29c35f87c19654f0fda8
|
2026-01-21T00:00:00-05:00
|
Explicit symmetric low-regularity integrators for the semilinear Klein-Gordon equation
|
arXiv:2601.12246v1 Announce Type: new Abstract: This paper is concerned with the design and analysis of symmetric low-regularity integrators for the semilinear Klein-Gordon equation. We first propose a general symmetrization procedure that allows for the systematic construction of symmetric schemes from existing explicit (non-symmetric) integrators. Applying this procedure, we derive two novel schemes. Error analyses show that both integrators achieve their optimal convergence orders in the energy space under significantly relaxed regularity assumptions. Furthermore, the symmetry property ensures that the convergence order of a first-order symmetric scheme improves as the regularity of the exact solution increases. A numerical experiment demonstrates that the proposed second-order symmetric scheme nearly preserves the system energy over extended periods.
|
https://arxiv.org/abs/2601.12246
|
Academic Papers
|
svg
|
3d3601de15446cc5dc93175b074d08a2c5e248295892cb98bf1aea3f7551d7d3
|
2026-01-21T00:00:00-05:00
|
Plan, Verify and Fill: A Structured Parallel Decoding Approach for Diffusion Language Models
|
arXiv:2601.12247v1 Announce Type: new Abstract: Diffusion Language Models (DLMs) present a promising non-sequential paradigm for text generation, distinct from standard autoregressive (AR) approaches. However, current decoding strategies often adopt a reactive stance, underutilizing the bidirectional context to dictate global trajectories. To address this, we propose Plan-Verify-Fill (PVF), a training-free paradigm that grounds planning via quantitative validation. PVF actively constructs a hierarchical skeleton by prioritizing high-leverage semantic anchors and employs a verification protocol to operationalize pragmatic structural stopping where further deliberation yields diminishing returns. Extensive evaluations on LLaDA-8B-Instruct and Dream-7B-Instruct demonstrate that PVF reduces the Number of Function Evaluations (NFE) by up to 65% compared to confidence-based parallel decoding across benchmark datasets, unlocking superior efficiency without compromising accuracy.
|
https://arxiv.org/abs/2601.12247
|
Academic Papers
|
svg
|
0d9b1bc72260fac4673119491dba9ec93d810327648b05f15ebc6bb297d8a992
|
2026-01-21T00:00:00-05:00
|
An Innovative Framework for Breast Cancer Detection Using Pyramid Adaptive Atrous Convolution, Transformer Integration, and Multi-Scale Feature Fusion
|
arXiv:2601.12249v1 Announce Type: new Abstract: Breast cancer is one of the most common cancers among women worldwide, and its accurate and timely diagnosis plays a critical role in improving treatment outcomes. This thesis presents an innovative framework for detecting malignant masses in mammographic images by integrating the Pyramid Adaptive Atrous Convolution (PAAC) and Transformer architectures. The proposed approach utilizes Multi-Scale Feature Fusion to enhance the extraction of features from benign and malignant tissues and combines Dice Loss and Focal Loss functions to improve the model's learning process, effectively reducing errors in binary breast cancer classification and achieving high accuracy and efficiency. In this study, a comprehensive dataset of breast cancer images from INbreast, MIAS, and DDSM was preprocessed through data augmentation and contrast enhancement and resized to 227x227 pixels for model training. Leveraging the Transformer's ability to manage long-range dependencies with Self-Attention mechanisms, the proposed model achieved high accuracy in detecting cancerous masses, outperforming foundational models such as BreastNet, DeepMammo, Multi-Scale CNN, Swin-Unet, and SegFormer. The final evaluation results for the proposed model include an accuracy of 98.5%, sensitivity of 97.8%, specificity of 96.3%, F1-score of 98.2%, and overall precision of 97.9%. These metrics demonstrate a significant improvement over traditional methods and confirm the model's effectiveness in identifying cancerous masses in complex scenarios and large datasets. This model shows potential as a reliable and efficient tool for breast cancer diagnosis and can be effectively integrated into medical diagnostic systems.
|
https://arxiv.org/abs/2601.12249
|
Academic Papers
|
svg
|
5ec53cb64091923f4beeed069b086e7926910d2121fe10b0d98109068ee2821a
|
2026-01-21T00:00:00-05:00
|
Breaking Coordinate Overfitting: Geometry-Aware WiFi Sensing for Cross-Layout 3D Pose Estimation
|
arXiv:2601.12252v1 Announce Type: new Abstract: WiFi-based 3D human pose estimation offers a low-cost and privacy-preserving alternative to vision-based systems for smart interaction. However, existing approaches rely on visual 3D poses as supervision and directly regress CSI to a camera-based coordinate system. We find that this practice leads to coordinate overfitting: models memorize deployment-specific WiFi transceiver layouts rather than only learning activity-relevant representations, resulting in severe generalization failures. To address this challenge, we present PerceptAlign, the first geometry-conditioned framework for WiFi-based cross-layout pose estimation. PerceptAlign introduces a lightweight coordinate unification procedure that aligns WiFi and vision measurements in a shared 3D space using only two checkerboards and a few photos. Within this unified space, it encodes calibrated transceiver positions into high-dimensional embeddings and fuses them with CSI features, making the model explicitly aware of device geometry as a conditional variable. This design forces the network to disentangle human motion from deployment layouts, enabling robust and, for the first time, layout-invariant WiFi pose estimation. To support systematic evaluation, we construct the largest cross-domain 3D WiFi pose estimation dataset to date, comprising 21 subjects, 5 scenes, 18 actions, and 7 device layouts. Experiments show that PerceptAlign reduces in-domain error by 12.3% and cross-domain error by more than 60% compared to state-of-the-art baselines. These results establish geometry-conditioned learning as a viable path toward scalable and practical WiFi sensing.
|
https://arxiv.org/abs/2601.12252
|
Academic Papers
|
svg
|
5c27168477f26e2a9f58a158884dfc0dc4443c2be8405d45d15147153d054f60
|
2026-01-21T00:00:00-05:00
|
Federated Joint Learning for Domain and Class Generalization
|
arXiv:2601.12253v1 Announce Type: new Abstract: Efficient fine-tuning of visual-language models like CLIP has become crucial due to their large-scale parameter size and extensive pretraining requirements. Existing methods typically address either the issue of unseen classes or unseen domains in isolation, without considering a joint framework for both. In this paper, we propose Federated Joint Learning for Domain and Class Generalization, termed FedDCG, a novel approach that addresses both class and domain generalization in federated learning settings. Our method introduces a domain grouping strategy where class-generalized networks are trained within each group to prevent decision boundary confusion. During inference, we aggregate class-generalized results based on domain similarity, effectively integrating knowledge from both class and domain generalization. Specifically, a learnable network is employed to enhance class generalization capabilities, and a decoupling mechanism separates general and domain-specific knowledge, improving generalization to unseen domains. Extensive experiments across various datasets show that FedDCG outperforms state-of-the-art baselines in terms of accuracy and robustness.
|
https://arxiv.org/abs/2601.12253
|
Academic Papers
|
svg
|
1967c8f7c4dea3f89bbb9daf122343fa3bb50318d2bd975fa02c7ad54b8866c3
|
2026-01-21T00:00:00-05:00
|
Confidence-based Filtering for Speech Dataset Curation with Generative Speech Enhancement Using Discrete Tokens
|
arXiv:2601.12254v1 Announce Type: new Abstract: Generative speech enhancement (GSE) models show great promise in producing high-quality clean speech from noisy inputs, enabling applications such as curating noisy text-to-speech (TTS) datasets into high-quality ones. However, GSE models are prone to hallucination errors, such as phoneme omissions and speaker inconsistency, which conventional error filtering based on non-intrusive speech quality metrics often fails to detect. To address this issue, we propose a non-intrusive method for filtering hallucination errors from discrete token-based GSE models. Our method leverages the log-probabilities of generated tokens as confidence scores to detect potential errors. Experimental results show that the confidence scores strongly correlate with a suite of intrusive SE metrics, and that our method effectively identifies hallucination errors missed by conventional filtering methods. Furthermore, we demonstrate the practical utility of our method: curating an in-the-wild TTS dataset with our confidence-based filtering improves the performance of subsequently trained TTS models.
|
https://arxiv.org/abs/2601.12254
|
Academic Papers
|
svg
|
32458ac5daf09645d1104fcc68582d8e6c9eaefcea23089cedcf57ae3e8711a6
|
2026-01-21T00:00:00-05:00
|
Improving Large Molecular Language Model via Relation-aware Multimodal Collaboration
|
arXiv:2601.12256v1 Announce Type: new Abstract: Large language models (LLMs) have demonstrated their instruction-following capabilities and achieved powerful performance on various tasks. Inspired by their success, recent works in the molecular domain have led to the development of large molecular language models (LMLMs) that integrate 1D molecular strings or 2D molecular graphs into the language models. However, existing LMLMs often suffer from hallucination and limited robustness, largely due to inadequate integration of diverse molecular modalities such as 1D sequences, 2D molecular graphs, and 3D conformations. To address these limitations, we propose CoLLaMo, a large language model-based molecular assistant equipped with a multi-level molecular modality-collaborative projector. The relation-aware modality-collaborative attention mechanism in the projector facilitates fine-grained and relation-guided information exchange between atoms by incorporating 2D structural and 3D spatial relations. Furthermore, we present a new molecule-centric automatic evaluation suite, comprising a hallucination assessment metric and GPT-based caption quality evaluation, to address the limitations of token-based generic evaluation metrics (e.g., BLEU) widely used in assessing molecular comprehension of LMLMs. Our extensive experiments demonstrate that CoLLaMo enhances the molecular modality generalization capabilities of LMLMs, achieving the best performance on multiple tasks, including molecule captioning, computed property QA, descriptive property QA, motif counting, and IUPAC name prediction.
|
https://arxiv.org/abs/2601.12256
|
Academic Papers
|
svg
|
a1f4ee0a444acf11ccf0f1100b019f534a3b8515713f28f4d8b9c98e0869f7fc
|
2026-01-21T00:00:00-05:00
|
Soft Shadow Diffusion (SSD): Physics-inspired Learning for 3D Computational Periscopy
|
arXiv:2601.12257v1 Announce Type: new Abstract: Conventional imaging requires a line of sight to create accurate visual representations of a scene. In certain circumstances, however, obtaining a suitable line of sight may be impractical, dangerous, or even impossible. Non-line-of-sight (NLOS) imaging addresses this challenge by reconstructing the scene from indirect measurements. Recently, passive NLOS methods that use an ordinary photograph of the subtle shadow cast onto a visible wall by the hidden scene have gained interest. These methods are currently limited to 1D or low-resolution 2D color imaging or to localizing a hidden object whose shape is approximately known. Here, we generalize this class of methods and demonstrate a 3D reconstruction of a hidden scene from an ordinary NLOS photograph. To achieve this, we propose a novel reformulation of the light transport model that conveniently decomposes the hidden scene into light-occluding and non-light-occluding components to yield a separable non-linear least squares (SNLLS) inverse problem. We develop two solutions: a gradient-based optimization method and a physics-inspired neural network approach, which we call Soft Shadow Diffusion (SSD). Despite the challenging ill-conditioned inverse problem encountered here, our approaches are effective on numerous 3D scenes in real experimental scenarios. Moreover, SSD is trained in simulation but generalizes well to unseen classes in simulation and real-world NLOS scenes. SSD also shows surprising robustness to noise and ambient illumination.
|
https://arxiv.org/abs/2601.12257
|
Academic Papers
|
svg
|
90dc66a37c4f244240960d1ed395145dc8ea5b0761015b262fb1dcabd6434aa4
|
2026-01-21T00:00:00-05:00
|
FutureX-Pro: Extending Future Prediction to High-Value Vertical Domains
|
arXiv:2601.12259v1 Announce Type: new Abstract: Building upon FutureX, which established a live benchmark for general-purpose future prediction, this report introduces FutureX-Pro, including FutureX-Finance, FutureX-Retail, FutureX-PublicHealth, FutureX-NaturalDisaster, and FutureX-Search. These together form a specialized framework extending agentic future prediction to high-value vertical domains. While generalist agents demonstrate proficiency in open-domain search, their reliability in capital-intensive and safety-critical sectors remains under-explored. FutureX-Pro targets four economically and socially pivotal verticals: Finance, Retail, Public Health, and Natural Disaster. We benchmark agentic Large Language Models (LLMs) on entry-level yet foundational prediction tasks -- ranging from forecasting market indicators and supply chain demands to tracking epidemic trends and natural disasters. By adapting the contamination-free, live-evaluation pipeline of FutureX, we assess whether current State-of-the-Art (SOTA) agentic LLMs possess the domain grounding necessary for industrial deployment. Our findings reveal the performance gap between generalist reasoning and the precision required for high-value vertical applications.
|
https://arxiv.org/abs/2601.12259
|
Academic Papers
|
svg
|
6815f3ef2292448ba5caaa8747eba860d95f83db3a7296b4186f2c5491f44022
|
2026-01-21T00:00:00-05:00
|
Docs2Synth: A Synthetic Data Trained Retriever Framework for Scanned Visually Rich Documents Understanding
|
arXiv:2601.12260v1 Announce Type: new Abstract: Visually rich document understanding (VRDU) in regulated domains is particularly challenging, since scanned documents often contain sensitive, evolving, and domain-specific knowledge. This leads to two major challenges: the lack of manual annotations for model adaptation and the difficulty for pretrained models to stay up-to-date with domain-specific facts. While Multimodal Large Language Models (MLLMs) show strong zero-shot abilities, they still suffer from hallucination and limited domain grounding. In contrast, discriminative Vision-Language Pre-trained Models (VLPMs) provide reliable grounding but require costly annotations to cover new domains. We introduce Docs2Synth, a synthetic-supervision framework that enables retrieval-guided inference for private and low-resource domains. Docs2Synth automatically processes raw document collections, generates and verifies diverse QA pairs via an agent-based system, and trains a lightweight visual retriever to extract domain-relevant evidence. During inference, the retriever collaborates with an MLLM through an iterative retrieval-generation loop, reducing hallucination and improving response consistency. We further deliver Docs2Synth as an easy-to-use Python package, enabling plug-and-play deployment across diverse real-world scenarios. Experiments on multiple VRDU benchmarks show that Docs2Synth substantially enhances grounding and domain generalization without requiring human annotations.
|
https://arxiv.org/abs/2601.12260
|
Academic Papers
|
svg
|
05a80d107cd7f3e431f501c58770e45efafd57f043cb82ea55cbc8b01d236a60
|
2026-01-21T00:00:00-05:00
|
Environment-Aware Code Generation: How far are We?
|
arXiv:2601.12262v1 Announce Type: new Abstract: Recent progress in large language models (LLMs) has improved code generation, but most evaluations still test isolated, small-scale code (e.g., a single function) under default or unspecified software environments. As a result, it is unclear whether LLMs can reliably generate executable code tailored to a user's specific environment. We present the first systematic study of Environment-Aware Code Generation (EACG), where generated code must be functionally correct and directly executable under arbitrary software configurations. To enable realistic evaluation, we introduce VersiBCB, a benchmark that is multi-package, execution-verified, and deprecation-aware, capturing complex and evolving environments that prior datasets often overlook. Using VersiBCB, we investigate three complementary adaptation axes: data, parameters, and cache, and develop representative strategies for each. Our results show that current LLMs struggle with environment-specific code generation, while our adaptations improve environment compatibility and executability. These findings highlight key challenges and opportunities for deploying LLMs in practical software engineering workflows.
|
https://arxiv.org/abs/2601.12262
|
Academic Papers
|
svg
|
563ff8b9c7695d1e5d41cf0204e4b6aa806e1cefeb7a48cce85f1b5bfcb89e77
|
2026-01-21T00:00:00-05:00
|
Multimodal Generative Engine Optimization: Rank Manipulation for Vision-Language Model Rankers
|
arXiv:2601.12263v1 Announce Type: new Abstract: Vision-Language Models (VLMs) are rapidly replacing unimodal encoders in modern retrieval and recommendation systems. While their capabilities are well-documented, their robustness against adversarial manipulation in competitive ranking scenarios remains largely unexplored. In this paper, we uncover a critical vulnerability in VLM-based product search: multimodal ranking attacks. We present Multimodal Generative Engine Optimization (MGEO), a novel adversarial framework that enables a malicious actor to unfairly promote a target product by jointly optimizing imperceptible image perturbations and fluent textual suffixes. Unlike existing attacks that treat modalities in isolation, MGEO employs an alternating gradient-based optimization strategy to exploit the deep cross-modal coupling within the VLM. Extensive experiments on real-world datasets using state-of-the-art models demonstrate that our coordinated attack significantly outperforms text-only and image-only baselines. These findings reveal that multimodal synergy, typically a strength of VLMs, can be weaponized to compromise the integrity of search rankings without triggering conventional content filters.
|
https://arxiv.org/abs/2601.12263
|
Academic Papers
|
svg
|
3f78a156a485aafcf8f61d6413ec4fd7c2ef11978f20d7cd7c9f5b32f57af0a7
|
2026-01-21T00:00:00-05:00
|
Statistical Firefly Algorithm for Truss Topology Optimization
|
arXiv:2601.12265v1 Announce Type: new Abstract: This study proposes an algorithm termed the statistical firefly algorithm (SFA) for truss topology optimization. In the proposed algorithm, historical results of fireflies' motions are used in hypothesis testing to limit the motions of fireflies that are suggested by current information exchanges between fireflies only to those that are potentially useful. Hypothesis testing is applied to the mechanism of an ordinary firefly algorithm (FA) without changing its structure. As a result, the implementation of the proposed algorithm is simple and straightforward. Limiting the motions of fireflies to those that are potentially useful reduces the number of firefly evaluations and, subsequently, the computational effort. To test the validity and efficiency of the proposed algorithm, it is used to solve several truss topology optimization problems, including some benchmark problems. It is found that the added statistical strategy in the SFA significantly enhances the performance of the original FA in terms of computational effort while still maintaining the quality of the obtained results.
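The gating idea described in the abstract can be sketched in a few lines. This is purely an illustrative construction, not the paper's algorithm: the toy objective, the one-sided z-test, and every threshold below are our own assumptions, and a real SFA would test motions against truss-analysis history rather than a sphere function.

```python
# Illustrative sketch only (NOT the paper's SFA): gate firefly attraction
# moves with a one-sided z-test on the historical success rate of past
# moves, so statistically unpromising moves skip the objective evaluation.
import math
import random

random.seed(0)

def sphere(x):
    """Toy objective standing in for a truss analysis (lower is better)."""
    return sum(v * v for v in x)

def attract(xi, xj, beta=1.0, gamma=0.01, alpha=0.05):
    """Standard FA move of firefly i toward brighter firefly j, plus noise."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    w = beta * math.exp(-gamma * r2)
    return [a + w * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]

def move_plausible(history, p0=0.2, z_crit=-1.64, n_min=20):
    """Attempt the move unless we can reject (at ~5%, one-sided) the
    hypothesis that its historical success probability is at least p0."""
    n = len(history)
    if n < n_min:
        return True  # not enough evidence yet: always try
    p_hat = sum(history) / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    return z > z_crit

pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(8)]
history, evals = [], 0
for _ in range(200):
    fit = [sphere(x) for x in pop]
    best = min(range(len(pop)), key=fit.__getitem__)
    for i in range(len(pop)):
        if i == best or not move_plausible(history):
            continue  # gated move: the objective evaluation is saved
        cand = attract(pop[i], pop[best])
        evals += 1
        improved = sphere(cand) < fit[i]
        history.append(1 if improved else 0)
        if improved:
            pop[i] = cand

best_val = min(sphere(x) for x in pop)
```

Because the test wraps around the ordinary FA move without altering it, the gate can be bolted onto an existing implementation, which mirrors the simplicity claim in the abstract.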
|
https://arxiv.org/abs/2601.12265
|
Academic Papers
|
svg
|
dec94b61b451a670cf1c473739c7c1b53ac377cde9de5f1ad8dc3227a094f86a
|
2026-01-21T00:00:00-05:00
|
Opportunistic Scheduling for Optimal Spot Instance Savings in the Cloud
|
arXiv:2601.12266v1 Announce Type: new Abstract: We study the problem of scheduling delay-sensitive jobs over spot and on-demand cloud instances to minimize average cost while meeting an average delay constraint. Jobs arrive as a general stochastic process, and incur different costs based on the instance type. This work provides the first analytical treatment of this problem using tools from queuing theory, stochastic processes, and optimization. We derive cost expressions for general policies, prove queue length one is optimal for low target delays, and characterize the optimal wait-time distribution. For high target delays, we identify a knapsack structure and design a scheduling policy that exploits it. An adaptive algorithm is proposed to fully utilize the allowed delay, and empirical results confirm its near-optimality.
|
https://arxiv.org/abs/2601.12266
|
Academic Papers
|
svg
|
eb6ff1c47d393c652f8c3978e2b37734e08692c22b66286a6ba26ea7d865d6c4
|
2026-01-21T00:00:00-05:00
|
Simulated Annealing Enhances Theory-of-Mind Reasoning in Autoregressive Language Models
|
arXiv:2601.12269v1 Announce Type: new Abstract: Autoregressive language models are next-token predictors and have been criticized for only optimizing surface plausibility (i.e., local coherence) rather than maintaining correct latent-state representations (i.e., global coherence). Because Theory of Mind (ToM) tasks crucially depend on reasoning about latent mental states of oneself and others, such models are therefore often thought to fail at ToM. While post-training methods can improve ToM performance, we show that strong ToM capability can be recovered directly from the base model without any additional weight updates or verifications. Our approach builds on recent power-sampling methods (Karan & Du, 2025) that use Markov chain Monte Carlo (MCMC) to sample from sharpened sequence-level (rather than token-level) probability distributions of autoregressive language models. We further find that incorporating annealing, where the tempered distribution is gradually shifted from high to low temperature, substantially improves ToM performance over fixed-temperature power sampling. Together, these results suggest that sampling-based optimization provides a powerful way to extract latent capabilities from language models without retraining.
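The annealed power-sampling idea above can be illustrated on a toy "model". This is our own construction, not the paper's code: the bigram scoring table stands in for a language model's sequence-level log-probability, and all hyperparameters are invented for the demo.

```python
# Toy illustration of annealed power sampling (our construction, not the
# paper's): Metropolis resampling of single tokens targeting the sharpened
# sequence-level distribution p(x)^(1/T), while T anneals from high to low.
import math
import random

random.seed(1)

VOCAB = "abc"
PAIR_SCORE = {"ab": 2.0, "bc": 2.0, "ca": 2.0}  # stand-in sequence log-prob

def seq_logprob(s):
    """Sequence-level score: sum of bigram scores (max 10.0 at length 6)."""
    return sum(PAIR_SCORE.get(s[i:i + 2], 0.0) for i in range(len(s) - 1))

def annealed_power_sample(length=6, steps=2000, t_hi=5.0, t_lo=0.2):
    seq = "".join(random.choice(VOCAB) for _ in range(length))
    lp = seq_logprob(seq)
    for step in range(steps):
        # Linear annealing schedule from high to low temperature.
        t = t_hi + (t_lo - t_hi) * step / (steps - 1)
        i = random.randrange(length)
        cand = seq[:i] + random.choice(VOCAB) + seq[i + 1:]
        lp_cand = seq_logprob(cand)
        # Metropolis acceptance for the tempered target exp(lp / t).
        if math.log(random.random() + 1e-12) < (lp_cand - lp) / t:
            seq, lp = cand, lp_cand
    return seq, lp

sample, lp = annealed_power_sample()
```

Early high-temperature steps explore freely, while the late low-temperature steps sharpen toward globally coherent sequences, which is the mechanism the abstract credits for the ToM gains.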
|
https://arxiv.org/abs/2601.12269
|
Academic Papers
|
svg
|
c3b3f989ba86ee631f31915713bde5828ee9c94bd897c42c36f8eb65497cf602
|
2026-01-21T00:00:00-05:00
|
SplittingSecrets: A Compiler-Based Defense for Preventing Data Memory-Dependent Prefetcher Side-Channels
|
arXiv:2601.12270v1 Announce Type: new Abstract: Traditional side-channels take advantage of secrets being used as inputs to unsafe instructions, used for memory accesses, or used in control flow decisions. Constant-time programming, which restricts such code patterns, has been widely adopted as a defense against these vulnerabilities. However, new hardware optimizations in the form of Data Memory-dependent Prefetchers (DMPs) present in Apple, Intel, and ARM CPUs have shown such defenses are not sufficient. These prefetchers, unlike classical prefetchers, use the content of memory as well as the trace of prior accesses to determine prefetch targets. An adversary abusing such a prefetcher has been shown to be able to mount attacks leaking data-at-rest: data that is never used by the program, even speculatively, in an unsafe manner. In response, this paper introduces SplittingSecrets, a compiler-based tool that can harden software libraries against side-channels arising from DMPs. SplittingSecrets's approach avoids reasoning about the complex internals of different DMPs and instead relies on one key aspect of all DMPs: activation requires data to resemble addresses. To prevent secret data from leaking, SplittingSecrets transforms memory operations to ensure that secrets are never stored in memory in a manner resembling an address, thereby avoiding DMP activation on those secrets. Rather than disable a DMP entirely, SplittingSecrets can provide targeted hardening for only specific secrets entirely in software. We have implemented SplittingSecrets using LLVM, supporting both source-level memory operations and those generated by the compiler backend for the AArch64 architecture. We have analyzed the performance overhead involved in safeguarding secrets from DMP-induced attacks using common primitives in libsodium, a popular cryptographic library, when built for Apple M-series CPUs.
|
https://arxiv.org/abs/2601.12270
|
Academic Papers
|
svg
|
510249bc3b93ed7039225597f4376580e1dda512d56b511db8f07bc76f2805f5
|
2026-01-21T00:00:00-05:00
|
AgenticPruner: MAC-Constrained Neural Network Compression via LLM-Driven Strategy Search
|
arXiv:2601.12272v1 Announce Type: new Abstract: Neural network pruning remains essential for deploying deep learning models on resource-constrained devices, yet existing approaches primarily target parameter reduction without directly controlling computational cost. This yields unpredictable inference latency in deployment scenarios where strict Multiply-Accumulate (MAC) operation budgets must be met. We propose AgenticPruner, a framework utilizing large language models to achieve MAC-constrained optimization through iterative strategy learning. Our approach coordinates three specialized agents: a Profiling Agent that analyzes model architecture and MAC distributions, a Master Agent that orchestrates the workflow with divergence monitoring, and an Analysis Agent powered by Claude 3.5 Sonnet that learns optimal strategies from historical attempts. Through in-context learning, the Analysis Agent improves convergence success rate from 48% to 71% compared to grid search. Building upon isomorphic pruning's graph-based structural grouping, our method adds context-aware adaptation by analyzing patterns across pruning iterations, enabling automatic convergence to target MAC budgets within user-defined tolerance bands. We validate our framework on ImageNet-1K across ResNet, ConvNeXt, and DeiT architectures. On CNNs, our approach achieves MAC targeting while maintaining or improving accuracy: ResNet-50 reaches 1.77G MACs with 77.04% accuracy (+0.91% vs baseline); ResNet-101 achieves 4.22G MACs with 78.94% accuracy (+1.56% vs baseline). For ConvNeXt-Small, pruning to 8.17G MACs yields 1.41x GPU and 1.07x CPU speedup with 45% parameter reduction. On Vision Transformers, we demonstrate MAC-budget compliance within user-defined tolerance bands (typically +1% to +5% overshoot, -5% to -15% undershoot), establishing feasibility for deployment scenarios requiring strict computational guarantees.
|
https://arxiv.org/abs/2601.12272
|
Academic Papers
|
svg
|
7bc4812a6c706c1a1858925e47952f65e4ae6429c769c972c782b451893d6003
|
2026-01-21T00:00:00-05:00
|
Leveraging Mutation Analysis for LLM-based Repair of Quantum Programs
|
arXiv:2601.12273v1 Announce Type: new Abstract: In recent years, Automated Program Repair (APR) techniques specifically designed for quantum programs have been proposed. However, existing approaches often suffer from low repair success rates or poor understandability of the generated patches. In this study, we construct a framework in which a large language model (LLM) generates code repairs along with a natural language explanation of the applied repairs. To investigate how the contextual information included in prompts influences APR performance for quantum programs, we design four prompt configurations with different combinations of static information, dynamic information, and mutation analysis results. Mutation analysis evaluates how small changes to specific parts of a program affect its execution results and provides more detailed dynamic information than simple execution outputs such as stack traces. Our experimental results show that mutation analysis can provide valuable contextual information for LLM-based APR of quantum programs, improving repair success rates (achieving 94.4% in our experiment) and in some cases also improving the quality of generated explanations. Our findings point toward new directions for developing APR techniques for quantum programs that enhance both reliability and explainability.
|
https://arxiv.org/abs/2601.12273
|
Academic Papers
|
svg
|
2982009f8c075671762d3e587c9076b8513218b0b7df37e8ff0640945dda5c04
|
2026-01-21T00:00:00-05:00
|
Hybrid Concolic Testing with Large Language Models for Guided Path Exploration
|
arXiv:2601.12274v1 Announce Type: new Abstract: Concolic testing, a powerful hybrid software testing technique, has historically been plagued by fundamental limitations such as path explosion and the high cost of constraint solving, which hinder its practical application in large-scale, real-world software systems. This paper introduces a novel algorithmic framework that synergistically integrates concolic execution with Large Language Models (LLMs) to overcome these challenges. Our hybrid approach leverages the semantic reasoning capabilities of LLMs to guide path exploration, prioritize interesting execution paths, and assist in constraint solving. We formally define the system architecture and algorithms that constitute this new paradigm. Through a series of experiments on both synthetic and real-world Fintech applications, we demonstrate that our approach significantly outperforms traditional concolic testing, random testing, and genetic algorithm-based methods in terms of branch coverage, path coverage, and time-to-coverage. The results indicate that by combining the strengths of both concolic execution and LLMs, our method achieves a more efficient and effective exploration of the program state space, leading to improved bug detection capabilities.
|
https://arxiv.org/abs/2601.12274
|
Academic Papers
|
svg
|
d514c3b1120306a2c28411e0ade5b5475e00b53a1a26e2ed85ff029d649b80e2
|
2026-01-21T00:00:00-05:00
|
Predictive Prototyping: Evaluating Design Concepts with ChatGPT
|
arXiv:2601.12276v1 Announce Type: new Abstract: The design-build-test cycle is essential for innovation, but physical prototyping is often slow and expensive. Although physics-based simulation and strategic prototyping can reduce cost, meaningful evaluation is frequently constrained until an integrated prototype is built. This paper investigates whether a generative pretrained transformer (GPT) can predict information typically obtained through prototyping, including cost, performance, and perceived usability. We introduce a retrieval-augmented generation (RAG) method to emulate design feedback using OpenAI GPT-4o, grounded in prototyping data scraped from Instructables.com to increase access to relevant precedent. Two studies are reported. First, a controlled experiment compares GPT-RAG and human designers, who receive design sketches and predict cost, performance, and usability; predictions are evaluated against ground-truth results from physical prototypes. Second, we report an applied demonstration in which a physical prototype is produced from GPT-RAG recommendations and compared with a commercial baseline and a topology-optimized design. Results show that GPT-RAG provides more accurate cost and performance estimates than individual or crowd human estimates, while yielding comparable usability insights; the GPT-RAG-informed prototype also outperforms both comparison prototypes. Repeated querying with response averaging significantly improves accuracy, suggesting that LLMs can emulate crowd aggregation effects consistent with the law of large numbers.
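The closing claim about repeated querying can be demonstrated numerically. The sketch below is ours, not from the paper: the ground-truth value and noise level are invented, and a simulated Gaussian response stands in for a GPT-RAG query, showing why averaging unbiased estimates shrinks the error roughly like 1/sqrt(n).

```python
# Minimal numeric illustration (ours, not the paper's experiment) of the
# crowd-aggregation effect: averaging 25 noisy, unbiased "query" responses
# yields a far smaller typical error than a single response.
import random
import statistics

random.seed(42)
TRUTH = 100.0  # hypothetical ground-truth cost from a physical prototype

def one_query():
    """One simulated GPT response: unbiased but noisy (sd = 15)."""
    return random.gauss(TRUTH, 15.0)

single_errors = [abs(one_query() - TRUTH) for _ in range(500)]
averaged_errors = [
    abs(statistics.fmean(one_query() for _ in range(25)) - TRUTH)
    for _ in range(500)
]
```

With 25 queries per estimate, the standard error drops by a factor of 5, consistent with the law-of-large-numbers argument the abstract invokes.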
|
https://arxiv.org/abs/2601.12276
|
Academic Papers
|
svg
|
b8f288b7f87b51a87951943b076829669d6a10bc3ddee71a4af238ceb5e0f234
|
2026-01-21T00:00:00-05:00
|
An Efficient and Multi-Modal Navigation System with One-Step World Model
|
arXiv:2601.12277v1 Announce Type: new Abstract: Navigation is a fundamental capability for mobile robots. While the current trend is to use learning-based approaches to replace traditional geometry-based methods, existing end-to-end learning-based policies often struggle with 3D spatial reasoning and lack a comprehensive understanding of physical world dynamics. Integrating world models, which predict future observations conditioned on given actions, with iterative optimization planning offers a promising solution due to their capacity for imagination and flexibility. However, current navigation world models, typically built on pure transformer architectures, often rely on multi-step diffusion processes and autoregressive frame-by-frame generation. These mechanisms result in prohibitive computational latency, rendering real-time deployment impossible. To address this bottleneck, we propose a lightweight navigation world model that adopts a one-step generation paradigm and a 3D U-Net backbone equipped with efficient spatial-temporal attention. This design drastically reduces inference latency, enabling high-frequency control while achieving superior predictive performance. We also integrate this model into an optimization-based planning framework utilizing anchor-based initialization to handle multi-modal goal navigation tasks. Extensive closed-loop experiments in both simulation and real-world environments demonstrate our system's superior efficiency and robustness compared to state-of-the-art baselines.
|
https://arxiv.org/abs/2601.12277
|
Academic Papers
|
svg
|
0965d336afd49f06decdbaeca723065e617552e1c69b1d500ec158c228acb432
|
2026-01-21T00:00:00-05:00
|
HCFT: Hierarchical Convolutional Fusion Transformer for EEG Decoding
|
arXiv:2601.12279v1 Announce Type: new Abstract: Electroencephalography (EEG) decoding requires models that can effectively extract and integrate complex temporal, spectral, and spatial features from multichannel signals. To address this challenge, we propose a lightweight and generalizable decoding framework named Hierarchical Convolutional Fusion Transformer (HCFT), which combines dual-branch convolutional encoders and hierarchical Transformer blocks for multi-scale EEG representation learning. Specifically, the model first captures local temporal and spatiotemporal dynamics through time-domain and time-space convolutional branches, and then aligns these features via a cross-attention mechanism that enables interaction between branches at each stage. Subsequently, a hierarchical Transformer fusion structure is employed to encode global dependencies across all feature stages, while a customized Dynamic Tanh normalization module is introduced to replace traditional Layer Normalization in order to enhance training stability and reduce redundancy. Extensive experiments are conducted on two representative benchmark datasets, BCI Competition IV-2b and CHB-MIT, covering both event-related cross-subject classification and continuous seizure prediction tasks. Results show that HCFT achieves 80.83% average accuracy and a Cohen's kappa of 0.6165 on BCI IV-2b, as well as 99.10% sensitivity, 0.0236 false positives per hour, and 98.82% specificity on CHB-MIT, consistently outperforming over ten state-of-the-art baseline methods. Ablation studies confirm that each core component of the proposed framework contributes significantly to the overall decoding performance, demonstrating HCFT's effectiveness in capturing EEG dynamics and its potential for real-world BCI applications.
|
https://arxiv.org/abs/2601.12279
|
Academic Papers
|
svg
|
33ee5be8bdcdfb276fe452c8fe45168c2fce1b39163d28df2084151320ebfff7
|
2026-01-21T00:00:00-05:00
|
Democratizing Music Therapy: LLM-Based Automated EEG Analysis and Progress Tracking for Low-Cost Home Devices
|
arXiv:2601.12280v1 Announce Type: new Abstract: Home-based music therapy devices require accessible and cost-effective solutions for users to understand and track their therapeutic progress. Traditional physiological signal analysis, particularly EEG interpretation, relies heavily on domain experts, creating barriers to scalability and home adoption. Meanwhile, few experts are capable of interpreting physiological signal data while also making targeted music recommendations. While large language models (LLMs) have shown promise in various domains, their application to automated physiological report generation for music therapy represents an unexplored task. We present a prototype system that leverages LLMs to bridge this gap -- transforming raw EEG and cardiovascular data into human-readable therapeutic reports and personalized music recommendations. Unlike prior work focusing on real-time physiological adaptation during listening, our approach emphasizes post-session analysis and interpretable reporting, enabling non-expert users to comprehend their psychophysiological states and track therapeutic outcomes over time. By integrating signal processing modules with LLM-based reasoning agents, the system provides a practical and low-cost solution for short-term progress monitoring in home music therapy contexts. This work demonstrates the feasibility of applying LLMs to a novel task -- democratizing access to physiology-driven music therapy through automated, interpretable reporting.
|
https://arxiv.org/abs/2601.12280
|
Academic Papers
|
svg
|
e135432833f20e5843fac9cbed113c24559da2e833813b1c197aca1ffb010a79
|
2026-01-21T00:00:00-05:00
|
CytoCLIP: Learning Cytoarchitectural Characteristics in Developing Human Brain Using Contrastive Language Image Pre-Training
|
arXiv:2601.12282v1 Announce Type: new Abstract: The functions of different regions of the human brain are closely linked to their distinct cytoarchitecture, which is defined by the spatial arrangement and morphology of the cells. Identifying brain regions by their cytoarchitecture enables various scientific analyses of the brain. However, delineating these areas manually in brain histological sections is time-consuming and requires specialized knowledge. An automated approach is necessary to minimize the effort needed from human experts. To address this, we propose CytoCLIP, a suite of vision-language models derived from pre-trained Contrastive Language-Image Pre-Training (CLIP) frameworks to learn joint visual-text representations of brain cytoarchitecture. CytoCLIP comprises two model variants: one is trained using low-resolution whole-region images to understand the overall cytoarchitectural pattern of an area, and the other is trained on high-resolution image tiles for detailed cellular-level representation. The training dataset is created from Nissl-stained histological sections of developing fetal brains of different gestational weeks. It includes 86 distinct regions for low-resolution images and 384 brain regions for high-resolution tiles. We evaluate the model's understanding of the cytoarchitecture and generalization ability using region classification and cross-modal retrieval tasks. Multiple experiments are performed under various data setups, including data from samples of different ages and sectioning planes. Experimental results demonstrate that CytoCLIP outperforms existing methods. It achieves an F1 score of 0.87 for whole-region classification and 0.91 for high-resolution image tile classification.
|
https://arxiv.org/abs/2601.12282
|
Academic Papers
|
svg
|
d1c29979eb1ef325a51474d8684a4387b8c824f43af340424ed6e19ad2163668
|
2026-01-21T00:00:00-05:00
|
SDiT: Semantic Region-Adaptive for Diffusion Transformers
|
arXiv:2601.12283v1 Announce Type: new Abstract: Diffusion Transformers (DiTs) achieve state-of-the-art performance in text-to-image synthesis but remain computationally expensive due to the iterative nature of denoising and the quadratic cost of global attention. In this work, we observe that denoising dynamics are spatially non-uniform: background regions converge rapidly while edges and textured areas evolve much more actively. Building on this insight, we propose SDiT, a Semantic Region-Adaptive Diffusion Transformer that allocates computation according to regional complexity. SDiT introduces a training-free framework combining (1) semantic-aware clustering via fast Quickshift-based segmentation, (2) complexity-driven regional scheduling to selectively update informative areas, and (3) boundary-aware refinement to maintain spatial coherence. Without any model retraining or architectural modification, SDiT achieves up to 3.0x acceleration while preserving nearly identical perceptual and semantic quality to full-attention inference.
|
https://arxiv.org/abs/2601.12283
|
Academic Papers
|
svg
|
d0ba8f145b771339854565b53d420af0763220da333b47cc94836532c49dcee9
|
2026-01-21T00:00:00-05:00
|
How Safe Is Your Data in Connected and Autonomous Cars: A Consumer Advantage or a Privacy Nightmare?
|
arXiv:2601.12284v1 Announce Type: new Abstract: The rapid evolution of the automobile sector, driven by advancements in connected and autonomous vehicles (CAVs), has transformed how vehicles communicate, operate, and interact with their surroundings. Technologies such as Vehicle-to-Everything (V2X) communication enable autonomous cars to generate and exchange substantial amounts of data with real-world entities, enhancing safety, improving performance, and delivering personalized user experiences. However, this data-driven ecosystem introduces significant challenges, particularly concerning data privacy, security, and governance. The absence of transparency and comprehensive regulatory frameworks exacerbates issues of unauthorized data access, prolonged retention, and potential misuse, creating tension between consumer benefits and privacy risks. This review paper explores the multifaceted nature of data sharing in CAVs, analyzing its contributions to innovation and its associated vulnerabilities. It evaluates data-sharing mechanisms and communication technologies, highlights the benefits of data exchange across various use cases, examines privacy concerns and risks of data misuse, and critically reviews regulatory frameworks and their inadequacies in safeguarding user privacy. By providing a thorough analysis of the current state of data sharing in the automotive sector, the paper emphasizes the urgent need for robust policies and ethical data management practices. It calls for striking a balance between fostering technological advancements and ensuring secure, consumer-friendly solutions, paving the way for a trustworthy and innovative automotive future.
|
https://arxiv.org/abs/2601.12284
|
Academic Papers
|
svg
|
3e10a5f27b99e15100a0867ea264be02912684233f3c855de7c55235bc47141b
|
2026-01-21T00:00:00-05:00
|
LegacyAvatars: Volumetric Face Avatars For Traditional Graphics Pipelines
|
arXiv:2601.12285v1 Announce Type: new Abstract: We introduce a novel representation for efficient classical rendering of photorealistic 3D face avatars. Leveraging recent advances in radiance fields anchored to parametric face models, our approach achieves controllable volumetric rendering of complex facial features, including hair, skin, and eyes. At enrollment time, we learn a set of radiance manifolds in 3D space to extract an explicit layered mesh, along with appearance and warp textures. During deployment, this allows us to control and animate the face through simple linear blending and alpha compositing of textures over a static mesh. This explicit representation also enables the generated avatar to be efficiently streamed online and then rendered using classical mesh and shader-based rendering on legacy graphics platforms, eliminating the need for any custom engineering or integration.
|
https://arxiv.org/abs/2601.12285
|
Academic Papers
|
svg