Dataset schema: id (string, 64 chars); published (string, 19-25 chars); title (string, 7-262 chars); description (string, 6-54.4k chars); link (string, 31-227 chars); category (string, 6 classes); image (string, 3-247 chars)
bf6bc7fad104dd645cc3d193fd6393872334e0642d5eec51c46963030eec349d
2026-01-13T00:00:00-05:00
PsyAgent: Constructing Human-like Agents Based on Psychological Modeling and Contextual Interaction
arXiv:2601.06158v1 Announce Type: new Abstract: Human-like agents require modeling how dispositions interact with social structure. We present PsyAgent, which couples a Big Five trait prior with Bourdieu's cognitive-social co-structure. PsyAgent comprises: (i) Individual Structure (IS), a machine-usable profile encoding traits and facets, cognitive style, values, cultural and educational capital, and salient life episodes; and (ii) Multi-Scenario Contexting (MSC), role-relationship-norm frames spanning eight arenas (work, family, friendship, strangers and civic life, solitude and self-regulation, romance, learning, and public expression). At inference, fixed structured prompts bind the active scenario to the agent profile, yielding behavior that is stable yet context-sensitive. We instantiate IS and MSC to synthesize supervision (role-play dialogues, decision probes, feedback trajectories) and then fine-tune a small LLM. The resulting model produces consistent, identifiable persona-aligned behaviors for specified Big Five configurations and matches or exceeds several larger untuned LLMs and other untuned baselines on our metrics: persona consistency, contextual appropriateness, style matching, trait identifiability, and long-horizon stability. Ablations show IS chiefly improves trait fidelity and stylistic stability, while MSC drives norm awareness and decision fit; both are necessary for cross-scenario performance. PsyAgent offers a precise, data-efficient architecture for personality-grounded agents.
https://arxiv.org/abs/2601.06158
Academic Papers
svg
fc8458d5d54f7024445da5fb1b41cb62478944ca0d38074f894ef62f17839c2a
2026-01-13T00:00:00-05:00
Can we Improve Prediction of Psychotherapy Outcomes Through Pretraining With Simulated Data?
arXiv:2601.06159v1 Announce Type: new Abstract: In the context of personalized medicine, machine learning algorithms are growing in popularity. These algorithms require substantial information, which can be acquired effectively by using previously gathered data. Open data and synthetization techniques have been proposed to address this. In this paper, we propose and evaluate an alternative approach that uses additional simulated data based on summary statistics published in the literature. The simulated data are used to pretrain random forests, which are afterwards fine-tuned on a real dataset. We compare the predictive performance of the new approach to random forests trained only on the real data. A Monte Carlo Cross Validation (MCCV) framework with 100 iterations was employed to investigate the significance and stability of the results. Since a first study yielded inconclusive results, a second study with improved methodology (i.e., systematic information extraction and a different prediction outcome) was conducted. In Study 1, some pretrained random forests descriptively outperformed the standard random forest. However, this improvement was not significant (t(99) = 0.89, p = 0.19). Contrary to expectations, in Study 2 the random forest trained only on the real data outperformed the pretrained random forests. We conclude with a discussion of challenges, such as the scarcity of informative publications, and recommendations for future research.
https://arxiv.org/abs/2601.06159
Academic Papers
svg
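The pretraining-then-fine-tuning idea in the abstract above can be illustrated with a toy stand-in. Random forests have no natural fine-tuning step, so this sketch substitutes a nearest-centroid classifier: "pretrained" centroids come from data simulated out of hypothetical published summary statistics, then get blended with a small real dataset. All class names, statistics, and the blending weight are invented for illustration, not taken from the paper.

```python
import random

random.seed(0)

# Hypothetical published summary statistics: (mean, SD) of one predictor
# per outcome class. These numbers are invented for illustration.
SUMMARY = {"responder": (2.0, 1.0), "non_responder": (5.0, 1.0)}

def simulate(n):
    """Draw n synthetic values per class from the summary statistics."""
    return {lab: [random.gauss(mu, sd) for _ in range(n)]
            for lab, (mu, sd) in SUMMARY.items()}

def mean(xs):
    return sum(xs) / len(xs)

def pretrain_then_finetune(sim, real, w=0.5):
    """'Pretrain' centroids on simulated data, then blend in real-data means."""
    return {lab: (1 - w) * mean(sim[lab]) + w * mean(real[lab]) for lab in sim}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

sim = simulate(200)
real = {"responder": [1.8, 2.2, 2.1], "non_responder": [4.9, 5.3]}  # small real set
centroids = pretrain_then_finetune(sim, real)
print(predict(centroids, 2.0), predict(centroids, 5.0))
```

The design choice mirrors the paper's question: how much weight should scarce real data get relative to literature-derived simulations (here, the hypothetical `w`).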
dc7ba22c193c18f996bd62dcf0284cff42cffb3e6f560e8b3ed3405661589a5e
2026-01-13T00:00:00-05:00
Student Guides Teacher: Weak-to-Strong Inference via Spectral Orthogonal Exploration
arXiv:2601.06160v1 Announce Type: new Abstract: While Large Language Models (LLMs) demonstrate near-human capabilities, they often suffer from "Reasoning Collapse" in complex mathematical proving and long-horizon planning. Models tend to degenerate into a low-rank Bias Manifold, where stochastic sampling merely produces lexical variations of erroneous logic rather than semantic exploration. This geometric collapse renders the model "blind" to high-value solutions that lie within its Null Space. To address this, we propose Spectral Orthogonal Exploration (SOE), a geometric framework operating on a counter-intuitive "Student Guides Teacher" paradigm. Specifically, we utilize a weak auxiliary agent not for imitation, but as an orthogonal probe. By explicitly navigating the Teacher's Null Space, SOE serves as a geometric bridge, effectively ejecting the model from local optima to explore diverse, high-value solution spaces. Experiments on mathematical benchmarks demonstrate that, relative to baseline methods, our approach improves average accuracy by 62.4% and increases average sampling efficiency by 113.7%, indicating a promising path toward overcoming performance plateaus in advanced reasoning tasks.
https://arxiv.org/abs/2601.06160
Academic Papers
svg
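The core geometric move described above, probing directions orthogonal to a low-rank subspace, can be sketched without the paper's machinery. Below, a hypothetical "bias manifold" is the span of two vectors in R^3, and Gram-Schmidt projection strips a probe vector of its components along that span, leaving the null-space direction such a method would explore. The vectors are fabricated; this is an illustration of orthogonal complements, not SOE itself.

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def scale(a, s): return [x * s for x in a]
def norm(a): return dot(a, a) ** 0.5

def project_out(v, basis):
    """Remove from v its components along each basis vector (after
    orthonormalizing the basis), leaving the part of v in the null space."""
    ortho = []
    for b in basis:
        for o in ortho:
            b = sub(b, scale(o, dot(b, o)))   # Gram-Schmidt step
        n = norm(b)
        if n > 1e-12:
            ortho.append(scale(b, 1 / n))
    for o in ortho:
        v = sub(v, scale(o, dot(v, o)))
    return v

# Toy "bias manifold": span of two directions in R^3.
manifold = [[1, 0, 0], [0, 1, 0]]
probe = [0.3, -0.7, 2.0]          # the weak student's probe direction
explore = project_out(probe, manifold)
print(explore)
```

The surviving component (here, the z-axis part) is exactly what sampling confined to the manifold could never reach, which is the geometric intuition behind the "ejecting from local optima" claim.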
938a7f7593794b9ec156d6d60d17f842e154d08da691768a88ab4d5d581418d2
2026-01-13T00:00:00-05:00
Beyond Accuracy: A Decision-Theoretic Framework for Allocation-Aware Healthcare AI
arXiv:2601.06161v1 Announce Type: new Abstract: Artificial intelligence (AI) systems increasingly achieve expert-level predictive accuracy in healthcare, yet improvements in model performance often fail to produce corresponding gains in patient outcomes. We term this disconnect the allocation gap and provide a decision-theoretic explanation by modelling healthcare delivery as a stochastic allocation problem under binding resource constraints. In this framework, AI acts as decision infrastructure that estimates utility rather than making autonomous decisions. Using constrained optimisation and Markov decision processes, we show how improved estimation affects optimal allocation under scarcity. A synthetic triage simulation demonstrates that allocation-aware policies substantially outperform risk-threshold approaches in realised utility, even with identical predictive accuracy. The framework provides a principled basis for evaluating and deploying healthcare AI in resource-constrained settings.
https://arxiv.org/abs/2601.06161
Academic Papers
svg
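The "allocation gap" in the abstract above can be made concrete with a toy comparison (not the paper's simulation): a fixed-threshold policy treats patients in arrival order once they clear a cutoff, while an allocation-aware policy ranks by estimated utility under the same capacity. The utilities, cutoff, and capacity below are invented.

```python
def threshold_policy(utilities, capacity, tau=0.5):
    """Treat patients above a fixed cutoff, in arrival order, until capacity runs out."""
    chosen = [i for i, u in enumerate(utilities) if u >= tau]
    return chosen[:capacity]

def allocation_policy(utilities, capacity):
    """Spend scarce capacity on the highest-estimated-utility patients."""
    ranked = sorted(range(len(utilities)), key=lambda i: utilities[i], reverse=True)
    return ranked[:capacity]

# Hypothetical per-patient treatment benefit; capacity for only two treatments.
utilities = [0.55, 0.52, 0.9, 0.51, 0.1]
cap = 2
t = threshold_policy(utilities, cap)
a = allocation_policy(utilities, cap)
realized_t = sum(utilities[i] for i in t)
realized_a = sum(utilities[i] for i in a)
print(realized_t, realized_a)
```

Both policies see identical predictions, yet the threshold policy fills capacity with the first patients to clear the cutoff and misses the highest-benefit case, which is the paper's point that accuracy alone does not determine realized utility under scarcity.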
c3e0e18ebc9f51947d0765bb3e0f00ec9d7efa043ef7fe1e01e98199849451f9
2026-01-13T00:00:00-05:00
Forget Many, Forget Right: Scalable and Precise Concept Unlearning in Diffusion Models
arXiv:2601.06162v1 Announce Type: new Abstract: Text-to-image diffusion models have achieved remarkable progress, yet their use raises copyright and misuse concerns, prompting research into machine unlearning. However, extending multi-concept unlearning to large-scale scenarios remains difficult due to three challenges: (i) conflicting weight updates that hinder unlearning or degrade generation; (ii) imprecise mechanisms that cause collateral damage to similar content; and (iii) reliance on additional data or modules, creating scalability bottlenecks. To address these, we propose Scalable-Precise Concept Unlearning (ScaPre), a unified framework tailored for large-scale unlearning. ScaPre introduces a conflict-aware stable design, integrating spectral trace regularization and geometry alignment to stabilize optimization, suppress conflicts, and preserve global structure. Furthermore, an Informax Decoupler identifies concept-relevant parameters and adaptively reweights updates, strictly confining unlearning to the target subspace. ScaPre yields an efficient closed-form solution without requiring auxiliary data or sub-models. Comprehensive experiments on objects, styles, and explicit content demonstrate that ScaPre effectively removes target concepts while maintaining generation quality. It forgets up to $\times \mathbf{5}$ more concepts than the best baseline within acceptable quality limits, achieving state-of-the-art precision and efficiency for large-scale unlearning.
https://arxiv.org/abs/2601.06162
Academic Papers
svg
83baff6a9b4d66b6071f83220811f2b25fa38553b0ed9ac7953b762b4dcf23f6
2026-01-13T00:00:00-05:00
Forget-It-All: Multi-Concept Machine Unlearning via Concept-Aware Neuron Masking
arXiv:2601.06163v1 Announce Type: new Abstract: The widespread adoption of text-to-image (T2I) diffusion models has raised concerns about their potential to generate copyrighted, inappropriate, or sensitive imagery learned from massive training corpora. As a practical solution, machine unlearning aims to selectively erase unwanted concepts from a pre-trained model without retraining from scratch. While most existing methods are effective for single-concept unlearning, they often struggle in real-world scenarios that require removing multiple concepts, since extending them to this setting is both non-trivial and problematic, causing significant challenges in unlearning effectiveness, generation quality, and sensitivity to hyperparameters and datasets. In this paper, we take a unique perspective on multi-concept unlearning by leveraging model sparsity and propose the Forget It All (FIA) framework. FIA first introduces Contrastive Concept Saliency to quantify each weight connection's contribution to a target concept. It then identifies Concept-Sensitive Neurons by combining temporal and spatial information, ensuring that only neurons consistently responsive to the target concept are selected. Finally, FIA constructs masks from the identified neurons and fuses them into a unified multi-concept mask, where Concept-Agnostic Neurons that broadly support general content generation are preserved while concept-specific neurons are pruned to remove the targets. FIA is training-free and requires only minimal hyperparameter tuning for new tasks, thereby promoting a plug-and-play paradigm. Extensive experiments across three distinct unlearning tasks demonstrate that FIA achieves more reliable multi-concept unlearning, improving forgetting effectiveness while maintaining semantic fidelity and image quality.
https://arxiv.org/abs/2601.06163
Academic Papers
svg
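The neuron-masking idea above can be sketched in miniature. The toy function below marks a neuron for pruning only if its activation is consistently higher on target-concept inputs than on general inputs across all (toy) timesteps, echoing the paper's requirement that selected neurons respond to the concept both temporally and spatially. Activations, the threshold, and the mask convention (0 = pruned, 1 = kept) are all fabricated for illustration.

```python
def concept_mask(concept_acts, general_acts, tau=0.5):
    """Prune neurons whose activation consistently exceeds the general
    baseline by more than tau at every timestep; keep the rest."""
    n = len(concept_acts[0])
    mask = []
    for j in range(n):
        consistent = all(c[j] - g[j] > tau
                         for c, g in zip(concept_acts, general_acts))
        mask.append(0.0 if consistent else 1.0)  # 0 = pruned, 1 = kept
    return mask

# Hypothetical activations: 4 neurons observed over 3 denoising timesteps.
concept = [[2.0, 0.1, 1.9, 0.2]] * 3
general = [[0.1, 0.1, 0.2, 0.1]] * 3
mask = concept_mask(concept, general)
print(mask)
```

Neurons 0 and 2 fire only for the concept and are pruned; neurons 1 and 3, which support general content, survive, which is the concept-agnostic/concept-specific split the abstract describes.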
a1af822f570d682df9a2a11aa01d1003040ff17887e2238926f0e4080ed8ff68
2026-01-13T00:00:00-05:00
Contract2Plan: Verified Contract-Grounded Retrieval-Augmented Optimization for BOM-Aware Procurement and Multi-Echelon Inventory Planning
arXiv:2601.06164v1 Announce Type: new Abstract: Procurement and inventory planning is governed not only by demand forecasts and bills of materials (BOMs), but also by operational terms in contracts and supplier documents (e.g., MOQs, lead times, price tiers, allocation caps, substitution approvals). LLM-based extraction can speed up structuring these terms, but extraction-only or LLM-only decision pipelines are brittle: missed clauses, unit errors, and unresolved conflicts can yield infeasible plans or silent contract violations, amplified by BOM coupling. We introduce Contract2Plan, a verified GenAI-to-optimizer pipeline that inserts a solver-based compliance gate before plans are emitted. The system retrieves clause evidence with provenance, extracts a typed constraint schema with evidence spans, compiles constraints into a BOM-aware MILP, and verifies grounding, eligibility, consistency, and feasibility using solver diagnostics, triggering targeted repair or abstention when automation is unsafe. We formalize which clause classes admit conservative repair with contract-safe feasibility guarantees and which require human confirmation. A self-contained synthetic micro-benchmark (500 instances; T=5) computed by exact enumeration under an execution model with MOQ uplift and emergency purchases shows heavy-tailed regret and nontrivial MOQ-violation incidence for extraction-only planning, motivating verification as a first-class component of contract-grounded planning systems.
https://arxiv.org/abs/2601.06164
Academic Papers
svg
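The execution model mentioned above (MOQ uplift plus emergency purchases) can be illustrated by exhaustive search on a one-item, one-period toy instance; the paper compiles a full BOM-aware MILP, and every number below is invented. Ordering a positive quantity below the MOQ violates the contract, and unmet demand is covered at a higher emergency price.

```python
def plan_cost(qty, demand, moq, unit_price, emergency_price):
    """Cost of ordering qty units: qty must be 0 or >= MOQ; any shortfall
    versus demand is bought at the emergency price."""
    if 0 < qty < moq:
        return float("inf")  # contract violation: below minimum order quantity
    shortfall = max(0, demand - qty)
    return qty * unit_price + shortfall * emergency_price

def best_order(demand, moq, unit_price, emergency_price, max_qty=50):
    """Exhaustively pick the cheapest feasible order quantity."""
    return min(range(max_qty + 1),
               key=lambda q: plan_cost(q, demand, moq, unit_price, emergency_price))

# Hypothetical contract terms: MOQ 10, unit price 3, emergency price 8.
q = best_order(demand=7, moq=10, unit_price=3, emergency_price=8)
print(q, plan_cost(q, 7, 10, 3, 8))
```

Note the non-obvious optimum: even though demand is 7, ordering the MOQ of 10 (cost 30) beats ordering nothing and buying 7 emergency units (cost 56), the kind of MOQ-uplift effect an extraction-only pipeline that drops the clause would miss.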
ef3eb5fa4e769b2985cfd53a4233e351054468bf9befe72f2ed44ecaf654d464
2026-01-13T00:00:00-05:00
What Users Leave Unsaid: Under-Specified Queries Limit Vision-Language Models
arXiv:2601.06165v1 Announce Type: new Abstract: Current vision-language benchmarks predominantly feature well-structured questions with clear, explicit prompts. However, real user queries are often informal and underspecified. Users naturally leave much unsaid, relying on images to convey context. We introduce HAERAE-Vision, a benchmark of 653 real-world visual questions from Korean online communities (0.76% survival from 86K candidates), each paired with an explicit rewrite, yielding 1,306 query variants in total. Evaluating 39 VLMs, we find that even state-of-the-art models (GPT-5, Gemini 2.5 Pro) achieve under 50% on the original queries. Crucially, query explicitation alone yields 8- to 22-point improvements, with smaller models benefiting most. We further show that even with web search, under-specified queries underperform explicit queries without search, revealing that current retrieval cannot compensate for what users leave unsaid. Our findings demonstrate that a substantial portion of VLM difficulty stems from natural query under-specification instead of model capability, highlighting a critical gap between benchmark evaluation and real-world deployment.
https://arxiv.org/abs/2601.06165
Academic Papers
svg
4f8abb4541111bf22fa2954422fc087827e89a08ed1e5520d969d0b81c0f6d8b
2026-01-13T00:00:00-05:00
B-FIRE: Binning-Free Diffusion Implicit Neural Representation for Hyper-Accelerated Motion-Resolved MRI
arXiv:2601.06166v1 Announce Type: new Abstract: Accelerated dynamic volumetric magnetic resonance imaging (4DMRI) is essential for applications relying on motion resolution. Existing 4DMRI methods produce averaged breathing phases with acceptable artifact levels, but averaging can blur and misrepresent instantaneous dynamic information. Recovery of such information requires a new paradigm to reconstruct extremely undersampled non-Cartesian k-space data. We propose B-FIRE, a binning-free diffusion implicit neural representation framework for hyper-accelerated MR reconstruction capable of reflecting instantaneous 3D abdominal anatomy. B-FIRE employs a CNN-INR encoder-decoder backbone optimized using diffusion with a comprehensive loss that enforces image-domain fidelity and frequency-aware constraints. Motion-binned image pairs were used as training references, while inference was performed on binning-free undersampled data. Experiments were conducted on a T1-weighted StarVIBE liver MRI cohort, with accelerations ranging from 8 spokes per frame (RV8) to RV1. B-FIRE was compared against direct NuFFT, GRASP-CS, and an unrolled CNN method. Reconstruction fidelity, motion trajectory consistency, and inference latency were evaluated.
https://arxiv.org/abs/2601.06166
Academic Papers
svg
ab74bc4f09bfaadce744e4bcdfac6fc4c44e0efff6cbf0e581b676c42850529e
2026-01-13T00:00:00-05:00
Parent-Guided Adaptive Reliability (PGAR): A Behavioural Meta-Learning Framework for Stable and Trustworthy AI
arXiv:2601.06167v1 Announce Type: new Abstract: Parent-Guided Adaptive Reliability (PGAR) is a lightweight behavioural meta-learning framework that adds a supervisory "parent" layer on top of a standard learner to improve stability, calibration, and recovery under disturbances. PGAR computes three reflex-level signals (incident detection, overconfidence correction, and recovery memory) and fuses them into a bounded reliability index in [0,1]. This index continuously modulates the learner's effective learning rate, reducing update magnitude during instability and restoring it as reliability improves. We provide a Lyapunov-based proof sketch establishing bounded adaptation of the reliability dynamics under mild assumptions (smooth loss, descent direction, and bounded reflex outputs). Empirical evaluations on representative learning tasks show improved calibration, reduced loss variance, and faster recovery compared to standard optimizers, while retaining computational simplicity. PGAR functions as a plug-in reliability layer for existing optimization and learning pipelines, supporting interpretable reliability traces in safety-relevant settings.
https://arxiv.org/abs/2601.06167
Academic Papers
svg
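The reliability-index mechanism above is simple enough to sketch directly: fuse three reflex signals into a bounded index in [0,1] and use it to scale the learner's effective learning rate. The fusion weights, signal conventions, and numbers below are invented stand-ins, not PGAR's actual formulation.

```python
def reliability_index(incident, overconfidence, recovery, w=(0.5, 0.3, 0.2)):
    """Fuse three reflex signals, each in [0,1], into a bounded reliability
    index: incident and overconfidence count against reliability, recovery
    counts for it. Weights w are hypothetical."""
    raw = w[0] * (1 - incident) + w[1] * (1 - overconfidence) + w[2] * recovery
    return max(0.0, min(1.0, raw))  # clamp to [0,1]

def effective_lr(base_lr, r):
    """Shrink the update magnitude when reliability is low."""
    return base_lr * r

r_stable = reliability_index(incident=0.0, overconfidence=0.1, recovery=1.0)
r_unstable = reliability_index(incident=0.9, overconfidence=0.8, recovery=0.2)
print(round(effective_lr(0.1, r_stable), 4), round(effective_lr(0.1, r_unstable), 4))
```

Because the index is clamped and multiplies the base rate, the effective learning rate stays in [0, base_lr], the boundedness that the abstract's Lyapunov sketch relies on.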
3c3e00493714ef4c3e1cb8db5f7f40061ab548ce132728b446d5257134decb67
2026-01-13T00:00:00-05:00
Analyzing the Structure of Handwritten Digits: A Comparative Study of PCA, Factor Analysis, and UMAP
arXiv:2601.06168v1 Announce Type: new Abstract: Handwritten digit images lie in a high-dimensional pixel space but exhibit strong geometric and statistical structure. This paper investigates the latent organization of handwritten digits in the MNIST dataset using three complementary dimensionality reduction techniques: Principal Component Analysis (PCA), Factor Analysis (FA), and Uniform Manifold Approximation and Projection (UMAP). Rather than focusing on classification accuracy, we study how each method characterizes intrinsic dimensionality, shared variation, and nonlinear geometry. PCA reveals dominant global variance directions and enables high-fidelity reconstructions using a small number of components. FA decomposes digits into interpretable latent handwriting primitives corresponding to strokes, loops, and symmetry. UMAP uncovers nonlinear manifolds that reflect smooth stylistic transitions between digit classes. Together, these results demonstrate that handwritten digits occupy a structured low-dimensional manifold and that different statistical frameworks expose complementary aspects of this structure.
https://arxiv.org/abs/2601.06168
Academic Papers
svg
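PCA's reconstruction claim in the abstract above can be checked on a 2-D toy analogue (MNIST itself needs real pixel data). The closed-form eigenvector of the 2x2 covariance gives the principal axis, and each point is reconstructed from its 1-D projection; the synthetic near-collinear data stands in for digits on a low-dimensional manifold.

```python
import math

def pca_2d(points):
    """Principal axis of 2-D data via the closed-form top eigenvector
    of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))  # top eigenvalue
    v = (cxy, lam - cxx) if abs(cxy) > 1e-12 else (1.0, 0.0)
    nv = math.hypot(*v)
    return (mx, my), (v[0] / nv, v[1] / nv)

def reconstruct(p, mean, axis):
    """Project p onto the 1-D principal subspace, then lift back to 2-D."""
    t = (p[0] - mean[0]) * axis[0] + (p[1] - mean[1]) * axis[1]
    return (mean[0] + t * axis[0], mean[1] + t * axis[1])

# Synthetic nearly-1-D data: points along y = 2x with small alternating noise.
pts = [(i, 2 * i + 0.1 * (-1) ** i) for i in range(10)]
mean, axis = pca_2d(pts)
err = sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
          for p, q in ((p, reconstruct(p, mean, axis)) for p in pts))
print(err)
```

The tiny residual shows one component suffices when data truly lie near a 1-D manifold, the same argument the paper makes for digits with a small number of principal components.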
cf21006106f695ad4a18bf8f9d14cc10994fd0531542ea0a9a20e1a93048162c
2026-01-13T00:00:00-05:00
Think Bright, Diffuse Nice: Enhancing T2I-ICL via Inductive-Bias Hint Instruction and Query Contrastive Decoding
arXiv:2601.06169v1 Announce Type: new Abstract: Text-to-Image In-Context Learning (T2I-ICL) enables customized image synthesis via interleaved text-image examples but faces two mutually reinforcing bottlenecks, compliance failure and prior-dominated hallucination, that form a vicious cycle degrading generation quality. Existing methods rely on tailored training, which limits flexibility and raises deployment costs. To address these challenges effectively, we propose TBDN, a training-free framework integrating two complementary closed-loop mechanisms: Hint Instruction (HI) and Query Contrastive Decoding (QCD). HI injects task-aware inductive bias via lightweight prompt engineering to anchor models on contextual mapping rules, thereby mitigating compliance failure. QCD adjusts the decoding distributions of language models by contrasting full-input and query-omitted distributions, suppressing prior-dominated hallucination. TBDN achieves State-of-the-Art performance on CoBSAT and Text-to-Image Fast Mini-ImageNet, with robust generalization across model backbones, prompt designs, and hyperparameters. It also maintains promising performance in concept preservation and prompt following on Dreambench++. By breaking the two bottlenecks, TBDN establishes a simple yet effective framework for efficient and reliable T2I-ICL.
https://arxiv.org/abs/2601.06169
Academic Papers
svg
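The Query Contrastive Decoding idea above can be sketched with toy logits. A standard contrastive-decoding form is assumed here: amplify the full-input distribution against the query-omitted one, so tokens favored only by the model's prior are suppressed. The vocabulary, logits, and the mixing weight `alpha` are fabricated, and this is an illustration of the contrast, not TBDN's exact rule.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def qcd(full_logits, omitted_logits, alpha=1.0):
    """Contrast full-input logits against query-omitted logits: keep what the
    query contributes, subtract what the prior alone would generate."""
    adjusted = [(1 + alpha) * f - alpha * o
                for f, o in zip(full_logits, omitted_logits)]
    return softmax(adjusted)

# Toy 3-token vocabulary: the prior strongly favors token 0,
# but the query's evidence actually supports token 2.
full = [2.0, 0.0, 1.8]
omitted = [2.5, 0.0, 0.2]   # prior alone: token 0 dominates
probs = qcd(full, omitted)
best = max(range(3), key=lambda i: probs[i])
print(best, [round(p, 3) for p in probs])
```

Without the contrast, greedy decoding on `full` would pick token 0 (the prior-dominated choice); subtracting the query-omitted logits flips the argmax to the query-supported token.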
984e1ec4fa9db72b176c53f05ac5dff92f7ae2889ccdddce570b0dbf77790c2c
2026-01-13T00:00:00-05:00
From Individual Prompts to Collective Intelligence: Mainstreaming Generative AI in the Classroom
arXiv:2601.06171v1 Announce Type: new Abstract: Engineering classrooms are increasingly experimenting with generative AI (GenAI), but most uses remain confined to individual prompting and isolated assistance. This narrow framing risks reinforcing equity gaps and only rewarding the already privileged or motivated students. We argue instead for a shift toward collective intelligence (CI)-focused pedagogy, where GenAI acts as a catalyst for peer-to-peer learning. We implemented Generative CI (GCI) activities in two undergraduate engineering courses, engaging 140 students through thinking routines -- short, repeatable scaffolds developed by Harvard Project Zero to make thinking visible and support collaborative sense-making. Using routines such as Question Sorts and Peel the Fruit, combined with strategic AI consultation, we enabled students to externalize their reasoning, compare interpretations, and iteratively refine ideas. Our dual-pronged approach synthesizes literature from learning sciences, CI, embodied cognition, and philosophy of technology, while also empirically learning through student surveys and engagement observations. Results demonstrate that students value the combination of human collaboration with strategic AI support, recognizing risks of over-reliance while appreciating AI's role in expanding perspectives. Students identified that group work fosters deeper understanding and creative problem-solving than AI alone, with the timing of AI consultation significantly affecting learning outcomes. We offer practical implementation pathways for mainstreaming CI-focused pedagogy that cultivates deeper engagement, resilient problem-solving, and shared ownership of knowledge.
https://arxiv.org/abs/2601.06171
Academic Papers
svg
c22499021a62843c4d697eea21347f6831c78e474d66837be83f470a16f6d7a9
2026-01-13T00:00:00-05:00
The Psychology of Learning from Machines: Anthropomorphic AI and the Paradox of Automation in Education
arXiv:2601.06172v1 Announce Type: new Abstract: As AI tutors enter classrooms at unprecedented speed, their deployment increasingly outpaces our grasp of the psychological and social consequences of such technology. Yet decades of research in automation psychology, human factors, and human-computer interaction provide crucial insights that remain underutilized in educational AI design. This work synthesizes four research traditions -- automation psychology, human factors engineering, HCI, and philosophy of technology -- to establish a comprehensive framework for understanding how learners psychologically relate to anthropomorphic AI tutors. We identify three persistent challenges intensified by Generative AI's conversational fluency. First, learners exhibit dual trust calibration failures -- automation bias (uncritical acceptance) and algorithm aversion (excessive rejection after errors) -- with an expertise paradox where novices overrely while experts underrely. Second, while anthropomorphic design enhances engagement, it can distract from learning and foster harmful emotional attachment. Third, automation ironies persist: systems meant to aid cognition introduce designer errors, degrade skills through disuse, and create monitoring burdens humans perform poorly. We ground this theoretical synthesis through comparative analysis of over 104,984 YouTube comments across AI-generated philosophical debates and human-created engineering tutorials, revealing domain-dependent trust patterns and strong anthropomorphic projection despite minimal cues. For engineering education, our synthesis mandates differentiated approaches: AI tutoring for technical foundations where automation bias is manageable through proper scaffolding, but human facilitation for design, ethics, and professional judgment where tacit knowledge transmission proves irreplaceable.
https://arxiv.org/abs/2601.06172
Academic Papers
svg
6f5afe587dc4fad0376a0db8c18f16c137bb458bceba76d91d8ac58cbdd9ad94
2026-01-13T00:00:00-05:00
The environmental impact of ICT in the era of data and artificial intelligence
arXiv:2601.06174v1 Announce Type: new Abstract: The technology industry promotes artificial intelligence (AI) as a key enabler to solve a vast number of problems, including the environmental crisis. However, when looking at the emissions of datacenters from worldwide service providers, we observe a rapid increase aligned with the advent of AI. Some actors justify it by claiming that the increase of emissions for digital infrastructures is acceptable as it could help the decarbonization of other sectors, e.g., videoconference tools instead of taking the plane for a meeting abroad, or using AI to optimize and reduce energy consumption. With such conflicting claims and ambitions, it is unclear how the net environmental impact of AI could be quantified. The answer is prone to uncertainty for different reasons, among others: lack of transparency, interference with market expectations, lack of standardized methodology for quantifying direct and indirect impact, and the quick evolutions of models and their requirements. This report provides answers and clarifications to these different elements. Firstly, we consider the direct environmental impact of AI from a top-down approach, starting from general information and communication technologies (ICT) and then zooming in on data centers and the different phases of AI development and deployment. Secondly, a framework is introduced on how to assess both the direct and indirect impact of AI. Finally, we finish with good practices and what we can do to reduce AI impact.
https://arxiv.org/abs/2601.06174
Academic Papers
svg
f248dd0fe075c6f6e3c3b772210bd2905ee5c511b9a9b8fef12f45f47ddb25f5
2026-01-13T00:00:00-05:00
A Mixed Methods Systematic Analysis of Issues and Factors Influencing Organizational Cloud Computing Adoption and Usage in the Public Sector: Initial Findings
arXiv:2601.06175v1 Announce Type: new Abstract: Cloud computing has been shown to be an essential enabling technology for public sector organizations (PSOs) and offers numerous potential benefits, including reduced information technology infrastructure costs, increased innovation potential, and improved resource resilience and scalability. Despite governments' intensifying efforts to realize the benefits of this technology, cloud computing adoption and usage proves to be challenging, posing a variety of organizational and operational issues for PSOs. This systematic analysis constitutes the initial phase of a larger research effort that involves forthcoming case studies of specific public sector cloud stakeholders; it aims to identify and synthesize the available knowledge on organizational cloud computing adoption and utilization in the public sector to provide public sector decision makers and stakeholders with reliable, evidence-based, actionable insights that inform and improve public sector IT practice and policy.
https://arxiv.org/abs/2601.06175
Academic Papers
svg
c5915e101bbf889d00bad2f93863e2f3b7835cefe7ad614298963b04da3f398d
2026-01-13T00:00:00-05:00
TIR-Flow: Active Video Search and Reasoning with Frozen VLMs
arXiv:2601.06176v1 Announce Type: new Abstract: While Large Video-Language Models (Video-LLMs) have achieved remarkable progress in perception, their reasoning capabilities remain a bottleneck. Existing solutions typically resort to a heavy "data engineering" paradigm: synthesizing large-scale Chain-of-Thought (CoT) datasets followed by Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL). This pipeline primarily optimizes probability sampling efficiency and aligns output distributions, but fails to activate the intrinsic intelligence required for dynamic visual exploration. In this work, we propose TIR-Flow, a novel framework that shifts the paradigm from passive processing to active video searching and reasoning without additional data or parameter updates. Concretely, our framework operates through three synergistic modules: HDD decomposes complex queries into a set of verifiable sub-tasks; HAP actively directs visual attention to gather high-resolution evidence for hypothesis validation; EBA maintains a persistent workspace to accumulate and update the discovered clues for logical reasoning. Extensive experiments on seven benchmarks demonstrate that TIR-Flow significantly outperforms recent strong baselines, delivering an average performance boost of 5.9%, with gains reaching 10.5% on Egoschema. Our analysis confirms that empowering frozen VLMs with System-2-like active perception is a scalable path toward solving long-horizon video reasoning.
https://arxiv.org/abs/2601.06176
Academic Papers
svg
e68ba551bb74fbf75a8536dc3e05c5bf70cf2ab019531cc0d6bf37b58b81c453
2026-01-13T00:00:00-05:00
AutoVulnPHP: LLM-Powered Two-Stage PHP Vulnerability Detection and Automated Localization
arXiv:2601.06177v1 Announce Type: new Abstract: PHP's dominance in web development is undermined by security challenges: static analysis lacks semantic depth, causing high false positives; dynamic analysis is computationally expensive; and automated vulnerability localization suffers from coarse granularity and imprecise context. Additionally, the absence of large-scale PHP vulnerability datasets and fragmented toolchains hinder real-world deployment. We present AutoVulnPHP, an end-to-end framework coupling two-stage vulnerability detection with fine-grained automated localization. SIFT-VulMiner (Structural Inference for Flaw Triage Vulnerability Miner) generates vulnerability hypotheses using AST structures enhanced with data flow. SAFE-VulMiner (Semantic Analysis for Flaw Evaluation Vulnerability Miner) verifies candidates through pretrained code encoder embeddings, eliminating false positives. ISAL (Incremental Sequence Analysis for Localization) pinpoints root causes via syntax-guided tracing, chain-of-thought LLM inference, and causal consistency checks to ensure precision. We contribute PHPVD, the first large-scale PHP vulnerability dataset with 26,614 files (5.2M LOC) across seven vulnerability types. On public benchmarks and PHPVD, AutoVulnPHP achieves 99.7% detection accuracy, 99.5% F1 score, and 81.0% localization rate. Deployed on real-world repositories, it discovered 429 previously unknown vulnerabilities, of which 351 were assigned CVE identifiers, validating its practical effectiveness.
https://arxiv.org/abs/2601.06177
Academic Papers
svg
b1a3c899480c005ab2565e15f25ae01fdc2420bb866a0ed14396f8bfb4daba6e
2026-01-13T00:00:00-05:00
Performance of models for monitoring sustainable development goals from remote sensing: A three-level meta-regression
arXiv:2601.06178v1 Announce Type: new Abstract: Machine learning (ML) is a tool to exploit remote sensing data for the monitoring and implementation of the United Nations' Sustainable Development Goals (SDGs). In this paper, we report on a meta-analysis to evaluate the performance of ML applied to remote sensing data to monitor SDGs. Specifically, we aim to 1) estimate the average performance; 2) determine the degree of heterogeneity between and within studies; and 3) assess how study features influence model performance. Using PRISMA guidelines, a search was performed across multiple academic databases to identify potentially relevant studies. A random sample of 200 was screened by three reviewers, resulting in 86 trials within 20 studies with 14 study features. Overall accuracy was the most reported performance metric. It was analyzed using double arcsine transformation and a three-level random effects model. The average overall accuracy of the best model was 0.90 [0.86, 0.92]. There was considerable heterogeneity in model performance, 64% of which was between studies. The only significant feature was the prevalence of the majority class, which explained 61% of the between-study heterogeneity. None of the other thirteen features added value to the model. The most important contributions of this paper are the following two insights. 1) Overall accuracy is the most popular performance metric, yet arguably the least insightful. Its sensitivity to class imbalance makes it necessary to normalize it, which is far from common practice. 2) The field needs to standardize the reporting. Reporting of the confusion matrix for independent test sets is the most important ingredient for between-study comparisons of ML classifiers. These findings underscore the need for robust and comparable evaluation metrics in machine learning applications to ensure reliable and actionable insights for effective SDG monitoring and policy formulation.
https://arxiv.org/abs/2601.06178
Academic Papers
svg
af60b94edeec7ab9465614e3d89a82e8d1e3535d4282cc2e25096f63b06d2c45
2026-01-13T00:00:00-05:00
MixDPO: Modeling Preference Strength for Pluralistic Alignment
arXiv:2601.06180v1 Announce Type: new Abstract: Preference-based alignment objectives implicitly assume that all human preferences are expressed with equal strength. In practice, however, preference strength varies across individuals and contexts -- a phenomenon established in behavioral economics and discrete choice theory. This mismatch limits the ability of existing objectives to faithfully capture heterogeneous human judgments. Inspired by this literature, we introduce Mixed Logit Direct Preference Optimization (MixDPO), a generalization of Direct Preference Optimization that models variation in preference strength. MixDPO enables alignment objectives to capture heterogeneity in how strongly preferences are expressed across training examples. We evaluate MixDPO on three preference datasets using two open-weight language models. Across datasets, MixDPO improves aggregate alignment performance (+11.2 points on Pythia-2.8B) while preserving subgroup level preferences, with the largest gains appearing in settings with higher inferred preference heterogeneity. MixDPO makes preference heterogeneity explicit through learned strength distributions. We release our code for reproducibility.
https://arxiv.org/abs/2601.06180
Academic Papers
svg
f33416ba2e7efa10d04082a47ec7d59d4e27d63c73a9b33d1bd5b781dd199e80
2026-01-13T00:00:00-05:00
Neuro-Symbolic Compliance: Integrating LLMs and SMT Solvers for Automated Financial Legal Analysis
arXiv:2601.06181v1 Announce Type: new Abstract: Financial regulations are increasingly complex, hindering automated compliance, especially the maintenance of logical consistency with minimal human oversight. We introduce a Neuro-Symbolic Compliance Framework that integrates Large Language Models (LLMs) with Satisfiability Modulo Theories (SMT) solvers to enable formal verifiability and optimization-based compliance correction. The LLM interprets statutes and enforcement cases to generate SMT constraints, while the solver enforces consistency and computes the minimal factual modification required to restore legality when penalties arise. Unlike transparency-oriented methods, our approach emphasizes logic-driven optimization, delivering verifiable, legally consistent reasoning rather than post-hoc explanation. Evaluated on 87 enforcement cases from Taiwan's Financial Supervisory Commission (FSC), the system attains 86.2% correctness in SMT code generation, improves reasoning efficiency by over 100x, and consistently corrects violations, establishing a preliminary foundation for optimization-based compliance applications.
https://arxiv.org/abs/2601.06181
Academic Papers
svg
0dfbb63169b124d84ebcecb186b3551d0f05350ffa438981367eeac5c119d62f
2026-01-13T00:00:00-05:00
Geo-Standardizing 3D Modeling of Surface Objects and Related Logical Spaces on Celestial Bodies: Case Studies for Moon and Mars
arXiv:2601.06182v1 Announce Type: new Abstract: Establishing frameworks for promoting the realization of various activities on celestial bodies sustainably is of great significance for different contexts, such as preserving the scientific evidence and space heritage. Therefore, this research first proposes a conceptual model that covers the different types of features, attributes, and relationships between them to comprehensively delineate the surface objects and related logical spaces on celestial bodies. It then implements this conceptual model as a CityJSON extension in such a way that allows for creating the three-dimensional (3D) geodatasets that represent these objects and spaces in a standardized manner. Moreover, the usefulness of this study is demonstrated through creating CityJSON datasets that include 3D models of exemplary surface objects from Moon and Mars, such as a historical landing site and related logical spaces, such as exclusion zones for protecting this site. The results of the current study show that there is a strong potential for forming 3D geodatasets on celestial bodies that can provide a notable foundation for the technical implementation of international agreements and legal frameworks. This work also contributes to the design of planetary spatial data infrastructures (PSDI) by incorporating the third dimension.
https://arxiv.org/abs/2601.06182
Academic Papers
svg
d57e73922a51b3af37c5da786d318c3717b09ba63bcf5614a729b4bf9fc7357f
2026-01-13T00:00:00-05:00
Data-Driven Reduced-Complexity Modeling of Fluid Flows: A Community Challenge
arXiv:2601.06183v1 Announce Type: new Abstract: We introduce a community challenge designed to facilitate direct comparisons between data-driven methods for compression, forecasting, and sensing of complex aerospace flows. The challenge is organized into three tracks that target these complementary capabilities: compression (compact representations for large datasets), forecasting (predicting future flow states from a finite history), and sensing (inferring unmeasured flow states from limited measurements). Across these tracks, multiple challenges span diverse flow datasets and use cases, each emphasizing different model requirements. The challenge is open to anyone, and we invite broad participation to build a comprehensive and balanced picture of what works and where current methods fall short. To support fair comparisons, we provide standardized success metrics, evaluation tools, and baseline implementations, with one classical and one machine-learning baseline per challenge. Final assessments use blind tests on withheld data. We explicitly encourage negative results and careful analyses of limitations. Outcomes will be disseminated through an AIAA Journal Virtual Collection and invited presentations at AIAA conferences.
https://arxiv.org/abs/2601.06183
Academic Papers
svg
8bcea24923344cbd9ff53a8e382984980df5727b4d00c683452d1f3fbd898d8f
2026-01-13T00:00:00-05:00
Attention Mechanism and Heuristic Approach: Context-Aware File Ranking Using Multi-Head Self-Attention
arXiv:2601.06185v1 Announce Type: new Abstract: The identification and ranking of impacted files within software repositories is a key challenge in change impact analysis. Existing deterministic approaches that combine heuristic signals, semantic similarity measures, and graph-based centrality metrics have demonstrated effectiveness in narrowing candidate search spaces, yet their recall plateaus. This limitation stems from the treatment of features as linearly independent contributors, ignoring contextual dependencies and relationships between metrics that characterize expert reasoning patterns. To address this limitation, we propose the application of Multi-Head Self-Attention as a post-deterministic scoring refinement mechanism. Our approach learns contextual weighting between features, dynamically adjusting importance levels per file based on relational behavior exhibited across candidate file sets. The attention mechanism produces context-aware adjustments that are additively combined with deterministic scores, preserving interpretability while enabling reasoning similar to that performed by experts when reviewing change surfaces. We focus on recall rather than precision, as false negatives (missing impacted files) are far more costly than false positives (irrelevant files that can be quickly dismissed during review). Empirical evaluation on 200 test cases demonstrates that the introduction of self-attention improves Top-50 recall from approximately 62-65% to between 78-82% depending on repository complexity and structure, achieving 80% recall at Top-50 files. Expert validation yields improvement from 6.5/10 to 8.6/10 in subjective accuracy alignment. This transformation bridges the reasoning capability gap between deterministic automation and expert judgment, improving recall in repository-aware effort estimation.
https://arxiv.org/abs/2601.06185
Academic Papers
svg
00c1341aab5a876dff8912c028885ae080c020e7038c5e0f55a2a2b17f3f2b17
2026-01-13T00:00:00-05:00
Time-Series Anomaly Classification for Launch Vehicle Propulsion Systems: Fast Statistical Detectors Enhancing LSTM Accuracy and Data Quality
arXiv:2601.06186v1 Announce Type: new Abstract: Supporting Go/No-Go decisions prior to launch requires assessing real-time telemetry data against redline limits established during the design qualification phase. Family data from ground testing or previous flights is commonly used to detect initiating failure modes and their timing; however, this approach relies heavily on engineering judgment and is more error-prone for new launch vehicles. To address these limitations, we utilize Long Short-Term Memory (LSTM) networks for supervised classification of time-series anomalies. However, initial training labels derived from simulated anomaly data may be suboptimal due to variations in anomaly strength, anomaly settling times, and other factors. In this work, we propose a novel statistical detector based on the Mahalanobis distance and forward-backward detection fractions to adjust the supervised training labels. We demonstrate our method on digital twin simulations of a ground-stage propulsion system with 20.8 minutes of operation per trial and O(10^8) training timesteps. The statistical data relabeling improved precision and recall of the LSTM classifier by 7% and 22% respectively.
https://arxiv.org/abs/2601.06186
Academic Papers
svg
b53dfd7bf25b0616d2848b9dc37f2c295ada6972b2ee8786068cbb337ee8ada3
2026-01-13T00:00:00-05:00
A Unified Attention U-Net Framework for Cross-Modality Tumor Segmentation in MRI and CT
arXiv:2601.06187v1 Announce Type: new Abstract: This study presents a unified Attention U-Net architecture trained jointly on MRI (BraTS 2021) and CT (LIDC-IDRI) datasets to investigate the generalizability of a single model across diverse imaging modalities and anatomical sites. Our proposed pipeline incorporates modality-harmonized preprocessing, attention-gated skip connections, and a modality-aware Focal Tversky loss function. To the best of our knowledge, this study is among the first to evaluate a single Attention U-Net trained simultaneously on separate MRI (BraTS) and CT (LIDC-IDRI) tumor datasets, without relying on modality-specific encoders or domain adaptation. The unified model demonstrates competitive performance in terms of Dice coefficient, IoU, and AUC on both domains, thereby establishing a robust and reproducible baseline for future research in cross-modality tumor segmentation.
https://arxiv.org/abs/2601.06187
Academic Papers
svg
1f90bbfa06cb77e05d1e3d90c03554e28a692472b7e63ca3050624c6a3671215
2026-01-13T00:00:00-05:00
Large-Scale Continual Scheduling and Execution for Dynamic Distributed Satellite Constellation Observation Allocation
arXiv:2601.06188v1 Announce Type: new Abstract: The size and capabilities of Earth-observing satellite constellations are rapidly increasing. Leveraging distributed onboard control, we can enable novel time-sensitive measurements and responses. However, deploying autonomy to satellites requires efficient computation and communication. This work tackles the challenge of efficiently scheduling observations for hundreds of satellites in a dynamic, large-scale problem with millions of variables. We present the Dynamic Multi-Satellite Constellation Observation Scheduling Problem (DCOSP), a new formulation of Dynamic Distributed Constraint Optimization Problems (DDCOP) that models integrated scheduling and execution. DCOSP has a novel optimality condition for which we construct an omniscient offline algorithm for its computation. We also present the Dynamic Incremental Neighborhood Stochastic Search algorithm (D-NSS), an incomplete online decomposition-based DDCOP algorithm that repairs and solves sub-problems when problem dynamics occur. We show through simulation that D-NSS converges to near-optimal solutions and outperforms DDCOP baselines in terms of solution quality, computation time, and message volume. As part of the NASA FAME mission, DCOSP and D-NSS will be the foundation of the largest in-space demonstration of distributed multi-agent AI to date.
https://arxiv.org/abs/2601.06188
Academic Papers
svg
20860931f6b577d3a0f5f5b5d6c562d1452886e0e8fe8a13e0871d72a11b51cb
2026-01-13T00:00:00-05:00
Rational Synthesizers or Heuristic Followers? Analyzing LLMs in RAG-based Question-Answering
arXiv:2601.06189v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) is the prevailing paradigm for grounding Large Language Models (LLMs), yet the mechanisms governing how models integrate groups of conflicting retrieved evidence remain opaque. Does an LLM answer a certain way because the evidence is factually strong, because of a prior belief, or merely because it is repeated frequently? To answer this, we introduce GroupQA, a curated dataset of 1,635 controversial questions paired with 15,058 diversely-sourced evidence documents, annotated for stance and qualitative strength. Through controlled experiments, we characterize group-level evidence aggregation dynamics: paraphrasing an argument can be more persuasive than providing distinct independent support; models favor evidence presented first rather than last; and larger models are increasingly resistant to adapting to presented evidence. Additionally, we find that LLM explanations for group-based answers are unfaithful. Together, we show that LLMs behave consistently as vulnerable heuristic followers, with direct implications for improving RAG system design.
https://arxiv.org/abs/2601.06189
Academic Papers
svg
a48eb3013401b3d46fc96c5228f2bd5cfa240b906c225eb7f4f8b6729d07f385
2026-01-13T00:00:00-05:00
TimeGNN-Augmented Hybrid-Action MARL for Fine-Grained Task Partitioning and Energy-Aware Offloading in MEC
arXiv:2601.06191v1 Announce Type: new Abstract: With the rapid growth of IoT devices and latency-sensitive applications, the demand for both real-time and energy-efficient computing has surged, placing significant pressure on traditional cloud computing architectures. Mobile edge computing (MEC), an emerging paradigm, effectively alleviates the load on cloud centers and improves service quality by offloading computing tasks to edge servers closer to end users. However, the limited computing resources, non-continuous power provisioning (e.g., battery-powered nodes), and highly dynamic systems of edge servers complicate efficient task scheduling and resource allocation. To address these challenges, this paper proposes a multi-agent deep reinforcement learning algorithm, TG-DCMADDPG, and constructs a collaborative computing framework for multiple edge servers, aiming to achieve joint optimization of fine-grained task partitioning and offloading. This approach incorporates a temporal graph neural network (TimeGNN) to model and predict time series of multi-dimensional server state information, thereby reducing the frequency of online interactions and improving policy predictability. Furthermore, a multi-agent deterministic policy gradient algorithm (DC-MADDPG) in a discrete-continuous hybrid action space is introduced to collaboratively optimize task partitioning ratios, transmission power, and priority scheduling strategies. Extensive simulation experiments confirm that TG-DCMADDPG achieves markedly faster policy convergence, superior energy-latency optimization, and higher task completion rates compared with existing state-of-the-art methods, underscoring its robust scalability and practical effectiveness in dynamic and constrained MEC scenarios.
https://arxiv.org/abs/2601.06191
Academic Papers
svg
4c6f94f246f202328c56b87ad104fdba7d745b78d7dce0c615c72e20797057af
2026-01-13T00:00:00-05:00
MLB: A Scenario-Driven Benchmark for Evaluating Large Language Models in Clinical Applications
arXiv:2601.06193v1 Announce Type: new Abstract: The proliferation of Large Language Models (LLMs) presents transformative potential for healthcare, yet practical deployment is hindered by the absence of frameworks that assess real-world clinical utility. Existing benchmarks test static knowledge, failing to capture the dynamic, application-oriented capabilities required in clinical practice. To bridge this gap, we introduce the Medical LLM Benchmark (MLB), a comprehensive benchmark evaluating LLMs on both foundational knowledge and scenario-based reasoning. MLB is structured around five core dimensions: Medical Knowledge (MedKQA), Safety and Ethics (MedSE), Medical Record Understanding (MedRU), Smart Services (SmartServ), and Smart Healthcare (SmartCare). The benchmark integrates 22 datasets (17 newly curated) from diverse Chinese clinical sources, covering 64 clinical specialties. Its design features a rigorous curation pipeline involving 300 licensed physicians. Besides, we provide a scalable evaluation methodology, centered on a specialized judge model trained via Supervised Fine-Tuning (SFT) on expert annotations. Our comprehensive evaluation of 10 leading models reveals a critical translational gap: while the top-ranked model, Kimi-K2-Instruct (77.3% accuracy overall), excels in structured tasks like information extraction (87.8% accuracy in MedRU), performance plummets in patient-facing scenarios (61.3% in SmartServ). Moreover, the exceptional safety score (90.6% in MedSE) of the much smaller Baichuan-M2-32B highlights that targeted training is equally critical. Our specialized judge model, trained via SFT on a 19k expert-annotated medical dataset, achieves 92.1% accuracy, an F1-score of 94.37%, and a Cohen's Kappa of 81.3% for human-AI consistency, validating a reproducible and expert-aligned evaluation protocol. MLB thus provides a rigorous framework to guide the development of clinically viable LLMs.
https://arxiv.org/abs/2601.06193
Academic Papers
svg
40495125cd0dc3a9cdb54a53b95281a177e0b09a0be09204d95806b1378dc44d
2026-01-13T00:00:00-05:00
Political Alignment in Large Language Models: A Multidimensional Audit of Psychometric Identity and Behavioral Bias
arXiv:2601.06194v1 Announce Type: new Abstract: As large language models (LLMs) are increasingly integrated into social decision-making, understanding their political positioning and alignment behavior is critical for safety and fairness. This study presents a sociotechnical audit of 26 prominent LLMs, triangulating their positions across three psychometric inventories (Political Compass, SapplyValues, 8 Values) and evaluating their performance on a large-scale news labeling task ($N \approx 27{,}000$). Our results reveal a strong clustering of models in the Libertarian-Left region of the ideological space, encompassing 96.3% of the cohort. Alignment signals appear to be consistent architectural traits rather than stochastic noise ($\eta^2 > 0.90$); however, we identify substantial discrepancies in measurement validity. In particular, the Political Compass exhibits a strong negative correlation with cultural progressivism ($r=-0.64$) when compared against multi-axial instruments, suggesting a conflation of social conservatism with authoritarianism in this context. We further observe a significant divergence between open-weights and closed-source models, with the latter displaying markedly higher cultural progressivism scores ($p<10^{-25}$). In downstream media analysis, models exhibit a systematic "center-shift," frequently categorizing neutral articles as left-leaning, alongside an asymmetric detection capability in which "Far Left" content is identified with greater accuracy (19.2%) than "Far Right" content (2.0%). These findings suggest that single-axis evaluations are insufficient and that multidimensional auditing frameworks are necessary to characterize alignment behavior in deployed LLMs. Our code and data will be made public.
https://arxiv.org/abs/2601.06194
Academic Papers
svg
08b2680150933df9440d10f76d0e085bec240d512a879dae398c8026cef4d6bb
2026-01-13T00:00:00-05:00
EntroLnn: Entropy-Guided Liquid Neural Networks for Operando Refinement of Battery Capacity Fade Trajectories
arXiv:2601.06195v1 Announce Type: new Abstract: Battery capacity degradation prediction has long been a central topic in battery health analytics, and most studies focus on state of health (SoH) estimation and end of life (EoL) prediction. This study extends the scope to online refinement of the entire capacity fade trajectory (CFT) through EntroLnn, a framework based on entropy-guided transformable liquid neural networks (LNNs). EntroLnn treats CFT refinement as an integrated process rather than two independent tasks for pointwise SoH and EoL. We introduce entropy-based features derived from online temperature fields, applied for the first time in battery analytics, and combine them with customized LNNs that model temporal battery dynamics effectively. The framework enhances both static and dynamic adaptability of LNNs and achieves robust and generalizable CFT refinement across different batteries and operating conditions. The approach provides a high fidelity battery health model with lightweight computation, achieving mean absolute errors of only 0.004577 for CFT and 18 cycles for EoL prediction. This work establishes a foundation for entropy-informed learning in battery analytics and enables self-adaptive, lightweight, and interpretable battery health prediction in practical battery management systems.
https://arxiv.org/abs/2601.06195
Academic Papers
svg
4fea0a29dd105802fbfc0170baae49136b39f259100c088d1cf343b47786dd70
2026-01-13T00:00:00-05:00
Manifold-based Sampling for In-Context Hallucination Detection in Large Language Models
arXiv:2601.06196v1 Announce Type: new Abstract: Large language models (LLMs) frequently generate factually incorrect or unsupported content, commonly referred to as hallucinations. Prior work has explored decoding strategies, retrieval augmentation, and supervised fine-tuning for hallucination detection, while recent studies show that in-context learning (ICL) can substantially influence factual reliability. However, existing ICL demonstration selection methods often rely on surface-level similarity heuristics and exhibit limited robustness across tasks and models. We propose MB-ICL, a manifold-based sampling framework for selecting in-context demonstrations that leverages latent representations extracted from frozen LLMs. By jointly modeling local manifold structure and class-aware prototype geometry, MB-ICL selects demonstrations based on their proximity to learned prototypes rather than lexical or embedding similarity alone. Across factual verification (FEVER) and hallucination detection (HaluEval) benchmarks, MB-ICL outperforms standard ICL selection baselines in the majority of evaluated settings, with particularly strong gains on dialogue and summarization tasks. The method remains robust under temperature perturbations and model variation, indicating improved stability compared to heuristic retrieval strategies. While lexical retrieval can remain competitive in certain question-answering regimes, our results demonstrate that manifold-based prototype selection provides a reliable, training-light approach for hallucination detection without modifying LLM parameters, offering a principled direction for improved ICL demonstration selection.
https://arxiv.org/abs/2601.06196
Academic Papers
svg
0e8758b4ebf21fe245c9a51ec49a3c3ae74aeb2a97f0c56f4be2d99996d7d185
2026-01-13T00:00:00-05:00
AI Safeguards, Generative AI and the Pandora Box: AI Safety Measures to Protect Businesses and Personal Reputation
arXiv:2601.06197v1 Announce Type: new Abstract: Generative AI has unleashed the power of content generation, and it has also unwittingly opened the Pandora's box of realistic deepfakes, causing a number of social hazards and harm to businesses and personal reputation. This paper examines the ramifications of generative AI technology across industries and investigates hybrid detection techniques using neural networks that allow content to be flagged; reliable detection and flagging enable AI safety, which is the main focus of this paper. The research provides a significant method for efficiently detecting dark-side problems by imposing a Temporal Consistency Learning (TCL) technique. Through pretrained Temporal Convolutional Network (TCN) model training and performance comparison, this paper shows that TCN models outperform the other approaches and achieve significant accuracy on five dark-side problems. Findings highlight the importance of proactive identification measures to reduce the potential risks associated with generative artificial intelligence.
https://arxiv.org/abs/2601.06197
Academic Papers
svg
37237ed6bb6f26276411c37672be46196f8ef0f098a79772961f4139f7bab44d
2026-01-13T00:00:00-05:00
How Does India Cook Biryani?
arXiv:2601.06198v1 Announce Type: new Abstract: Biryani, one of India's most celebrated dishes, exhibits remarkable regional diversity in its preparation, ingredients, and presentation. With the growing availability of online cooking videos, there is unprecedented potential to study such culinary variations using computational tools systematically. However, existing video understanding methods fail to capture the fine-grained, multimodal, and culturally grounded differences in procedural cooking videos. This work presents the first large-scale, curated dataset of biryani preparation videos, comprising 120 high-quality YouTube recordings across 12 distinct regional styles. We propose a multi-stage framework leveraging recent advances in vision-language models (VLMs) to segment videos into fine-grained procedural units and align them with audio transcripts and canonical recipe text. Building on these aligned representations, we introduce a video comparison pipeline that automatically identifies and explains procedural differences between regional variants. We construct a comprehensive question-answer (QA) benchmark spanning multiple reasoning levels to evaluate procedural understanding in VLMs. Our approach employs multiple VLMs in complementary roles, incorporates human-in-the-loop verification for high-precision tasks, and benchmarks several state-of-the-art models under zero-shot and fine-tuned settings. The resulting dataset, comparison methodology, and QA benchmark provide a new testbed for evaluating VLMs on structured, multimodal reasoning tasks and open new directions for computational analysis of cultural heritage through cooking videos. We release all data, code, and the project website at https://farzanashaju.github.io/how-does-india-cook-biryani/.
https://arxiv.org/abs/2601.06198
Academic Papers
svg
68b2b6e01f0a9e25f23d36c27d058a79bff050870d7cd87cff86703358723438
2026-01-13T00:00:00-05:00
Leveraging Membership Inference Attacks for Privacy Measurement in Federated Learning for Remote Sensing Images
arXiv:2601.06200v1 Announce Type: new Abstract: Federated Learning (FL) enables collaborative model training while keeping training data localized, allowing us to preserve privacy in various domains including remote sensing. However, recent studies show that FL models may still leak sensitive information through their outputs, motivating the need for rigorous privacy evaluation. In this paper, we leverage membership inference attacks (MIA) as a quantitative privacy measurement framework for FL applied to remote sensing image classification. We evaluate multiple black-box MIA techniques, including entropy-based attacks, modified entropy attacks, and the likelihood ratio attack, across different FL algorithms and communication strategies. Experiments conducted on two public scene classification datasets demonstrate that MIA effectively reveals privacy leakage not captured by accuracy alone. Our results show that communication-efficient FL strategies reduce MIA success rates while maintaining competitive performance. These findings confirm MIA as a practical metric and highlight the importance of integrating privacy measurement into FL system design for remote sensing applications.
https://arxiv.org/abs/2601.06200
Academic Papers
svg
1a20a8513ac325a855677718b1e2f339c6bde92e047b7d0966238ecf2274e57f
2026-01-13T00:00:00-05:00
RiskBridge: Turning CVEs into Business-Aligned Patch Priorities
arXiv:2601.06201v1 Announce Type: new Abstract: Enterprises are confronted with an unprecedented escalation in cybersecurity vulnerabilities, with thousands of new CVEs disclosed each month. Conventional prioritization frameworks such as CVSS offer static severity metrics that fail to account for exploit probability, compliance urgency, and operational impact, resulting in inefficient and delayed remediation. This paper introduces RiskBridge, an explainable and compliance-aware vulnerability management framework that integrates multi-source intelligence from CVSS v4, EPSS, and CISA KEV to produce dynamic, business-aligned patch priorities. RiskBridge employs a probabilistic Zero-Day Exposure Simulation (ZDES) model to forecast near-term exploit likelihood, a Policy-as-Code Engine to translate regulatory mandates (e.g., PCI DSS, NIST SP 800-53) into automated SLA logic, and an ROI-driven Optimizer to maximize cumulative risk reduction per remediation effort. Experimental evaluations using live CVE datasets demonstrate an 88% reduction in residual risk, an 18-day improvement in SLA compliance, and a 35% increase in remediation efficiency compared to state-of-the-art commercial baselines. These findings validate RiskBridge as a practical and auditable decision-intelligence system that unifies probabilistic modeling, compliance reasoning, and optimization analytics. The framework represents a step toward automated, explainable, and business-centric vulnerability management in modern enterprise environments.
https://arxiv.org/abs/2601.06201
Academic Papers
svg
7c0f0a501d3ca0015b6640b27d796a82c3a2ae741a596baabb5dc396ae6a45e5
2026-01-13T00:00:00-05:00
QwenStyle: Content-Preserving Style Transfer with Qwen-Image-Edit
arXiv:2601.06202v1 Announce Type: new Abstract: Content-preserving style transfer, given content and style references, remains challenging for Diffusion Transformers (DiTs) due to their entangled internal content and style features. In this technical report, we propose the first content-preserving style transfer model trained on Qwen-Image-Edit, which activates Qwen-Image-Edit's strong content preservation and style customization capability. We collected and filtered high-quality data for a limited set of specific styles and synthesized triplets with thousands of categories of in-the-wild style images. We introduce the Curriculum Continual Learning framework to train QwenStyle with this mixture of clean and noisy triplets, which enables QwenStyle to generalize to unseen styles without degradation of its precise content preservation capability. Our QwenStyle V1 achieves state-of-the-art performance in three core metrics: style similarity, content consistency, and aesthetic quality.
https://arxiv.org/abs/2601.06202
Academic Papers
svg
455143d093647c2cd5ce0da0766ae7d46ea3b8f470313d2b47fc591bac1dbef9
2026-01-13T00:00:00-05:00
Cascading multi-agent anomaly detection in surveillance systems via vision-language models and embedding-based classification
arXiv:2601.06204v1 Announce Type: new Abstract: Intelligent anomaly detection in dynamic visual environments requires reconciling real-time performance with semantic interpretability. Conventional approaches address only fragments of this challenge. Reconstruction-based models capture low-level deviations without contextual reasoning, object detectors provide speed but limited semantics, and large vision-language systems deliver interpretability at prohibitive computational cost. This work introduces a cascading multi-agent framework that unifies these complementary paradigms into a coherent and interpretable architecture. Early modules perform reconstruction-gated filtering and object-level assessment, while higher-level reasoning agents are selectively invoked to interpret semantically ambiguous events. The system employs adaptive escalation thresholds and a publish-subscribe communication backbone, enabling asynchronous coordination and scalable deployment across heterogeneous hardware. Extensive evaluation on large-scale monitoring data demonstrates that the proposed cascade achieves a threefold reduction in latency compared to direct vision-language inference, while maintaining high perceptual fidelity (PSNR = 38.3 dB, SSIM = 0.965) and consistent semantic labeling. The framework advances beyond conventional detection pipelines by combining early-exit efficiency, adaptive multi-agent reasoning, and explainable anomaly attribution, establishing a reproducible and energy-efficient foundation for scalable intelligent visual monitoring.
https://arxiv.org/abs/2601.06204
Academic Papers
svg
3f0d750e8b4b077379f77b35566c3430a6363b066354ae642d7bf5a3135aa42f
2026-01-13T00:00:00-05:00
Towards Public Administration Research Based on Interpretable Machine Learning
arXiv:2601.06205v1 Announce Type: new Abstract: Causal relationships play a pivotal role in research within the field of public administration. Ensuring reliable causal inference requires validating the predictability of these relationships, which is a crucial precondition. However, prediction has not garnered adequate attention within the realm of quantitative research in public administration and the broader social sciences. The advent of interpretable machine learning presents a significant opportunity to integrate prediction into quantitative research conducted in public administration. This article delves into the fundamental principles of interpretable machine learning while also examining its current applications in social science research. Building upon this foundation, the article further expounds upon the implementation process of interpretable machine learning, encompassing key aspects such as dataset construction, model training, model evaluation, and model interpretation. Lastly, the article explores the disciplinary value of interpretable machine learning within the field of public administration, highlighting its potential to enhance the generalization of inference, facilitate the selection of optimal explanations for phenomena, stimulate the construction of theoretical hypotheses, and provide a platform for the translation of knowledge. As a complement to traditional causal inference methods, interpretable machine learning ushers in a new era of credibility in quantitative research within the realm of public administration.
https://arxiv.org/abs/2601.06205
Academic Papers
svg
c4b43221be9ddde20f336f5651612df583dafcc7dd4543aa984798f09121e1ac
2026-01-13T00:00:00-05:00
When Imbalance Comes Twice: Active Learning under Simulated Class Imbalance and Label Shift in Binary Semantic Segmentation
arXiv:2601.06209v1 Announce Type: new Abstract: The aim of Active Learning is to select the most informative samples from an unlabelled set of data. This is useful in cases where the amount of data is large and labelling is expensive, such as in machine vision or medical imaging. Two particularities of machine vision are, first, that most of the images produced are free of defects, and second, that the volume of images produced is so large that we cannot store all acquired images. This results, on the one hand, in a strong class imbalance in the defect distribution and, on the other hand, in a potential label shift caused by limited storage. To understand how these two forms of imbalance affect active learning algorithms, we propose a simulation study based on two open-source datasets. We artificially create datasets for which we control the levels of class imbalance and label shift. Three standard active learning selection strategies are compared: random sampling, entropy-based selection, and core-set selection. We demonstrate that active learning strategies, and in particular the entropy-based and core-set selections, remain interesting and efficient even for highly imbalanced datasets. We also illustrate and measure the loss of efficiency that occurs in the situation of a strong label shift.
https://arxiv.org/abs/2601.06209
Academic Papers
svg
fce314f595e74aff9aaa4d09a63f7169f51b2e53d82e8c0a3355284721c341ff
2026-01-13T00:00:00-05:00
Large Multimodal Model-Aided Scheduling for 6G Autonomous Communications
arXiv:2601.06211v1 Announce Type: new Abstract: Recently, large language models (LLMs) have gained significant attention for their ability to generate fast and accurate answers to a given query. These models have evolved into large multimodal models (LMMs), which can interpret and analyze multimodal inputs such as images and text. With the exponential growth of AI functionalities in autonomous devices, the central unit (CU), a digital processing unit performing AI inference, needs to handle LMMs to effectively control these devices. To ensure seamless command delivery to devices, the CU must perform scheduling, which involves resource block (RB) allocation for data transmission and modulation and coding scheme (MCS) index selection based on the channel conditions. This task is challenging in many practical 6G environments, where even small user movements can cause abrupt channel changes. In this paper, we propose a novel LMM-based scheduling technique to address this challenge. Our key idea is to leverage an LMM to predict future channel parameters (e.g., distance, angles, and path gain) by analyzing visual sensing information as well as pilot signals. By exploiting LMMs to predict the presence of a reliable path and the geometric information of users from the visual sensing information, and then combining these with past channel states from pilot signals, we can accurately predict future channel parameters. Using these predictions, we can preemptively make channel-aware scheduling decisions. Numerical evaluations show that the proposed technique achieves more than a 30% throughput gain over conventional scheduling techniques.
https://arxiv.org/abs/2601.06211
Academic Papers
svg
7506d70ebf7b13d64cad85434ea002a6d46a970a8337ef8ca028cdafabeedd89
2026-01-13T00:00:00-05:00
Akasha 2: Hamiltonian State Space Duality and Visual-Language Joint Embedding Predictive Architecture
arXiv:2601.06212v1 Announce Type: new Abstract: We present Akasha 2, a state-of-the-art multimodal architecture that integrates Hamiltonian State Space Duality (H-SSD) with Visual-Language Joint Embedding Predictive Architecture (VL-JEPA). The system leverages the Mamba-3 Selective State Space Model (SSM) augmented by a Sparse Mixture of Hamiltonian Experts (SMoE-HE) that enforces latent physical conservation laws through symplectic integration. For visual synthesis, we introduce Hamiltonian Flow Matching (HFM) and persistent 3D Gaussian Splatting (3DGS), enabling ultra-low latency (<50ms) on mobile hardware. This work establishes a new paradigm in latent world models, achieving unprecedented spatiotemporal coherence through a holographic memory architecture. Our approach demonstrates that incorporating physics-inspired inductive biases into neural architectures yields significant improvements: state-of-the-art video prediction (FVD: 287), 4x faster visual synthesis than diffusion models, and 3-18x inference speedup over transformer baselines while maintaining energy conservation over extended horizons.
https://arxiv.org/abs/2601.06212
Academic Papers
svg
533aac873ccc0b3b565b558fa793a3295d72eed83f52ce8ed9ad0d963d403648
2026-01-13T00:00:00-05:00
Cyber Threat Detection and Vulnerability Assessment System using Generative AI and Large Language Model
arXiv:2601.06213v1 Announce Type: new Abstract: Background: Cyber-attacks have evolved rapidly in recent years, and many individuals and business owners have been affected by them in various ways. Cyber-attacks include threats such as ransomware, malware, phishing, and Denial of Service (DoS)-related attacks. Challenges: Traditional models such as generative Artificial Intelligence (AI) and Security Bidirectional Encoder Representations from Transformers (BERT) have been implemented to detect cyber threats. However, the existing Security BERT model has a limited contextual understanding of text data, which reduces its effectiveness in detecting cyber-attacks. Proposed Methodology: To overcome these challenges, a Robustly Optimized Bidirectional Encoder Representations from Transformers Pretraining Approach (RoBERTa) model with a broader vocabulary understanding is proposed. Initially, data are extracted from a Packet Capture (PCAP) file and encrypted using Fully Homomorphic Encryption (FHE). Subsequently, a Byte-level Byte Pair Encoding (BBPE) tokenizer is used to generate tokens and maintain the vocabulary for the encrypted values. These values are then fed to the RoBERTa transformer model for extensive training. Finally, Softmax is used for the detection and classification of attacks. The proposed RoBERTa model achieved better results than the existing BERT model in terms of accuracy (0.99), recall (0.91), and precision (0.89).
https://arxiv.org/abs/2601.06213
Academic Papers
svg
10f9c6c44cbfdad5407bfa37eac7e3762115b88c7d28a2e53195d4a1a98ac275
2026-01-13T00:00:00-05:00
Dynamics-inspired Structure Hallucination for Protein-protein Interaction Modeling
arXiv:2601.06214v1 Announce Type: new Abstract: Protein-protein interaction (PPI) represents a central challenge within the biology field, and accurately predicting the consequences of mutations in this context is crucial for drug design and protein engineering. Deep learning (DL) has shown promise in forecasting the effects of such mutations, but is hindered by two primary constraints. First, the structures of mutant proteins are often elusive to acquire. Second, PPI takes place dynamically, which is rarely integrated into the DL architecture design. To address these obstacles, we present a novel framework named Refine-PPI with two key enhancements. First, we introduce a structure refinement module trained by a mask mutation modeling (MMM) task on available wild-type structures, which is then transferred to produce the inaccessible mutant structures. Second, we employ a new kind of geometric network, called the probability density cloud network (PDC-Net), to capture 3D dynamic variations and encode the atomic uncertainty associated with PPI. Comprehensive experiments on SKEMPI.v2 substantiate the superiority of Refine-PPI over all existing tools for predicting free energy change. These findings underscore the effectiveness of our hallucination strategy and the PDC module in addressing the absence of mutant protein structure and modeling geometric uncertainty.
https://arxiv.org/abs/2601.06214
Academic Papers
svg
04441a8596a039d90b6405da00d1697f5bafe4005574301f79b1f498d3980368
2026-01-13T00:00:00-05:00
LLM Agents in Law: Taxonomy, Applications, and Challenges
arXiv:2601.06216v1 Announce Type: new Abstract: Large language models (LLMs) have precipitated a dramatic improvement in the legal domain, yet the deployment of standalone models faces significant limitations regarding hallucination, outdated information, and verifiability. Recently, LLM agents have attracted significant attention as a solution to these challenges, utilizing advanced capabilities such as planning, memory, and tool usage to meet the rigorous standards of legal practice. In this paper, we present a comprehensive survey of LLM agents for legal tasks, analyzing how these architectures bridge the gap between technical capabilities and domain-specific needs. Our major contributions include: (1) systematically analyzing the technical transition from standard legal LLMs to legal agents; (2) presenting a structured taxonomy of current agent applications across distinct legal practice areas; (3) discussing evaluation methodologies specifically for agentic performance in law; and (4) identifying open challenges and outlining future directions for developing robust and autonomous legal assistants.
https://arxiv.org/abs/2601.06216
Academic Papers
svg
e1af818baec89f4d822a482559410e76a90375f317109ba8972a33a0be33bc18
2026-01-13T00:00:00-05:00
CEEMDAN-Based Multiscale CNN for Wind Turbine Gearbox Fault Detection
arXiv:2601.06217v1 Announce Type: new Abstract: Wind turbines play a critical role in the shift toward sustainable energy generation. Their operation relies on multiple interconnected components, and a failure in any of these can compromise the entire system's functionality. Detecting faults accurately is challenging due to the intricate, non-linear, and non-stationary nature of vibration signals, influenced by dynamic loading, environmental variations, and mechanical interactions. As such, effective signal processing techniques are essential for extracting meaningful features to enhance diagnostic accuracy. This study presents a hybrid approach for fault detection in wind turbine gearboxes, combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and a Multiscale Convolutional Neural Network (MSCNN). CEEMDAN is employed to decompose vibration signals into intrinsic mode functions, isolating critical features at different time-frequency scales. These are then input into the MSCNN, which performs deep hierarchical feature extraction and classification. The proposed method achieves an F1 Score of 98.95%, evaluated on real-world datasets, and demonstrates superior performance in both detection accuracy and computational speed compared to existing approaches. This framework offers a balanced solution for reliable and efficient fault diagnosis in wind turbine systems.
https://arxiv.org/abs/2601.06217
Academic Papers
svg
8c02895e98e05c0603153abf855a3f51dd2a5ee27aa25003ee76b8b0bfb2c452
2026-01-13T00:00:00-05:00
Two-step Authentication: Multi-biometric System Using Voice and Facial Recognition
arXiv:2601.06218v1 Announce Type: new Abstract: We present a cost-effective two-step authentication system that integrates face identification and speaker verification using only a camera and microphone available on common devices. The pipeline first performs face recognition to identify a candidate user from a small enrolled group, then performs voice recognition only against the matched identity to reduce computation and improve robustness. For face recognition, a pruned VGG-16 based classifier is trained on an augmented dataset of 924 images from five subjects, with faces localized by MTCNN; it achieves 95.1% accuracy. For voice recognition, a CNN speaker-verification model trained on LibriSpeech (train-other-360) attains 98.9% accuracy and 3.456% EER on test-clean. Source code and trained models are available at https://github.com/NCUE-EE-AIAL/Two-step-Authentication-Multi-biometric-System.
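The control flow of the two-step pipeline can be sketched as below. This is a hedged illustration of the cascade, not the released code: the score dictionaries and thresholds stand in for the VGG-16 face classifier and CNN speaker-verification outputs.

```python
# Sketch of two-step authentication: face identification first narrows the
# enrolled set to one candidate, then speaker verification runs only against
# that single identity. Scores and thresholds are illustrative assumptions.

def identify_face(face_scores: dict, threshold: float = 0.8):
    """Return the best-matching enrolled identity, if confident enough."""
    best = max(face_scores, key=face_scores.get)
    return best if face_scores[best] >= threshold else None

def verify_voice(candidate: str, voice_score: float, threshold: float = 0.9) -> bool:
    """Verify the voice only against the face-matched identity."""
    return voice_score >= threshold

def authenticate(face_scores: dict, voice_score: float):
    candidate = identify_face(face_scores)
    if candidate is None:
        return None  # no confident face match: reject before voice step
    return candidate if verify_voice(candidate, voice_score) else None

scores = {"alice": 0.95, "bob": 0.40}
print(authenticate(scores, voice_score=0.97))  # both steps pass
print(authenticate(scores, voice_score=0.50))  # voice check fails
```

Running verification against one identity instead of the whole enrolled group is what gives the claimed reduction in computation.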
https://arxiv.org/abs/2601.06218
Academic Papers
svg
b075ea51f8eb5064abbe8a0abc0948bfc6f4a3886b1138480ce4fc5d94ae40cc
2026-01-13T00:00:00-05:00
AI-Powered Algorithms for the Prevention and Detection of Computer Malware Infections
arXiv:2601.06219v1 Announce Type: new Abstract: The rise in frequency and complexity of malware attacks is viewed as a major threat to modern digital infrastructure, and traditional signature-based detection methods are becoming less effective as a result. As cyber threats continue to evolve, there is a growing need for intelligent systems that accurately and proactively identify and prevent malware infections. This study presents a new hybrid context-aware malware detection framework (HCAMDF) based on artificial intelligence (AI), which combines static file analysis, dynamic behavioural analysis, and contextual metadata to provide more accurate and timely detection. HCAMDF has a multi-layer architecture consisting of lightweight static classifiers, Long Short-Term Memory (LSTM) networks for real-time behavioural analysis, and ensemble risk scoring that integrates the predictions of the individual layers. Experimental evaluations of the methodology on the benchmark datasets EMBER and CIC-MalMem2022 showed that the new approach provides superior performance, with an accuracy of 97.3%, only a 1.5% false positive rate, and minimal detection delay compared to several established machine learning (ML) and deep learning (DL) methods in the same fields. The results show strong evidence that hybrid AI can detect both existing and novel malware variants, laying the foundation for intelligent security systems that enable real-time detection and adapt to a rapidly evolving threat landscape.
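The ensemble risk-scoring layer described above amounts to fusing the per-layer probabilities into one verdict. A minimal sketch, with invented weights and threshold (the paper does not publish these values):

```python
# Illustrative ensemble risk scoring: combine static, dynamic (behavioural)
# and contextual layer scores into a single malware verdict. The weights and
# the 0.5 decision threshold are assumptions for illustration only.

def risk_score(static_s: float, dynamic_s: float, context_s: float,
               weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted fusion of per-layer probabilities, each in [0, 1]."""
    ws, wd, wc = weights
    return ws * static_s + wd * dynamic_s + wc * context_s

def classify(static_s, dynamic_s, context_s, threshold=0.5) -> str:
    score = risk_score(static_s, dynamic_s, context_s)
    return "malicious" if score >= threshold else "benign"

print(classify(0.9, 0.8, 0.3))  # strong static + dynamic evidence
print(classify(0.1, 0.2, 0.4))  # weak evidence across all layers
```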
https://arxiv.org/abs/2601.06219
Academic Papers
svg
4c84b82f123e40ae2b04ba2c46601f3a2970f7e0aed958056b4ca136db9f69c6
2026-01-13T00:00:00-05:00
Breaking Model Lock-in: Cost-Efficient Zero-Shot LLM Routing via a Universal Latent Space
arXiv:2601.06220v1 Announce Type: new Abstract: The rapid proliferation of Large Language Models (LLMs) has led to a fragmented and inefficient ecosystem, a state of "model lock-in" where seamlessly integrating novel models remains a significant bottleneck. Current routing frameworks require exhaustive, costly retraining, hindering scalability and adaptability. We introduce ZeroRouter, a new paradigm for LLM routing that breaks this lock-in. Our approach is founded on a universal latent space, a model-agnostic representation of query difficulty that fundamentally decouples the characterization of a query from the profiling of a model. This allows for zero-shot onboarding of new models without full-scale retraining. ZeroRouter features a context-aware predictor that maps queries to this universal space and a dual-mode optimizer that balances accuracy, cost, and latency. Our framework consistently outperforms all baselines, delivering higher accuracy at lower cost and latency.
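The decoupling idea can be made concrete with a toy router. This is a hedged sketch, not ZeroRouter itself: each model is profiled once by a capability level on a shared difficulty scale plus a cost, and onboarding a new model means adding one profile entry, with no retraining. All numbers are invented.

```python
# Toy zero-shot router over a shared query-difficulty scale. Model profiles
# (capability, cost) are illustrative assumptions.

MODELS = {
    "small":  {"capability": 0.3, "cost": 1.0},
    "medium": {"capability": 0.6, "cost": 4.0},
    "large":  {"capability": 0.9, "cost": 20.0},
}

def route(query_difficulty: float, models=MODELS) -> str:
    """Cheapest model whose profiled capability covers the query difficulty."""
    feasible = [(m["cost"], name) for name, m in models.items()
                if m["capability"] >= query_difficulty]
    if not feasible:  # nothing covers it: fall back to the strongest model
        return max(models, key=lambda n: models[n]["capability"])
    return min(feasible)[1]

print(route(0.2))   # easy query -> cheapest model
print(route(0.7))   # harder query -> smallest model that can handle it

# Zero-shot onboarding: profile a new model once and it is routable at once.
MODELS["new-mid"] = {"capability": 0.7, "cost": 2.5}
print(route(0.7))   # now routed to the cheaper newcomer, no retraining
```

The point of the universal space is exactly this: the query-side predictor never changes when the model pool does.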
https://arxiv.org/abs/2601.06220
Academic Papers
svg
e93f1f4bcc31226128b59b64de69e040b08b13a4dd9219ba5dce6988494e06bd
2026-01-13T00:00:00-05:00
LDTC: Lifelong deep temporal clustering for multivariate time series
arXiv:2601.06221v1 Announce Type: new Abstract: Clustering temporal and dynamically changing multivariate time series from real-world fields, called temporal clustering for short, has been a major challenge due to inherent complexities. Although several deep temporal clustering algorithms have demonstrated a strong advantage over traditional methods in terms of model learning and clustering results, the accuracy of the few algorithms are not satisfactory. None of the existing algorithms can continuously learn new tasks and deal with the dynamic data effectively and efficiently in the sequential tasks learning. To bridge the gap and tackle these issues, this paper proposes a novel algorithm \textbf{L}ifelong \textbf{D}eep \textbf{T}emporal \textbf{C}lustering (\textbf{LDTC}), which effectively integrates dimensionality reduction and temporal clustering into an end-to-end deep unsupervised learning framework. Using a specifically designed autoencoder and jointly optimizing for both the latent representation and clustering objective, the LDTC can achieve high-quality clustering results. Moreover, unlike any previous work, the LDTC is uniquely equipped with the fully dynamic model expansion and rehearsal-based techniques to effectively learn new tasks and to tackle the dynamic data in the sequential tasks learning without the catastrophic forgetting or degradation of the model accuracy. Experiments on seven real-world multivariate time series datasets show that the LDTC is a promising method for dealing with temporal clustering issues effectively and efficiently.
https://arxiv.org/abs/2601.06221
Academic Papers
svg
c7a75910388616ff513e2cdab111fad4a68870b526c05c03c84501abb123b995
2026-01-13T00:00:00-05:00
SAPL: Semantic-Agnostic Prompt Learning in CLIP for Weakly Supervised Image Manipulation Localization
arXiv:2601.06222v1 Announce Type: new Abstract: Malicious image manipulation threatens public safety and requires efficient localization methods. Existing approaches depend on costly pixel-level annotations which make training expensive. Existing weakly supervised methods rely only on image-level binary labels and focus on global classification, often overlooking local edge cues that are critical for precise localization. We observe that feature variations at manipulated boundaries are substantially larger than in interior regions. To address this gap, we propose Semantic-Agnostic Prompt Learning (SAPL) in CLIP, which learns text prompts that intentionally encode non-semantic, boundary-centric cues so that CLIPs multimodal similarity highlights manipulation edges rather than high-level object semantics. SAPL combines two complementary modules Edge-aware Contextual Prompt Learning (ECPL) and Hierarchical Edge Contrastive Learning (HECL) to exploit edge information in both textual and visual spaces. The proposed ECPL leverages edge-enhanced image features to generate learnable textual prompts via an attention mechanism, embedding semantic-irrelevant information into text features, to guide CLIP focusing on manipulation edges. The proposed HECL extract genuine and manipulated edge patches, and utilize contrastive learning to boost the discrimination between genuine edge patches and manipulated edge patches. Finally, we predict the manipulated regions from the similarity map after processing. Extensive experiments on multiple public benchmarks demonstrate that SAPL significantly outperforms existing approaches, achieving state-of-the-art localization performance.
https://arxiv.org/abs/2601.06222
Academic Papers
svg
f7208c0b026a97b3e1e8529d39d48186de6567c42bd36255d9f109b09a1aaceb
2026-01-13T00:00:00-05:00
Toward Safe and Responsible AI Agents: A Three-Pillar Model for Transparency, Accountability, and Trustworthiness
arXiv:2601.06223v1 Announce Type: new Abstract: This paper presents a conceptual and operational framework for developing and operating safe and trustworthy AI agents based on a Three-Pillar Model grounded in transparency, accountability, and trustworthiness. Building on prior work in Human-in-the-Loop systems, reinforcement learning, and collaborative AI, the framework defines an evolutionary path toward autonomous agents that balances increasing automation with appropriate human oversight. The paper argues that safe agent autonomy must be achieved through progressive validation, analogous to the staged development of autonomous driving, rather than through immediate full automation. Transparency and accountability are identified as foundational requirements for establishing user trust and for mitigating known risks in generative AI systems, including hallucinations, data bias, and goal misalignment, such as the inversion problem. The paper further describes three ongoing work streams supporting this framework: public deliberation on AI agents conducted by the Stanford Deliberative Democracy Lab, cross-industry collaboration through the Safe AI Agent Consortium, and the development of open tooling for an agent operating environment aligned with the Three-Pillar Model. Together, these contributions provide both conceptual clarity and practical guidance for enabling the responsible evolution of AI agents that operate transparently, remain aligned with human values, and sustain societal trust.
https://arxiv.org/abs/2601.06223
Academic Papers
svg
836d2fe269db3d82026eddd87b69f8ca2fdb2a25be7f9d3e81d52f26a08f2d46
2026-01-13T00:00:00-05:00
Ground What You See: Hallucination-Resistant MLLMs via Caption Feedback, Diversity-Aware Sampling, and Conflict Regularization
arXiv:2601.06224v1 Announce Type: new Abstract: While Multimodal Large Language Models (MLLMs) have achieved remarkable success across diverse tasks, their practical deployment is severely hindered by hallucination issues, which become particularly acute during Reinforcement Learning (RL) optimization. This paper systematically analyzes the root causes of hallucinations in MLLMs under RL training, identifying three critical factors: (1) an over-reliance on chained visual reasoning, where inaccurate initial descriptions or redundant information anchor subsequent inferences to incorrect premises; (2) insufficient exploration diversity during policy optimization, leading the model to generate overly confident but erroneous outputs; and (3) destructive conflicts between training samples, where Neural Tangent Kernel (NTK) similarity causes false associations and unstable parameter updates. To address these challenges, we propose a comprehensive framework comprising three core modules. First, we enhance visual localization by introducing dedicated planning and captioning stages before the reasoning phase, employing a quality-based caption reward to ensure accurate initial anchoring. Second, to improve exploration, we categorize samples based on the mean and variance of their reward distributions, prioritizing samples with high variance to focus the model on diverse and informative data. Finally, to mitigate sample interference, we regulate NTK similarity by grouping sample pairs and applying an InfoNCE loss to push overly similar pairs apart and pull dissimilar ones closer, thereby guiding gradient interactions toward a balanced range. Experimental results demonstrate that our proposed method significantly reduces hallucination rates and effectively enhances the inference accuracy of MLLMs.
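The diversity-aware sampling step (factor 2 above) can be sketched directly: score each prompt group by the mean and variance of its rollout rewards and keep high-variance groups for the policy update. The reward values below are invented for illustration.

```python
# Hedged sketch of variance-prioritized sampling for RL training: groups whose
# rollouts all succeed or all fail carry little learning signal, so the k
# groups with the highest reward variance are kept. Data values are invented.

def mean_var(rewards):
    """Population mean and variance of a group's rollout rewards."""
    m = sum(rewards) / len(rewards)
    v = sum((r - m) ** 2 for r in rewards) / len(rewards)
    return m, v

def prioritize(groups: dict, k: int) -> list:
    """Keep the k sample groups with the highest reward variance."""
    return sorted(groups, key=lambda g: mean_var(groups[g])[1], reverse=True)[:k]

rollout_rewards = {
    "prompt_a": [1.0, 1.0, 1.0, 1.0],   # solved every time: zero variance
    "prompt_b": [0.0, 1.0, 0.0, 1.0],   # high variance: most informative
    "prompt_c": [0.0, 0.0, 0.0, 0.0],   # never solved: also zero variance
    "prompt_d": [0.2, 0.8, 0.4, 0.6],
}
print(prioritize(rollout_rewards, 2))
```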
https://arxiv.org/abs/2601.06224
Academic Papers
svg
f0b80d822891f0e2ed4acc907d74b3ae52d1325f1039ce40751720fc450abd43
2026-01-13T00:00:00-05:00
Classroom AI: Large Language Models as Grade-Specific Teachers
arXiv:2601.06225v1 Announce Type: new Abstract: Large Language Models (LLMs) offer a promising solution to complement traditional teaching and address global teacher shortages that affect hundreds of millions of children, but they fail to provide grade-appropriate responses for students at different educational levels. We introduce a framework for finetuning LLMs to generate age-appropriate educational content across six grade levels, from lower elementary to adult education. Our framework successfully adapts explanations to match students' comprehension capacities without sacrificing factual correctness. This approach integrates seven established readability metrics through a clustering method and builds a comprehensive dataset for grade-specific content generation. Evaluations across multiple datasets with 208 human participants demonstrate substantial improvements in grade-level alignment, achieving a 35.64 percentage point increase compared to prompt-based methods while maintaining response accuracy. AI-assisted learning tailored to different grade levels has the potential to advance educational engagement and equity.
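One of the established readability metrics such a framework could cluster on is the Flesch-Kincaid grade level. The sketch below uses that public formula with a crude vowel-run syllable heuristic; it illustrates the kind of signal involved, not the paper's actual seven-metric pipeline.

```python
import re

# Flesch-Kincaid grade level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59.
# The syllable counter is a rough vowel-group heuristic, an assumption made
# for illustration only.

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (minimum one per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

simple = "The cat sat. The dog ran."
dense = "Photosynthesis converts electromagnetic radiation into chemical energy."
print(fk_grade(simple) < fk_grade(dense))  # short words score far lower
```

Bucketing texts by such scores is one plausible way to build grade-labelled training data of the kind the abstract describes.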
https://arxiv.org/abs/2601.06225
Academic Papers
svg
ff6964089bd170d81d72069ca3c395943c0de49dde844582890e940053b11a6b
2026-01-13T00:00:00-05:00
Projecting Out the Malice: A Global Subspace Approach to LLM Detoxification
arXiv:2601.06226v1 Announce Type: new Abstract: Large language models (LLMs) exhibit exceptional performance but pose inherent risks of generating toxic content, restricting their safe deployment. While traditional methods (e.g., alignment) adjust output preferences, they fail to eliminate the underlying toxic regions in the parameters, leaving models vulnerable to adversarial attacks. Prior mechanistic studies characterize toxic regions as "toxic vectors" or "layer-wise subspaces", yet our analysis identifies critical limitations: i) removed toxic vectors can be reconstructed via linear combinations of non-toxic vectors, demanding that the entire toxic subspace be targeted; ii) contrastive objectives over limited samples inject noise into layer-wise subspaces, hindering stable extraction. These findings highlight the challenge of identifying a robust toxic subspace and removing it. We therefore propose GLOSS (GLobal tOxic Subspace Suppression), a lightweight method that mitigates toxicity by identifying and eliminating this global subspace from FFN parameters. Experiments on LLMs (e.g., Qwen3) show that GLOSS achieves state-of-the-art detoxification while preserving general capabilities, without requiring large-scale retraining. WARNING: This paper contains content which is toxic in nature.
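The core linear-algebra step in subspace removal is standard: given an orthonormal basis U of the identified subspace, the cleaned weights are W - U(U^T W). A minimal plain-Python sketch on a single weight vector, with a toy one-dimensional "toxic" direction as the assumed basis:

```python
# Projecting a weight vector out of a subspace: w <- w - sum_u (w . u) u for
# an orthonormal basis {u}. The basis below is a toy assumption, not a
# subspace extracted from any real model.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_out(w, basis):
    """Remove the component of w lying in span(basis); basis rows orthonormal."""
    out = list(w)
    for u in basis:
        c = dot(out, u)
        out = [x - c * ux for x, ux in zip(out, u)]
    return out

toxic_basis = [[1.0, 0.0, 0.0]]   # toy 1-D "toxic" direction
w = [2.0, 3.0, -1.0]
cleaned = project_out(w, toxic_basis)
print(cleaned)                        # component along the direction removed
print(dot(cleaned, toxic_basis[0]))  # nothing left in the subspace
```

Applied row-wise to FFN weight matrices, this is the operation a method like GLOSS would perform once the global subspace has been identified; the hard part the abstract emphasizes is identifying that subspace robustly, not the projection itself.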
https://arxiv.org/abs/2601.06226
Academic Papers
svg
45335948c0d6204b4d375e90c8ba729f2330b44d5dd08b93c7594d921491a534
2026-01-13T00:00:00-05:00
When Smaller Wins: Dual-Stage Distillation and Pareto-Guided Compression of Liquid Neural Networks for Edge Battery Prognostics
arXiv:2601.06227v1 Announce Type: new Abstract: Battery management systems increasingly require accurate battery health prognostics under strict on-device constraints. This paper presents DLNet, a practical framework with dual-stage distillation of liquid neural networks that turns a high-capacity model into compact and edge-deployable models for battery health prediction. DLNet first applies Euler discretization to reformulate liquid dynamics for embedded compatibility. It then performs dual-stage knowledge distillation to transfer the teacher model's temporal behavior and recover it after further compression. Pareto-guided selection under joint error-cost objectives retains student models that balance accuracy and efficiency. We evaluate DLNet on a widely used dataset and validate real-device feasibility on an Arduino Nano 33 BLE Sense using int8 deployment. The final deployed student achieves a low error of 0.0066 when predicting battery health over the next 100 cycles, which is 15.4% lower than the teacher model. It reduces the model size from 616 kB to 94 kB, an 84.7% reduction, and takes 21 ms per inference on the device. These results support a practical "smaller wins" observation: a small model can match or exceed a large teacher for edge-based prognostics given proper supervision and selection. Beyond batteries, the DLNet framework can extend to other industrial analytics tasks with strict hardware constraints.
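The Pareto-guided selection step keeps exactly the student models not dominated on the joint (error, size) objectives. A minimal sketch with invented candidates (only the 0.0066/94 kB pair echoes figures from the abstract):

```python
# Pareto-front selection over joint error-cost objectives: a candidate is
# dominated if some other candidate is at least as good on both objectives
# and strictly better on one. Candidate values are largely invented.

def pareto_front(candidates: dict) -> list:
    front = []
    for name, (err, size) in candidates.items():
        dominated = any(
            oe <= err and osz <= size and (oe, osz) != (err, size)
            for other, (oe, osz) in candidates.items() if other != name
        )
        if not dominated:
            front.append(name)
    return sorted(front)

students = {
    "s1": (0.0066, 94),    # low error, small: on the front
    "s2": (0.0050, 300),   # lower error but larger: also on the front
    "s3": (0.0090, 120),   # worse than s1 on both objectives: dominated
}
print(pareto_front(students))
```

Deployment then picks one front member according to the device budget, e.g. the smallest model whose error is acceptable.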
https://arxiv.org/abs/2601.06227
Academic Papers
svg
d50f373bb3046d54d563a83f2ba640dd9f17df3d42879b11ec082c4faf36cf1a
2026-01-13T00:00:00-05:00
Synthetic FMCW Radar Range Azimuth Maps Augmentation with Generative Diffusion Model
arXiv:2601.06228v1 Announce Type: new Abstract: The scarcity and low diversity of well-annotated automotive radar datasets often limit the performance of deep-learning-based environmental perception. To overcome these challenges, we propose a conditional generative framework for synthesizing realistic Frequency-Modulated Continuous-Wave radar Range-Azimuth Maps. Our approach leverages a generative diffusion model to generate radar data for multiple object categories, including pedestrians, cars, and cyclists. Specifically, conditioning is achieved via Confidence Maps, where each channel represents a semantic class and encodes Gaussian-distributed annotations at target locations. To address radar-specific characteristics, we incorporate Geometry Aware Conditioning and Temporal Consistency Regularization into the generative process. Experiments on the ROD2021 dataset demonstrate that signal reconstruction quality improves by 3.6 dB in Peak Signal-to-Noise Ratio over baseline methods, while training with a combination of real and synthetic datasets improves overall mean Average Precision by 4.15% compared with conventional image-processing-based augmentation. These results indicate that our generative framework not only produces physically plausible and diverse radar spectra but also substantially improves model generalization in downstream tasks.
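The Confidence Map conditioning described above can be sketched as one per-class channel with a Gaussian bump at each annotated target location. Grid size and sigma below are illustrative assumptions, not values from the paper.

```python
import math

# One semantic-class channel of a Confidence Map: Gaussian-distributed
# annotations centred on each target's (row, col) location in the
# range-azimuth grid. Dimensions and sigma are illustrative.

def confidence_map(h, w, targets, sigma=1.5):
    """Return an h x w grid; overlapping targets combine via max."""
    grid = [[0.0] * w for _ in range(h)]
    for (ty, tx) in targets:
        for y in range(h):
            for x in range(w):
                d2 = (y - ty) ** 2 + (x - tx) ** 2
                grid[y][x] = max(grid[y][x], math.exp(-d2 / (2 * sigma ** 2)))
    return grid

pedestrian_channel = confidence_map(7, 7, targets=[(3, 3)])
print(round(pedestrian_channel[3][3], 2))   # 1.0 at the annotated centre
print(pedestrian_channel[0][0] < 0.05)      # near zero far from any target
```

Stacking one such channel per class (pedestrian, car, cyclist) yields the conditioning tensor fed to the diffusion model.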
https://arxiv.org/abs/2601.06228
Academic Papers
svg
6ac9b61dc0f6533f82d69fb09921e1dd6c51a9acb273467b94ba668352df289f
2026-01-13T00:00:00-05:00
Triadic Concept Analysis for Logic Interpretation of Simple Artificial Networks
arXiv:2601.06229v1 Announce Type: new Abstract: An artificial neural network (ANN) is a numerical method used to solve complex classification problems. Due to its high classification power, the ANN method often outperforms other classification methods in terms of accuracy. However, an ANN model lacks interpretability compared to methods that use the symbolic paradigm. Our idea is to derive a symbolic representation from a simple ANN model trained on minterm values of input objects. Based on ReLU nodes, the ANN model is partitioned into cells. We convert the ANN model into a cell-based, three-dimensional bit tensor. The theory of Formal Concept Analysis applied to the tensor yields concepts that are represented as logic trees, expressing interpretable attribute interactions. Their evaluations preserve the classification power of the initial ANN model.
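The cell partition induced by ReLU nodes can be illustrated concretely: each input maps to the on/off bit pattern of the hidden ReLUs, and inputs sharing a pattern fall in the same linear cell. This toy sketch (invented weights, two hidden nodes) shows only that partitioning step, not the subsequent tensor construction or concept extraction.

```python
# Toy ReLU cell partition: one bit per hidden node, set when that node's
# pre-activation is positive. The weights and biases are invented.

def relu_pattern(x, weights, biases):
    """Binary activation pattern identifying the linear cell containing x."""
    return tuple(
        1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        for w, b in zip(weights, biases)
    )

W = [[1.0, -1.0], [0.5, 0.5]]   # two hidden ReLU nodes, 2-D inputs
b = [0.0, -0.6]

print(relu_pattern((1.0, 0.0), W, b))   # cell (1, 0)
print(relu_pattern((1.0, 1.0), W, b))   # a different cell
print(relu_pattern((0.9, -0.1), W, b))  # same cell as the first input
```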
https://arxiv.org/abs/2601.06229
Academic Papers
svg
cae2fa770a669b110e3a2425875d828c08c5867bfafb9db64107baa6174097a4
2026-01-13T00:00:00-05:00
Employ SmartNICs' Data Path Accelerators for Ordered Key-Value Stores
arXiv:2601.06231v1 Announce Type: new Abstract: Remote in-memory key-value (KV) stores serve as a cornerstone for diverse modern workloads, and high-speed range scans are frequently a requirement. However, current architectures rarely achieve a simultaneous balance of peak efficiency, architectural simplicity, and native support for ordered operations. Conventional host-centric frameworks are restricted by kernel-space network stacks and internal bus latencies. While hash-based alternatives that utilize OS-bypass or run natively on SmartNICs offer high throughput, they lack the data structures necessary for range queries. Distributed RDMA-based systems provide performance and range functionality but often depend on stateful clients, which introduces complexity in scaling and error handling. Alternatively, SmartNIC implementations that traverse trees located in host memory are hampered by high DMA round-trip latencies. This paper introduces a KV store that leverages the on-path Data Path Accelerators (DPAs) of the BlueField-3 SmartNIC to eliminate operating system overhead while facilitating stateless clients and range operations. These DPAs ingest network requests directly from NIC buffers to navigate a lock-free learned index residing in the accelerator's local memory. By deferring value retrieval from the host-side tree replica until the leaf level is reached, the design minimizes PCIe crossings. Write operations are staged in DPA memory and migrated in batches to the host, where structural maintenance is performed before being transactionally stitched back to the SmartNIC. Coupled with a NIC-resident read cache, the system achieves 33 million operations per second (MOPS) for point lookups and 13 MOPS for range queries. Our analysis demonstrates that this architecture matches or exceeds the performance of contemporary state-of-the-art solutions, while we identify hardware refinements that could further accelerate performance.
https://arxiv.org/abs/2601.06231
Academic Papers
svg
d5bb225ed670f286b0811f040546b1799724f4711119faee648e64db8419e6f5
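The lock-free learned index traversed by the DPAs can be illustrated with a toy model; this is a host-language sketch under simplifying assumptions (a single linear segment with a precomputed error bound, rather than the paper's actual on-accelerator structure):

```python
def build(keys):
    """Fit position ~ a*key + b over sorted keys and record the max error."""
    n = len(keys)
    a = (n - 1) / (keys[-1] - keys[0])
    b = -a * keys[0]
    err = max(abs(i - (a * k + b)) for i, k in enumerate(keys))
    return a, b, int(err) + 1

def lookup(keys, model, key):
    """Predict a position, then search only within the certified error bound."""
    a, b, err = model
    guess = int(a * key + b)
    lo = max(0, guess - err)
    hi = min(len(keys), guess + err + 1)
    for i in range(lo, hi):
        if keys[i] == key:
            return i
    return -1
```

The bounded search window is what makes such structures attractive on an accelerator: the number of memory touches per lookup is fixed in advance, and value retrieval from the host-side replica can be deferred until the leaf-level position is known.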
2026-01-13T00:00:00-05:00
Multi-Agent Framework for Controllable and Protected Generative Content Creation: Addressing Copyright and Provenance in AI-Generated Media
arXiv:2601.06232v1 Announce Type: new Abstract: The proliferation of generative AI systems creates unprecedented opportunities for content creation while raising critical concerns about controllability, copyright infringement, and content provenance. Current generative models operate as "black boxes" with limited user control and lack built-in mechanisms to protect intellectual property or trace content origin. We propose a novel multi-agent framework that addresses these challenges through specialized agent roles and integrated watermarking. Our system orchestrates Director, Generator, Reviewer, Integration, and Protection agents to ensure user intent alignment while embedding digital provenance markers. We demonstrate feasibility through two case studies: creative content generation with iterative refinement and copyright protection for AI-generated art in commercial contexts. Preliminary feasibility evidence from prior work indicates up to 23% improvement in semantic alignment and 95% watermark recovery rates. This work contributes to responsible generative AI deployment, positioning multi-agent systems as a solution for trustworthy creative workflows in legal and commercial applications.
https://arxiv.org/abs/2601.06232
Academic Papers
svg
1848152e7c87231cd78e919665e6fa2391da95fee6409bae6521e6af8d74bc0f
2026-01-13T00:00:00-05:00
PCoKG: Personality-aware Commonsense Reasoning with Debate
arXiv:2601.06234v1 Announce Type: new Abstract: Most commonsense reasoning models overlook the influence of personality traits, limiting their effectiveness in personalized systems such as dialogue generation. To address this limitation, we introduce the Personality-aware Commonsense Knowledge Graph (PCoKG), a structured dataset comprising 521,316 quadruples. We begin by employing three evaluators to score and filter events from the ATOMIC dataset, selecting those that are likely to elicit diverse reasoning patterns across different personality types. For knowledge graph construction, we leverage the role-playing capabilities of large language models (LLMs) to perform reasoning tasks. To enhance the quality of the generated knowledge, we incorporate a debate mechanism consisting of a proponent, an opponent, and a judge, which iteratively refines the outputs through feedback loops. We evaluate the dataset from multiple perspectives and conduct fine-tuning and ablation experiments using multiple LLM backbones to assess PCoKG's robustness and the effectiveness of its construction pipeline. Our LoRA-based fine-tuning results indicate a positive correlation between model performance and the parameter scale of the base models. Finally, we apply PCoKG to persona-based dialogue generation, where it demonstrates improved consistency between generated responses and reference outputs. This work bridges the gap between commonsense reasoning and individual cognitive differences, enabling the development of more personalized and context-aware AI systems.
https://arxiv.org/abs/2601.06234
Academic Papers
svg
f69c8e62785d4673b6790695adb4e77278b2307186944443f72675cfd8847a61
2026-01-13T00:00:00-05:00
An Intelligent AI glasses System with Multi-Agent Architecture for Real-Time Voice Processing and Task Execution
arXiv:2601.06235v1 Announce Type: new Abstract: This paper presents an AI glasses system that integrates real-time voice processing, artificial intelligence (AI) agents, and cross-network streaming capabilities. The system employs a dual-agent architecture where Agent 01 handles Automatic Speech Recognition (ASR) and Agent 02 manages AI processing through local Large Language Models (LLMs), Model Context Protocol (MCP) tools, and Retrieval-Augmented Generation (RAG). The system supports real-time RTSP streaming for voice and video data transmission, eye tracking data collection, and remote task execution through RabbitMQ messaging. Implementation demonstrates successful voice command processing with multilingual support and cross-platform task execution capabilities.
https://arxiv.org/abs/2601.06235
Academic Papers
svg
e2352a67dc5f40d3dafecde505a2d06a91701f77edb8d04dd73faef5bd6e7862
2026-01-13T00:00:00-05:00
Data-Dependent Goal Modeling for ML-Enabled Law Enforcement Systems
arXiv:2601.06237v1 Announce Type: new Abstract: Investigating serious crimes is inherently complex and resource-constrained. Law enforcement agencies (LEAs) grapple with overwhelming volumes of offender and incident data, making effective suspect identification difficult. Although machine learning (ML)-enabled systems have been explored to support LEAs, several have failed in practice. This highlights the need to align system behavior with stakeholder goals early in development, motivating the use of Goal-Oriented Requirements Engineering (GORE). This paper reports our experience applying the GORE framework KAOS to designing an ML-enabled system for identifying suspects in online child sexual abuse. We describe how KAOS supported early requirements elaboration, including goal refinement, object modeling, agent assignment, and operationalization. A key finding is the central role of data elicitation: data requirements constrain refinement choices and candidate agents while influencing how goals are linked, operationalized, and satisfied. Conversely, goal elaboration and agent assignment shape data quality expectations and collection needs. Our experience highlights the iterative, bidirectional dependencies between goals, data, and ML performance. We contribute a reference model for integrating GORE with data-driven system development, and identify gaps in KAOS, particularly the need for explicit support for data elicitation and quality management. These insights inform future extensions of KAOS and, more broadly, the application of formal GORE methods to ML-enabled systems for high-stakes societal contexts.
https://arxiv.org/abs/2601.06237
Academic Papers
svg
62f3c20a6bc989cfb6eb3a1df0922fa94d5c6e3cc4c2f7af4bc2c577234a0b60
2026-01-13T00:00:00-05:00
SPINAL -- Scaling-law and Preference Integration in Neural Alignment Layers
arXiv:2601.06238v1 Announce Type: new Abstract: Direct Preference Optimization (DPO) is a principled, scalable alternative to RLHF for aligning large language models from pairwise preferences, but its internal geometric footprint remains undercharacterized, limiting audits, checkpoint comparisons, and failure prediction. We introduce SPINAL (Scaling-law and Preference Integration in Neural Alignment Layers), a diagnostic that measures how alignment reshapes representations across depth by tracing localized structural change layer by layer. Across model families, DPO produces a layerwise calibration effect concentrated in the final decoder blocks (often layers 21-30), where preference gradients most directly affect the next-token distribution. SPINAL encodes each checkpoint as a depth trace over (layer index, contraction score, transport score). The contraction score summarizes how quickly the tail of a layer's spectrum decays (how fast small modes vanish); higher values indicate stronger contraction into fewer effective directions. The transport score summarizes how much the token distribution shifts between adjacent layers using a bounded overlap measure; lower values indicate shorter, smoother steps through representation space. Aligned checkpoints show a late-layer ramp-up in contraction and a smooth reduction in transport, consistent with tightened and stabilized policy mass, while unaligned models trace higher-curvature, more entropic, and geometrically incoherent depth paths. Overall, alignment is geometrically localized: the final layers encode the dominant preference-induced corrections. SPINAL turns this localization into a practical audit signal, quantifying where alignment concentrates, how strongly it manifests, and when it begins to destabilize during training.
https://arxiv.org/abs/2601.06238
Academic Papers
svg
bc4db04cdd83aa76af97f93604dbcc718418583db90ca457755e7daaef925edd
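The transport score's "bounded overlap measure" is not specified in the abstract, so as a hedged illustration one can use total variation distance between adjacent-layer token distributions (an assumption, not necessarily SPINAL's actual choice):

```python
def transport(p, q):
    """Bounded shift between two token distributions (total variation, in [0, 1])."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def depth_trace(layer_dists):
    """Transport score per adjacent-layer pair; lower values mean smoother,
    shorter steps through representation space."""
    return [transport(p, q) for p, q in zip(layer_dists, layer_dists[1:])]
```

An aligned checkpoint would show this trace shrinking toward the final decoder blocks, matching the "smooth reduction in transport" the abstract describes.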
2026-01-13T00:00:00-05:00
A survey of facial recognition techniques
arXiv:2601.06239v1 Announce Type: new Abstract: As multimedia content grows rapidly, facial recognition has become a major research field, particularly in recent years. The human face is among the most challenging subjects for researchers in image processing and computer vision: a complex object with myriad distinctive features that can be used to identify it. This survey focuses on the most challenging facial characteristics, including variations in lighting, ageing, pose, partial occlusion, and facial expression, and presents methodological solutions. Accounting for these factors is essential for building effective facial recognition mechanisms that operate on facial images. This paper reviews the most widely studied face recognition methods, which include Hidden Markov Models, Principal Component Analysis (PCA), Elastic Bunch Graph Matching, Support Vector Machines (SVM), Gabor Wavelets, Artificial Neural Networks (ANN), Eigenfaces, Independent Component Analysis (ICA), and the 3D Morphable Model. Alongside these methods, we have also analyzed images from a number of facial databases, namely JAFFE, FEI, Yale, LFW, AT&T (formerly ORL), and AR (created by Martinez and Benavente). Overall, this survey aims to give a thorough literature review of face recognition and its applications, with some experimental results provided at the end after a detailed discussion.
https://arxiv.org/abs/2601.06239
Academic Papers
svg
37e3bfdda1710573a472ee86957b9770089a0fff0db697d53cc6ffceb2bf36b8
2026-01-13T00:00:00-05:00
Agentic AI Microservice Framework for Deepfake and Document Fraud Detection in KYC Pipelines
arXiv:2601.06241v1 Announce Type: new Abstract: The rapid proliferation of synthetic media, presentation attacks, and document forgeries has created significant vulnerabilities in Know Your Customer (KYC) workflows across financial services, telecommunications, and digital-identity ecosystems. Traditional monolithic KYC systems lack the scalability and agility required to counter adaptive fraud. This paper proposes an Agentic AI Microservice Framework that integrates modular vision models, liveness assessment, deepfake detection, OCR-based document forensics, multimodal identity linking, and a policy driven risk engine. The system leverages autonomous micro-agents for task decomposition, pipeline orchestration, dynamic retries, and human-in-the-loop escalation. Experimental evaluations demonstrate improved detection accuracy, reduced latency, and enhanced resilience against adversarial inputs. The framework offers a scalable blueprint for regulated industries seeking robust, real-time, and privacy-preserving KYC verification.
https://arxiv.org/abs/2601.06241
Academic Papers
svg
7f5577814cadf16d1462ef9ca510bb3c1f636f28d3c5503d4e14f3807e2301f4
2026-01-13T00:00:00-05:00
Matrix Factorization Framework for Community Detection under the Degree-Corrected Block Model
arXiv:2601.06262v1 Announce Type: new Abstract: Community detection is a fundamental task in data analysis. Block models form a standard approach to partition nodes according to a graph model, facilitating the analysis and interpretation of the network structure. By grouping nodes with similar connection patterns, they enable the identification of a wide variety of underlying structures. The degree-corrected block model (DCBM) is an established model that accounts for the heterogeneity of node degrees. However, existing inference methods for the DCBM are heuristics that are highly sensitive to initialization, typically done randomly. In this work, we show that DCBM inference can be reformulated as a constrained nonnegative matrix factorization problem. Leveraging this insight, we propose a novel method for community detection and a theoretically well-grounded initialization strategy that provides an initial estimate of communities for inference algorithms. Our approach is agnostic to any specific network structure and applies to graphs with any structure representable by a DCBM, not only assortative ones. Experiments on synthetic and real benchmark networks show that our method detects communities comparable to those found by DCBM inference, while scaling linearly with the number of edges and communities; for instance, it processes a graph with 100,000 nodes and 2,000,000 edges in approximately 4 minutes. Moreover, the proposed initialization strategy significantly improves solution quality and reduces the number of iterations required by all tested inference algorithms. Overall, this work provides a scalable and robust framework for community detection and highlights the benefits of a matrix-factorization perspective for the DCBM.
https://arxiv.org/abs/2601.06262
Academic Papers
svg
3a362356d32725128ad0a18be114f07309479bb4cd268bd3ef18c92c3aa25c31
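The DCBM objective that such inference procedures maximize can be made concrete with the Karrer-Newman profile log-likelihood. The sketch below is a toy check (pure Python, adjacency-sum edge-count convention) showing that the correct partition of two disjoint triangles scores higher than a mixed one:

```python
import math
from collections import defaultdict

def dcbm_loglik(edges, labels):
    """Profile log-likelihood sum_rs m_rs * log(m_rs / (kappa_r * kappa_s)),
    where m_rs is the adjacency mass between groups r and s (within-group
    edges counted twice) and kappa_r is the total degree of group r."""
    m = defaultdict(float)
    kappa = defaultdict(float)
    for i, j in edges:
        r, s = labels[i], labels[j]
        m[(r, s)] += 1
        m[(s, r)] += 1
        kappa[r] += 1
        kappa[s] += 1
    return sum(v * math.log(v / (kappa[r] * kappa[s]))
               for (r, s), v in m.items() if v > 0)
```

A matrix-factorization method like the one proposed here would search for the nonnegative factors whose implied partition maximizes exactly this kind of score.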
2026-01-13T00:00:00-05:00
Self-Admitted Technical Debt in LLM Software: An Empirical Comparison with ML and Non-ML Software
arXiv:2601.06266v1 Announce Type: new Abstract: Self-admitted technical debt (SATD), referring to comments flagged by developers that explicitly acknowledge suboptimal code or incomplete functionality, has received extensive attention in machine learning (ML) and traditional (Non-ML) software. However, little is known about how SATD manifests and evolves in contemporary Large Language Model (LLM)-based systems, whose architectures, workflows, and dependencies differ fundamentally from both traditional and pre-LLM ML software. In this paper, we conduct the first empirical study of SATD in the LLM era, replicating and extending prior work on ML technical debt to modern LLM-based systems. We compare SATD prevalence across LLM, ML, and non-ML repositories across a total of 477 repositories (159 per category). We perform survival analysis of SATD introduction and removal to understand the dynamics of technical debt across different development paradigms. Surprisingly, despite their architectural complexity, our results reveal that LLM repositories accumulate SATD at similar rates to ML systems (3.95% vs. 4.10%). However, we observe that LLM repositories remain debt-free 2.4x longer than ML repositories (a median of 492 days vs. 204 days), and then start to accumulate technical debt rapidly. Moreover, our qualitative analysis of 377 SATD instances reveals three new forms of technical debt unique to LLM-based development that have not been reported in prior research: Model-Stack Workaround Debt, Model Dependency Debt, and Performance Optimization Debt. Finally, by mapping SATD to stages of the LLM development pipeline, we observe that debt concentrates
https://arxiv.org/abs/2601.06266
Academic Papers
svg
1413198506408db5afe5d9011b4a69ab428d2aca00be4308801cecfbbe00e432
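The survival analysis of SATD introduction can be reproduced with a plain Kaplan-Meier estimator; the sketch below uses the abstract's median debt-free durations (204 and 492 days) purely as illustrative inputs:

```python
def kaplan_meier(durations, observed):
    """Return (time, S(t)) at each event time; observed=False marks censoring,
    e.g. a repository that was still debt-free at the end of the study."""
    survival, curve = 1.0, []
    event_times = sorted({t for t, o in zip(durations, observed) if o})
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)
        events = sum(1 for d, o in zip(durations, observed) if d == t and o)
        survival *= 1 - events / at_risk
        curve.append((t, survival))
    return curve
```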
2026-01-13T00:00:00-05:00
Automated QoR improvement in OpenROAD with coding agents
arXiv:2601.06268v1 Announce Type: new Abstract: EDA development and innovation has been constrained by scarcity of expert engineering resources. While leading LLMs have demonstrated excellent performance in coding and scientific reasoning tasks, their capacity to advance EDA technology itself has been largely untested. We present AuDoPEDA, an autonomous, repository-grounded coding system built atop OpenAI models and a Codex-class agent that reads OpenROAD, proposes research directions, expands them into implementation steps, and submits executable diffs. Our contributions include (i) a closed-loop LLM framework for EDA code changes; (ii) a task suite and evaluation protocol on OpenROAD for PPA-oriented improvements; and (iii) end-to-end demonstrations with minimal human oversight. Experiments in OpenROAD achieve routed wirelength reductions of up to 5.9%, and effective clock period reductions of up to 10.0%.
https://arxiv.org/abs/2601.06268
Academic Papers
svg
aa1cc5913aa1c8895a12b9d0833d22316ada7007f02356a448afda1899eb7c58
2026-01-13T00:00:00-05:00
FairSCOSCA: Fairness At Arterial Signals -- Just Around The Corner
arXiv:2601.06275v1 Announce Type: new Abstract: Traffic signal control at intersections, especially in arterial networks, is a key lever for mitigating the growing issue of traffic congestion in cities. Despite the widespread deployment of SCOOTS and SCATS, which prioritize efficiency, fairness has remained largely absent from their design logic, often resulting in unfair outcomes for certain road users, such as excessive waiting times. Fairness, however, is a major driver of public acceptance for the implementation of new control systems. Therefore, this work proposes FairSCOSCA, a fairness-enhancing extension to these systems, featuring two novel yet practical design adaptations grounded in multiple normative fairness definitions: (1) green phase optimization incorporating cumulative waiting times, and (2) early termination of underutilized green phases. These extensions ensure a fairer distribution of green times. Evaluated in a calibrated microsimulation case study of the arterial network in Esslingen am Neckar (Germany), FairSCOSCA demonstrates substantial improvements across multiple fairness dimensions (Egalitarian, Rawlsian, Utilitarian, and Harsanyian) without sacrificing traffic efficiency. Compared against Fixed-Cycle, Max-Pressure, and standard SCOOTS/SCATS controllers, FairSCOSCA significantly reduces excessive waiting times, delay inequality, and horizontal discrimination between arterial and feeder roads. This work contributes to the growing literature on equitable traffic control by bridging the gap between fairness theory and the practical enhancement of globally deployed signal systems. An open-source implementation is available on GitHub.
https://arxiv.org/abs/2601.06275
Academic Papers
svg
56366885878cd1e6d3d9e8c020fe607c28a1828f0bd52180a8f2d47dfb349089
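Design adaptation (1), green phase optimization incorporating cumulative waiting times, can be sketched as a phase-scoring rule; the scoring function below is an assumption for illustration, not FairSCOSCA's exact formulation:

```python
def next_green(phases, alpha=1.0):
    """Pick the phase to serve next.

    phases: {phase_name: list of per-vehicle cumulative waiting times in seconds}.
    Scoring by queue length plus alpha * accumulated waiting time lets a short
    queue on a feeder road eventually outrank a busy arterial, which is the
    fairness lever described above.
    """
    def score(waits):
        return len(waits) + alpha * sum(waits)
    return max(phases, key=lambda name: score(phases[name]))
```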
2026-01-13T00:00:00-05:00
Automated Generation of Accurate Privacy Captions From Android Source Code Using Large Language Models
arXiv:2601.06276v1 Announce Type: new Abstract: Privacy captions are short sentences that succinctly describe what personal information is used, how it is used, and why, within an app. These captions can be utilized in various notice formats, such as privacy policies, app rationales, and app store descriptions. However, inaccurate captions may mislead users and expose developers to regulatory fines. Existing approaches to generating privacy notices or just privacy captions include using questionnaires, templates, static analysis, or machine learning. However, these approaches either rely heavily on developers' inputs and thus strain their efforts, use limited source code context, leading to the incomplete capture of app privacy behaviors, or depend on potentially inaccurate privacy policies as a source for creating notices. In this work, we address these limitations by developing Privacy Caption Generator (PCapGen), an approach that - i) automatically identifies and extracts large and precise source code context that implements privacy behaviors in an app, ii) uses a Large Language Model (LLM) to describe coarse- and fine-grained privacy behaviors, and iii) generates accurate, concise, and complete privacy captions to describe the privacy behaviors of the app. Our evaluation shows PCapGen generates concise, complete, and accurate privacy captions as compared to the baseline approach. Furthermore, privacy experts choose PCapGen captions at least 71% of the time, whereas LLMs-as-judge prefer PCapGen captions at least 76% of the time, indicating strong performance of our approach.
https://arxiv.org/abs/2601.06276
Academic Papers
svg
cbf45f3e01c7910e4c3764d36ed8da9e5d787470907ab91f985e8cb884b5ec43
2026-01-13T00:00:00-05:00
EyeTheia: A Lightweight and Accessible Eye-Tracking Toolbox
arXiv:2601.06279v1 Announce Type: new Abstract: We introduce EyeTheia, a lightweight and open deep learning pipeline for webcam-based gaze estimation, designed for browser-based experimental platforms and real-world cognitive and clinical research. EyeTheia enables real-time gaze tracking using only a standard laptop webcam, combining MediaPipe-based landmark extraction with a convolutional neural network inspired by iTracker and optional user-specific fine-tuning. We investigate two complementary strategies: adapting a model pretrained on mobile data and training the same architecture from scratch on a desktop-oriented dataset. Validation results on MPIIFaceGaze show comparable performance between both approaches prior to calibration, while lightweight user-specific fine-tuning consistently reduces gaze prediction error. We further evaluate EyeTheia in a realistic Dot-Probe task and compare it to the commercial webcam-based tracker SeeSo SDK. Results indicate strong agreement in left-right gaze allocation during stimulus presentation, despite higher temporal variability. Overall, EyeTheia provides a transparent and extensible solution for low-cost gaze tracking, suitable for scalable and reproducible experimental and clinical studies. The code, trained models, and experimental materials are publicly available.
https://arxiv.org/abs/2601.06279
Academic Papers
svg
6a4cfc0b4b927b51086c021555daea84bb02dabb7b763bb6664ed5ec532a90f5
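EyeTheia's user-specific adaptation fine-tunes network weights; a lighter calibration step that often accompanies webcam gaze pipelines is a per-axis least-squares correction of the raw predictions. The sketch below shows that alternative (it is not EyeTheia's method; the calibration points are hypothetical):

```python
def fit_axis(predicted, target):
    """Least-squares fit target ~ a * predicted + b for one screen axis,
    using calibration points where the user fixates known on-screen dots."""
    n = len(predicted)
    mx = sum(predicted) / n
    my = sum(target) / n
    var = sum((x - mx) ** 2 for x in predicted)
    cov = sum((x - mx) * (y - my) for x, y in zip(predicted, target))
    a = cov / var
    return a, my - a * mx
```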
2026-01-13T00:00:00-05:00
The Potential of Erroneous Outbound Traffic Analysis to Unveil Silent Internal Anomalies
arXiv:2601.06280v1 Announce Type: new Abstract: Passive measurement has traditionally focused on inbound traffic to detect malicious activity, based on the assumption that threats originate externally. In this paper, we offer a complementary perspective by examining outbound traffic, and argue that a narrow subset -- what we term erroneous outbound traffic -- is a lighter and revealing yet overlooked data source for identifying a broad range of security threats and network problems. This traffic consists of packets sent by internal hosts that either receive no response, trigger ICMP errors, or are ICMP error messages themselves generated in response to unsolicited requests. To demonstrate its potential, we collect and analyse erroneous traffic from a large network, uncovering a variety of previously unnoticed issues, including misconfigurations, obsolete deployments and compromised hosts.
https://arxiv.org/abs/2601.06280
Academic Papers
svg
1e181cd9a8bff26cc06c511285080fa246f1d13c22fc22e288ad601b6066a0fd
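The three categories of erroneous outbound traffic defined above translate directly into a flow-record filter; the record fields below are hypothetical names chosen for illustration:

```python
def is_erroneous(flow):
    """flow: dict with hypothetical keys 'replied', 'icmp_error_received',
    'is_icmp_error', and 'solicited'."""
    if flow.get("icmp_error_received"):   # outbound packet triggered an ICMP error
        return True
    if not flow.get("replied", False):    # outbound packet received no response
        return True
    if flow.get("is_icmp_error") and not flow.get("solicited", True):
        return True                       # ICMP error sent for an unsolicited request
    return False
```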
2026-01-13T00:00:00-05:00
Mining Quantum Software Patterns in Open-Source Projects
arXiv:2601.06281v1 Announce Type: new Abstract: Quantum computing has become an active research field in recent years, as its applications in fields such as cryptography, optimization, and materials science are promising. Along with these developments, challenges and opportunities exist in the field of Quantum Software Engineering, as the development of frameworks and higher-level abstractions has attracted practitioners from diverse backgrounds. Unlike initial quantum frameworks based on the circuit model, recent frameworks and libraries leverage higher-level abstractions for creating quantum programs. This paper presents an empirical study of 985 Jupyter Notebooks from 80 open-source projects to investigate how quantum patterns are applied in practice. Our work involved two main stages. First, we built a knowledge base from three quantum computing frameworks (Qiskit, PennyLane, and Classiq). This process led us to identify and document 9 new patterns that refine and extend the existing quantum computing pattern catalog. Second, we developed a reusable semantic search tool to automatically detect these patterns across our large-scale dataset, providing a practitioner-focused analysis. Our results show that developers use patterns in three levels: from foundational circuit utilities, to common algorithmic primitives (e.g., Amplitude Amplification), up to domain-specific applications for finance and optimization. This indicates a maturing field where developers are increasingly using high-level building blocks to solve real-world problems.
https://arxiv.org/abs/2601.06281
Academic Papers
svg
73914bb3c9c432060859a32b869673687edc76b530b1e7e81a698229585476dc
2026-01-13T00:00:00-05:00
Amory: Building Coherent Narrative-Driven Agent Memory through Agentic Reasoning
arXiv:2601.06282v1 Announce Type: new Abstract: Long-term conversational agents face a fundamental scalability challenge as interactions extend over time: repeatedly processing entire conversation histories becomes computationally prohibitive. Current approaches attempt to solve this through memory frameworks that predominantly fragment conversations into isolated embeddings or graph representations and retrieve relevant ones in a RAG style. While computationally efficient, these methods often treat memory formation minimally and fail to capture the subtlety and coherence of human memory. We introduce Amory, a working memory framework that actively constructs structured memory representations through enhancing agentic reasoning during offline time. Amory organizes conversational fragments into episodic narratives, consolidates memories with momentum, and semanticizes peripheral facts into semantic memory. At retrieval time, the system employs coherence-driven reasoning over narrative structures. Evaluated on the LOCOMO benchmark for long-term reasoning, Amory achieves considerable improvements over previous state-of-the-art, with performance comparable to full context reasoning while reducing response time by 50%. Analysis shows that momentum-aware consolidation significantly enhances response quality, while coherence-driven retrieval provides superior memory coverage compared to embedding-based approaches.
https://arxiv.org/abs/2601.06282
Academic Papers
svg
49588cb3964e13d28a0500493f4ad664ba393b7f92c811934a4f1dfbac6b7ba4
2026-01-13T00:00:00-05:00
NAS-GS: Noise-Aware Sonar Gaussian Splatting
arXiv:2601.06285v1 Announce Type: new Abstract: Underwater sonar imaging plays a crucial role in various applications, including autonomous navigation in murky water, marine archaeology, and environmental monitoring. However, the unique characteristics of sonar images, such as complex noise patterns and the lack of elevation information, pose significant challenges for 3D reconstruction and novel view synthesis. In this paper, we present NAS-GS, a novel Noise-Aware Sonar Gaussian Splatting framework specifically designed to address these challenges. Our approach introduces a Two-Ways Splatting technique that accurately models the dual directions for intensity accumulation and transmittance calculation inherent in sonar imaging, significantly improving rendering speed without sacrificing quality. Moreover, we propose a Gaussian Mixture Model (GMM) based noise model that captures complex sonar noise patterns, including side-lobes, speckle, and multi-path noise. This model enhances the realism of synthesized images while preventing 3D Gaussian overfitting to noise, thereby improving reconstruction accuracy. We demonstrate state-of-the-art performance on both simulated and real-world large-scale offshore sonar scenarios, achieving superior results in novel view synthesis and 3D reconstruction.
https://arxiv.org/abs/2601.06285
Academic Papers
svg
56e0dfb470133d4394abfb188038e106dc072ad9ec09197048daecf5885b9efb
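The GMM noise model can be sketched as a one-dimensional mixture over per-pixel intensity perturbations; the component weights, means, and spreads below are invented for illustration (e.g. a speckle floor plus a heavier side-lobe component), not fitted sonar parameters:

```python
import math
import random

def gmm_pdf(x, comps):
    """Density of a 1-D Gaussian mixture; comps = [(weight, mean, std), ...]."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in comps)

def gmm_sample(comps, rng):
    """Draw one noise value: pick a component by weight, then sample it."""
    r, acc = rng.random(), 0.0
    for w, m, s in comps:
        acc += w
        if r <= acc:
            return rng.gauss(m, s)
    return rng.gauss(comps[-1][1], comps[-1][2])
```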
2026-01-13T00:00:00-05:00
Walk the PLANC: Physics-Guided RL for Agile Humanoid Locomotion on Constrained Footholds
arXiv:2601.06286v1 Announce Type: new Abstract: Bipedal humanoid robots must precisely coordinate balance, timing, and contact decisions when locomoting on constrained footholds such as stepping stones, beams, and planks -- even minor errors can lead to catastrophic failure. Classical optimization and control pipelines handle these constraints well but depend on highly accurate mathematical representations of terrain geometry, making them prone to error when perception is noisy or incomplete. Meanwhile, reinforcement learning has shown strong resilience to disturbances and modeling errors, yet end-to-end policies rarely discover the precise foothold placement and step sequencing required for discontinuous terrain. These contrasting limitations motivate approaches that guide learning with physics-based structure rather than relying purely on reward shaping. In this work, we introduce a locomotion framework in which a reduced-order stepping planner supplies dynamically consistent motion targets that steer the RL training process via Control Lyapunov Function (CLF) rewards. This combination of structured footstep planning and data-driven adaptation produces accurate, agile, and hardware-validated stepping-stone locomotion on a humanoid robot, substantially improving reliability compared to conventional model-free reinforcement-learning baselines.
https://arxiv.org/abs/2601.06286
Academic Papers
svg
5f90953b8fca2a3e6c41f5b81724f995c70145e082f23494a794b33715515732
2026-01-13T00:00:00-05:00
Perception Test 2025: Challenge Summary and a Unified VQA Extension
arXiv:2601.06287v1 Announce Type: new Abstract: The Third Perception Test challenge was organised as a full-day workshop alongside the IEEE/CVF International Conference on Computer Vision (ICCV) 2025. Its primary goal is to benchmark state-of-the-art video models and measure the progress in multimodal perception. This year, the workshop also featured two guest tracks: KiVA (an image understanding challenge) and Physics-IQ (a video generation challenge). In this report, we summarise the results from the main Perception Test challenge, detailing both the existing tasks as well as novel additions to the benchmark. In this iteration, we placed an emphasis on task unification, as this poses a more challenging test for current SOTA multimodal models. The challenge included five consolidated tracks: unified video QA, unified object and point tracking, unified action and sound localisation, grounded video QA, and hour-long video QA, alongside an analysis and interpretability track that is still open for submissions. Notably, the unified video QA track introduced a novel subset that reformulates traditional perception tasks (such as point tracking and temporal action localisation) as multiple-choice video QA questions that video-language models can natively tackle. The unified object and point tracking merged the original object tracking and point tracking tasks, whereas the unified action and sound localisation merged the original temporal action localisation and temporal sound localisation tracks. Accordingly, we required competitors to use unified approaches rather than engineered pipelines with task-specific models. By proposing such a unified challenge, Perception Test 2025 highlights the significant difficulties existing models face when tackling diverse perception tasks through unified interfaces.
https://arxiv.org/abs/2601.06287
Academic Papers
svg
cb4fe7b08c85093366daeee732b79e25d7274fe49bc4acf22f68367f45934f0b
2026-01-13T00:00:00-05:00
AIConfigurator: Lightning-Fast Configuration Optimization for Multi-Framework LLM Serving
arXiv:2601.06288v1 Announce Type: new Abstract: Optimizing Large Language Model (LLM) inference in production systems is increasingly difficult due to dynamic workloads, stringent latency/throughput targets, and a rapidly expanding configuration space. This complexity spans not only distributed parallelism strategies (tensor/pipeline/expert) but also intricate framework-specific runtime parameters, such as CUDA graph enablement, available KV-cache memory fractions, and maximum token capacity, which drastically impact performance. The diversity of modern inference frameworks (e.g., TRT-LLM, vLLM, SGLang), each employing distinct kernels and execution policies, makes manual tuning both framework-specific and computationally prohibitive. We present AIConfigurator, a unified performance-modeling system that enables rapid, framework-agnostic inference configuration search without requiring GPU-based profiling. AIConfigurator combines (1) a methodology that decomposes inference into analytically modelable primitives (GEMM, attention, communication, and memory operations) while capturing framework-specific scheduling dynamics; (2) a calibrated kernel-level performance database for these primitives across a wide range of hardware platforms and popular open-weights models (GPT-OSS, Qwen, DeepSeek, Llama, Mistral); and (3) an abstraction layer that automatically resolves optimal launch parameters for the target backend, seamlessly integrating into production-grade orchestration systems. Evaluation on production LLM serving workloads demonstrates that AIConfigurator identifies superior serving configurations that improve performance by up to 40% for dense models (e.g., Qwen3-32B) and 50% for MoE architectures (e.g., DeepSeek-V3), while completing searches within 30 seconds on average, enabling rapid exploration of vast design spaces, from cluster topology down to engine-specific flags.
https://arxiv.org/abs/2601.06288
Academic Papers
svg
84e6f2ef5dfb649e1f85862e3606a2ecdef3a9d75fbd1b8a51899b11ae3d68dc
2026-01-13T00:00:00-05:00
How well can off-the-shelf LLMs elucidate molecular structures from mass spectra using chain-of-thought reasoning?
arXiv:2601.06289v1 Announce Type: new Abstract: Mass spectrometry (MS) is a powerful analytical technique for identifying small molecules, yet determining complete molecular structures directly from tandem mass spectra (MS/MS) remains a long-standing challenge due to complex fragmentation patterns and the vast diversity of chemical space. Recent progress in large language models (LLMs) has shown promise for reasoning-intensive scientific tasks, but their capability for chemical interpretation is still unclear. In this work, we introduce a Chain-of-Thought (CoT) prompting framework and benchmark that evaluate how LLMs reason about mass spectral data to predict molecular structures. We formalize expert chemists' reasoning steps, such as double bond equivalent (DBE) analysis, neutral loss identification, and fragment assembly, into structured prompts and assess multiple state-of-the-art LLMs (Claude-3.5-Sonnet, GPT-4o-mini, and Llama-3 series) in a zero-shot setting using the MassSpecGym dataset. Our evaluation across metrics of SMILES validity, formula consistency, and structural similarity reveals that while LLMs can produce syntactically valid and partially plausible structures, they fail to achieve chemical accuracy or to link reasoning to correct molecular predictions. These findings highlight both the interpretive potential and the current limitations of LLM-based reasoning for molecular elucidation, providing a foundation for future work that combines domain knowledge and reinforcement learning to achieve chemically grounded AI reasoning.
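The double bond equivalent (DBE) analysis that the prompts formalize follows the standard degree-of-unsaturation formula, DBE = C - H/2 + N/2 + 1, with halogens counted alongside hydrogen. A minimal sketch of that single step (the parser and function names are illustrative, not code from the paper's framework):

```python
import re

def parse_formula(formula):
    """Parse a molecular formula like 'C8H10N4O2' into element counts."""
    counts = {}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    return counts

def dbe(formula):
    """Degree of unsaturation: DBE = C - (H + halogens)/2 + N/2 + 1."""
    c = parse_formula(formula)
    h_like = c.get("H", 0) + sum(c.get(x, 0) for x in ("F", "Cl", "Br", "I"))
    return c.get("C", 0) - h_like / 2 + c.get("N", 0) / 2 + 1

print(dbe("C6H6"))       # benzene -> 4.0 (one ring plus three double bonds)
print(dbe("C8H10N4O2"))  # caffeine -> 6.0
```

A chemist, or an LLM prompted with this step, uses the DBE value inferred from the precursor formula to constrain which rings and multiple bonds a candidate structure may contain.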
https://arxiv.org/abs/2601.06289
Academic Papers
svg
d16f936801c5b36b613c95855605c15f9caeab5592af5f8b0a28c8afbd4f71d3
2026-01-13T00:00:00-05:00
A Structure-Preserving Numerical Scheme for Optimal Control and Design of Mixing in Incompressible Flows
arXiv:2601.06294v1 Announce Type: new Abstract: We develop a structure-preserving computational framework for optimal mixing control in incompressible flows. Our approach exactly conserves the continuous system's key invariants (mass and $L^2$-energy), while also maintaining discrete state-adjoint duality at every time step. These properties are achieved by integrating a centered finite-volume discretization in space with a time-symmetric Crank-Nicolson integrator for both the forward advection and its adjoint, all inside a gradient-based optimization loop. The result is a numerical solver that is faithful to the continuous optimality conditions and efficiently computes mixing-enhancing controls. In our numerical tests, the optimized time-dependent stirring produces a nearly exponential decay of a chosen mix-norm, achieving orders-of-magnitude faster mixing than any single steady flow. To our knowledge, this work provides the first evidence that enforcing physical structure at the discrete level can lead to both exact conservation and highly effective mixing outcomes in optimal flow design.
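The exact $L^2$-energy conservation claimed for the centered-in-space, Crank-Nicolson-in-time pairing can be verified on a toy problem: with a skew-symmetric spatial operator, the Crank-Nicolson update is a Cayley transform and hence an orthogonal map, so the discrete energy is conserved to round-off. A small illustration on 1D periodic advection (not the paper's solver; the grid size and time step are arbitrary):

```python
import numpy as np

# Centered first-derivative matrix with periodic wrap-around: skew-symmetric.
n, a, dt = 64, 1.0, 0.05
dx = 1.0 / n
D = (np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)) / (2 * dx)
A = -a * D                                  # semi-discrete u_t = A u

L = np.eye(n) - 0.5 * dt * A                # Crank-Nicolson left operator
R = np.eye(n) + 0.5 * dt * A                # Crank-Nicolson right operator

x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-100 * (x - 0.5) ** 2)           # smooth initial bump
e0 = np.sum(u * u) * dx                     # initial discrete L2 energy
for _ in range(200):
    u = np.linalg.solve(L, R @ u)
drift = abs(np.sum(u * u) * dx - e0) / e0
print(f"relative energy drift after 200 steps: {drift:.2e}")  # round-off level
```

The same argument is why the adjoint advection step, discretized with the identical time-symmetric integrator, preserves the discrete state-adjoint duality the paper relies on.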
https://arxiv.org/abs/2601.06294
Academic Papers
svg
e2f73b6c1efac3e149ba9e45ae78cee7861fe96c5326e7bd318154f6fa04a46c
2026-01-13T00:00:00-05:00
Separation Results for Constant-Depth and Multilinear Ideal Proof Systems
arXiv:2601.06299v1 Announce Type: new Abstract: In this work, we establish separation theorems for several subsystems of the Ideal Proof System (IPS), an algebraic proof system introduced by Grochow and Pitassi (J. ACM, 2018). Separation theorems are well-studied in the context of classical complexity theory, Boolean circuit complexity, and algebraic complexity. In an important work of Forbes, Shpilka, Tzameret, and Wigderson (ToC, 2021), two proof techniques were introduced to prove lower bounds for subsystems of the IPS, namely the functional method and the multiples method. We use these techniques and obtain the following results. Hierarchy theorem for constant-depth IPS: Recently, Limaye, Srinivasan, and Tavenas (J. ACM 2025) proved a hierarchy theorem for constant-depth algebraic circuits. We adapt the result and prove a hierarchy theorem for constant-depth $\mathsf{IPS}$. We show that there is an unsatisfiable multilinear instance refutable by a depth-$\Delta$ $\mathsf{IPS}$ such that any depth-($\Delta/10)$ $\mathsf{IPS}$ refutation for it must have superpolynomial size. This result is proved by building on the multiples method. Separation theorems for multilinear IPS: In an influential work, Raz (ToC, 2006) unconditionally separated two algebraic complexity classes, namely multilinear $\mathsf{NC}^{1}$ from multilinear $\mathsf{NC}^{2}$. In this work, we prove a similar result for a well-studied fragment of multilinear-$\mathsf{IPS}$. Specifically, we present an unsatisfiable instance such that its functional refutation, i.e., the unique multilinear polynomial agreeing with the inverse of the polynomial over the Boolean cube, has a small multilinear-$\mathsf{NC}^{2}$ circuit. However, any multilinear-$\mathsf{NC}^{1}$ $\mathsf{IPS}$ refutation ($\mathsf{IPS}_{\mathsf{LIN}}$) for it must have superpolynomial size. This result is proved by building on the functional method.
https://arxiv.org/abs/2601.06299
Academic Papers
svg
f0bdc22a5c1777793be290b69254e56602e9149d4fc17f84d648609489b70b0c
2026-01-13T00:00:00-05:00
$\texttt{AMEND++}$: Benchmarking Eligibility Criteria Amendments in Clinical Trials
arXiv:2601.06300v1 Announce Type: new Abstract: Clinical trial amendments frequently introduce delays, increased costs, and administrative burden, with eligibility criteria being the most commonly amended component. We introduce \textit{eligibility criteria amendment prediction}, a novel NLP task that aims to forecast whether the eligibility criteria of an initial trial protocol will undergo future amendments. To support this task, we release $\texttt{AMEND++}$, a benchmark suite comprising two datasets: $\texttt{AMEND}$, which captures eligibility-criteria version histories and amendment labels from public clinical trials, and $\texttt{AMEND\_LLM}$, a refined subset curated using an LLM-based denoising pipeline to isolate substantive changes. We further propose $\textit{Change-Aware Masked Language Modeling}$ (CAMLM), a revision-aware pretraining strategy that leverages historical edits to learn amendment-sensitive representations. Experiments across diverse baselines show that CAMLM consistently improves amendment prediction, enabling more robust and cost-effective clinical trial design.
https://arxiv.org/abs/2601.06300
Academic Papers
svg
7c4a85fd27af648087fc76ee6f43db9b1ec524c144c53a468aae3c36e9483cfd
2026-01-13T00:00:00-05:00
Beyond BeautifulSoup: Benchmarking LLM-Powered Web Scraping for Everyday Users
arXiv:2601.06301v1 Announce Type: new Abstract: Web scraping has historically required technical expertise in HTML parsing, session management, and authentication circumvention, which limited large-scale data extraction to skilled developers. We argue that large language models (LLMs) have democratized web scraping, enabling low-skill users to execute sophisticated operations through simple natural language prompts. While extensive benchmarks evaluate these tools under optimal expert conditions, we show that without extensive manual effort, current LLM-based workflows allow novice users to scrape complex websites that would otherwise be inaccessible. We systematically benchmark what everyday users can do with off-the-shelf LLM tools across 35 sites spanning five security tiers, including authentication, anti-bot, and CAPTCHA controls. We devise and evaluate two distinct workflows: (a) LLM-assisted scripting, where users prompt LLMs to generate traditional scraping code but maintain manual execution control, and (b) end-to-end LLM agents, which autonomously navigate and extract data through integrated tool use. Our results demonstrate that end-to-end agents have made complex scraping accessible, requiring as little as a single prompt with minimal refinement (fewer than 5 changes) to complete workflows. We also highlight scenarios where LLM-assisted scripting may be simpler and faster for static sites. In light of these findings, we provide simple procedures for novices to use these workflows and gauge what adversaries could achieve with them.
https://arxiv.org/abs/2601.06301
Academic Papers
svg
e63871169947e6bc06c00f8d4d1b5bb984ee14cdd87356308faaf1292d99d6bb
2026-01-13T00:00:00-05:00
Why LoRA Fails to Forget: Regularized Low-Rank Adaptation Against Backdoors in Language Models
arXiv:2601.06305v1 Announce Type: new Abstract: Low-Rank Adaptation (LoRA) is widely used for parameter-efficient fine-tuning of large language models, but it is notably ineffective at removing backdoor behaviors from poisoned pretrained models when fine-tuning on a clean dataset. Contrary to the common belief that this weakness is caused primarily by low rank, we show that LoRA's vulnerability is fundamentally spectral. Our analysis identifies two key factors: LoRA updates (i) possess insufficient spectral strength, with singular values far below those of pretrained weights, and (ii) exhibit unfavorable spectral alignment, weakly matching clean-task directions while retaining overlap with trigger-sensitive subspaces. We further establish a critical scaling threshold beyond which LoRA can theoretically suppress trigger-induced activations, and we show empirically that standard LoRA rarely reaches this regime. We introduce Regularized Low-Rank Adaptation (RoRA), which improves forgetting by increasing spectral strength and correcting alignment through clean-strengthened regularization, trigger-insensitive constraints, and post-training spectral rescaling. Experiments across multiple NLP benchmarks and attack settings show that RoRA substantially reduces attack success rates while maintaining clean accuracy.
https://arxiv.org/abs/2601.06305
Academic Papers
svg
c866ef408c1fd5169fb27301da9b3296dd4bce2814ce8f2443001b395d366012
2026-01-13T00:00:00-05:00
SyntaxMind at BLP-2025 Task 1: Leveraging Attention Fusion of CNN and GRU for Hate Speech Detection
arXiv:2601.06306v1 Announce Type: new Abstract: This paper describes our system used in the BLP-2025 Task 1: Hate Speech Detection. We participated in Subtask 1A and Subtask 1B, addressing hate speech classification in Bangla text. Our approach employs a unified architecture that integrates BanglaBERT embeddings with multiple parallel processing branches based on GRUs and CNNs, followed by attention and dense layers for final classification. The model is designed to capture both contextual semantics and local linguistic cues, enabling robust performance across subtasks. The proposed system demonstrated high competitiveness, obtaining 0.7345 micro F1-Score (2nd place) in Subtask 1A and 0.7317 micro F1-Score (5th place) in Subtask 1B.
https://arxiv.org/abs/2601.06306
Academic Papers
svg
e835302ef7e33b37e4de065d5d45c6974e8272da9d7eff18d0f9313cb0de015b
2026-01-13T00:00:00-05:00
A Rising Tide Lifts All Boats: MTQE Rewards for Idioms Improve General Translation Quality
arXiv:2601.06307v1 Announce Type: new Abstract: Non-compositional expressions (e.g., idioms, proverbs, and metaphors) pose significant challenges for neural machine translation systems because their meanings cannot be derived from individual words alone. These expressions encode rich cultural meaning and have both figurative and literal readings, making accurate translation difficult. Because models are fairly good at translating compositional text, we investigate GRPO-style fine-tuning using Machine Translation Quality Estimation (MTQE) models as reward functions to train models to better translate idioms. Using Chinese and Hindi idiom datasets, we find that idiom translation abilities improve by ~14 points, general (non-idiomatic) translation implicitly improves by ~8 points, and cross-lingual translation abilities (trained on one language, evaluated on another) improve by ~6 points. Overall, our work quantifies the non-compositional translation gap and offers insights for developing LLMs with stronger cross-cultural and figurative language understanding.
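GRPO-style fine-tuning samples a group of candidate translations per prompt, scores each with the MTQE reward model, and uses group-normalized (z-scored) rewards as advantages so the policy is pushed toward above-average samples. A minimal sketch of just that normalization step, with made-up reward values standing in for MTQE scores:

```python
def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: z-score each reward within its own group.

    Samples scoring above the group mean get positive advantage, below-mean
    samples get negative advantage; the group mean acts as a learned-free
    baseline, which is the core idea behind GRPO's critic-less training.
    """
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four candidate translations of one idiom, scored by a hypothetical QE model.
adv = grpo_advantages([0.9, 0.5, 0.7, 0.3])
print([round(a, 3) for a in adv])  # [1.342, -0.447, 0.447, -1.342]
```

The zero-mean property means the update only reshuffles probability mass among the group's candidates rather than uniformly inflating likelihoods.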
https://arxiv.org/abs/2601.06307
Academic Papers
svg
449ff86c69f2ccce5a04bc21a2846ab18524e03d2c0d42b29940bada8a990204
2026-01-13T00:00:00-05:00
VideoWeave: A Data-Centric Approach for Efficient Video Understanding
arXiv:2601.06309v1 Announce Type: new Abstract: Training video-language models is often prohibitively expensive due to the high cost of processing long frame sequences and the limited availability of annotated long videos. We present VideoWeave, a simple yet effective approach to improve data efficiency by constructing synthetic long-context training samples that splice together short, captioned videos from existing datasets. Rather than modifying model architectures or optimization objectives, VideoWeave reorganizes available video-text pairs to expand temporal diversity within fixed compute. We systematically study how different data composition strategies, such as random versus visually clustered splicing and caption enrichment, affect downstream performance on video question answering. Under identical compute constraints, models trained with VideoWeave achieve higher accuracy than conventional video finetuning. Our results highlight that reorganizing training data, rather than altering architectures, may offer a simple and scalable path for training video-language models. We link our code for all experiments here.
https://arxiv.org/abs/2601.06309
Academic Papers
svg
1cbe416f52b69b337f7569025d840eca8da0a01737b5565fab6fd07fc2df9788
2026-01-13T00:00:00-05:00
C-EQ-ALINEA: Distributed, Coordinated, and Equitable Ramp Metering Strategy for Sustainable Freeway Operations
arXiv:2601.06311v1 Announce Type: new Abstract: Ramp metering is a widely deployed traffic management strategy for improving freeway efficiency, yet conventional approaches often lead to highly uneven delay distributions across on-ramps, undermining user acceptance and long-term sustainability. While existing fairness-aware ramp metering methods can mitigate such disparities, they typically rely on centralized optimization, detailed traffic models, or data-intensive learning frameworks, limiting their real-world applicability, particularly in networks operating legacy ALINEA-based systems. This paper proposes C-EQ-ALINEA, a decentralized, coordinated, and equity-aware extension of the classical ALINEA feedback controller. The approach introduces lightweight information exchange among neighbouring ramps, enabling local coordination that balances congestion impacts without centralized control, additional infrastructure, or complex optimization. C-EQ-ALINEA preserves the simplicity and robustness of ALINEA while explicitly addressing multiple notions of fairness, including Harsanyian, Egalitarian, Rawlsian, and Aristotelian perspectives. The method is evaluated in a calibrated 24-hour microsimulation of Amsterdam's A10 ring road using SUMO. Results demonstrate that C-EQ-ALINEA substantially improves the equity of delay distributions across ramps and users, while maintaining, and in several configurations surpassing, the efficiency of established coordinated strategies such as METALINE. These findings indicate that meaningful fairness gains can be achieved through minimal algorithmic extensions to widely deployed controllers, offering a practical and scalable pathway toward sustainable and socially acceptable freeway operations. An open-source implementation is available on GitHub.
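For reference, the classical ALINEA controller that C-EQ-ALINEA extends is a one-line integral feedback law on downstream occupancy: r(k) = r(k-1) + K_R (o_target - o_measured), clamped to the ramp's physical limits. A minimal sketch with illustrative gain and bounds (the coordination and equity terms of the paper are deliberately not modeled here):

```python
def alinea_step(r_prev, occ_meas, occ_target=0.18, k_r=70.0,
                r_min=200.0, r_max=1800.0):
    """One classical ALINEA update (metering rate in veh/h).

    If measured downstream occupancy exceeds the target, the metering rate
    is reduced; if the freeway has slack, the rate is increased. The gain
    k_r and bounds are illustrative, not calibrated values from the paper.
    """
    r = r_prev + k_r * (occ_target - occ_meas)
    return max(r_min, min(r_max, r))

# Occupancy above target -> metering rate decreases slightly.
print(alinea_step(1200.0, occ_meas=0.25))  # ~1195 veh/h
```

C-EQ-ALINEA would additionally exchange delay information with neighbouring ramps and shift metering effort toward ramps that have been under-served, which is where the equity notions enter.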
https://arxiv.org/abs/2601.06311
Academic Papers
svg
061358a1dd34196e70a8a6f4271edd0243b3d69021efcf96a8220d9ab80736fa
2026-01-13T00:00:00-05:00
Koopman Model Dimension Reduction via Variational Bayesian Inference and Graph Search
arXiv:2601.06315v1 Announce Type: new Abstract: The Koopman operator has recently gained increasing attention in the control systems community for its ability to bridge linear and nonlinear systems. Data-driven Koopman operator approximations have established themselves as key enablers for system identification and model predictive control. Nonetheless, such methods commonly entail a preselected definition of states in the function space, leading to high-dimensional overfitting models and degraded long-term prediction performance. We address this problem by proposing a hierarchical probabilistic approach to the Koopman model identification problem. In our method, elements of the model are treated as random variables, and the posterior estimates are found using variational Bayesian (VB) inference updates. Our model is distinguished from others by the integration of inclusion flags. With the help of the inclusion flags, we intuitively threshold the probability of each state in the model. We then propose a graph-search-based algorithm to reduce the preselected states of the Koopman model. We demonstrate that our reduction method overcomes numerical instabilities and that the reduced models maintain performance in simulated and real experiments.
https://arxiv.org/abs/2601.06315
Academic Papers
svg
978c620040a2bb1eb28379017026b5ea893195361107a6434fff06f63e42065b
2026-01-13T00:00:00-05:00
Annotating Dimensions of Social Perception in Text: The First Sentence-Level Dataset of Warmth and Competence
arXiv:2601.06316v1 Announce Type: new Abstract: Warmth (W) (often further broken down into Trust (T) and Sociability (S)) and Competence (C) are central dimensions along which people evaluate individuals and social groups (Fiske, 2018). While these constructs are well established in social psychology, they are only starting to get attention in NLP research through word-level lexicons, which do not completely capture their contextual expression in larger text units and discourse. In this work, we introduce Warmth and Competence Sentences (W&C-Sent), the first sentence-level dataset annotated for warmth and competence. The dataset includes over 1,600 English sentence--target pairs annotated along three dimensions: trust and sociability (components of warmth), and competence. The sentences in W&C-Sent are from social media and often express attitudes and opinions about specific individuals or social groups (the targets of our annotations). We describe the data collection, annotation, and quality-control procedures in detail, and evaluate a range of large language models (LLMs) on their ability to identify trust, sociability, and competence in text. W&C-Sent provides a new resource for analyzing warmth and competence in language and supports future research at the intersection of NLP and computational social science.
https://arxiv.org/abs/2601.06316
Academic Papers
svg
c12e1829e206474d28575ac092b1d773e21ede7ca0a785722aef16497b9f068d
2026-01-13T00:00:00-05:00
Random is Faster than Systematic in Multi-Objective Local Search
arXiv:2601.06318v1 Announce Type: new Abstract: Local search is a fundamental method in operations research and combinatorial optimisation. It has been widely applied to a variety of challenging problems, including multi-objective optimisation where multiple, often conflicting, objectives need to be simultaneously considered. In multi-objective local search algorithms, a common practice is to maintain an archive of all non-dominated solutions found so far, from which the algorithm iteratively samples a solution to explore its neighbourhood. A central issue in this process is how to explore the neighbourhood of a selected solution. In general, there are two main approaches: 1) systematic exploration and 2) random sampling. The former systematically explores the solution's neighbours until a stopping condition is met -- for example, when the neighbourhood is exhausted (i.e., the best improvement strategy) or once a better solution is found (i.e., first improvement). In contrast, the latter randomly selects and evaluates only one neighbour of the solution. One may think systematic exploration would be more efficient, as it prevents revisiting the same neighbours multiple times. In this paper, however, we show that this may not be the case. We first empirically demonstrate that the random sampling method is consistently faster than the systematic exploration method across a range of multi-objective problems. We then give an intuitive explanation for this phenomenon using toy examples, showing that the superior performance of the random sampling method relies on the distribution of ``good neighbours''. Next, we show that the number of such neighbours follows a certain probability distribution during the search. Lastly, building on this distribution, we provide a theoretical insight into why random sampling is more efficient than systematic exploration, regardless of whether the best improvement or first improvement strategy is used.
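The random-sampling variant is simple to state: draw one archived non-dominated solution, evaluate a single random neighbour, and update the archive. A toy sketch on the classic bi-objective LOTZ bitstring problem (our own illustration of the general scheme, not the paper's benchmark set):

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def lotz(bits):
    """Toy bi-objective problem: (leading ones, trailing zeros)."""
    lo = next((i for i, b in enumerate(bits) if b == 0), len(bits))
    tz = next((i for i, b in enumerate(reversed(bits)) if b == 1), len(bits))
    return (lo, tz)

def random_sampling_step(archive):
    """One iteration: pick an archived solution, evaluate ONE random
    neighbour (a single bit flip), and update the non-dominated archive."""
    sol = random.choice(list(archive))
    i = random.randrange(len(sol))
    neigh = sol[:i] + (1 - sol[i],) + sol[i + 1:]
    if not any(dominates(lotz(a), lotz(neigh)) for a in archive):
        archive = {a for a in archive if not dominates(lotz(neigh), lotz(a))}
        archive.add(neigh)
    return archive

random.seed(0)
archive = {(0, 1, 0, 1, 0, 1)}
for _ in range(500):
    archive = random_sampling_step(archive)
print(sorted(lotz(a) for a in archive))
```

Systematic exploration would instead evaluate all n bit-flip neighbours of the selected solution before moving on; the paper's point is that spending those n evaluations on n independently sampled (solution, neighbour) pairs tends to be faster in practice.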
https://arxiv.org/abs/2601.06318
Academic Papers
svg
fd7c7d1e06b0f97ef7aab2f4ab2f753c4587db8817050d1c35b6635a2a929acc
2026-01-13T00:00:00-05:00
SourceNet: Interpretable Sim-to-Real Inference on Variable-Geometry Sensor Arrays for Earthquake Source Inversion
arXiv:2601.06320v1 Announce Type: new Abstract: Inferring high-dimensional physical states from sparse, ad-hoc sensor arrays is a fundamental challenge across AI for Science, complicated by irregular geometries and the profound Sim-to-Real gap in physical modeling. Taking earthquake source characterization as a representative challenge, we address limitations in conventional deep learning: CNNs demand fixed grids, while pooling-based architectures (e.g., DeepSets) struggle to capture the relational wave physics. Here, we propose SourceNet, a Transformer-based framework that treats the sensor array as a flexible set to model arbitrary geometries. To bridge the reality gap, we introduce Physics-Structured Domain Randomization (PSDR). Instead of forcing feature alignment, PSDR randomizes the governing physical dynamics by varying velocity structures, propagation effects, and sensor availability, forcing the model to learn robust representations invariant to unmodeled environmental heterogeneity. By pre-training on 100,000 synthetic events and fine-tuning on ~2,000 real-world events, SourceNet achieves state-of-the-art precision on held-out real data. This demonstrates exceptional data efficiency, and matches classical solvers while enabling real-time processing. Remarkably, interpretability analysis reveals that the model shows scientific-agent-like features: it autonomously discovers geometric information bottlenecks and learns an attention policy that prioritizes sparse sensor placements, effectively recovering principles of optimal experimental design from data alone.
https://arxiv.org/abs/2601.06320
Academic Papers
svg
8ccd56915b8022e7ef8b732901caf2c1d5ab9f9abf10bbaa53efbd381c527ac3
2026-01-13T00:00:00-05:00
A Data-Driven Surrogate Modeling and Sensor/Actuator Placement Framework for Flexible Spacecraft
arXiv:2601.06325v1 Announce Type: new Abstract: Flexible spacecraft structures present significant challenges for physical and control system design due to nonlinear dynamics, mission constraints, environmental variables, and changing operational conditions. This paper presents a data-driven framework for constructing reduced-order surrogate models of a flexible spacecraft using the method of Dynamic Mode Decomposition (DMD), followed by optimal sensor/actuator pair placement. High-fidelity simulation data from a nonlinear flexible spacecraft model, including coupled rigid-body and elastic modes, are captured by defining a mesh of nodes over the spacecraft body. The data-driven methods are then used to construct a modal model from the time histories of these node points. Optimal sensor/actuator placement for controllability and observability is performed via a nonlinear programming technique that maximizes the singular values of the Hankel matrix. Finally, the sensor placement and dynamics modeling approach is iterated to account for changes in the dynamic system introduced by sensor/actuator physical mass. The proposed methodology enables initialization of physical modeling without requiring a direct analytical model and provides a practical solution for onboard implementation in model-based control and estimation systems. Results demonstrate that the design methodology achieves substantial model-order reduction while preserving dynamic fidelity, and provide insight into effective sensor-actuator configurations for estimation and control.
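The DMD step at the core of such a framework fits a best-fit linear operator to snapshot pairs and reads modes and growth rates off its eigendecomposition. A minimal exact-DMD sketch on a known 2-state linear system (illustrative only; the paper applies this to meshed node time histories):

```python
import numpy as np

# Generate snapshots x_{k+1} = A_true x_k from known linear dynamics,
# then recover the eigenvalues of A_true from data alone via exact DMD.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])            # eigenvalues 0.9 and 0.8
snaps = [rng.standard_normal(2)]
for _ in range(20):
    snaps.append(A_true @ snaps[-1])
X = np.array(snaps[:-1]).T                  # snapshots x_0 .. x_{m-1}
X2 = np.array(snaps[1:]).T                  # shifted snapshots x_1 .. x_m

U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                       # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)   # reduced best-fit operator
eigvals, W = np.linalg.eig(A_tilde)
modes = X2 @ Vh.T @ np.diag(1.0 / s) @ W       # exact DMD modes
print(np.sort(eigvals.real))                   # ~ [0.8, 0.9]
```

On the spacecraft problem the snapshot columns would stack all node displacements, and the truncation rank r is what delivers the model-order reduction.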
https://arxiv.org/abs/2601.06325
Academic Papers
svg
f365f3ef06ed232100a6a6aa578312e33b1c30f9f1e343cfa8dd3536943ef4f5
2026-01-13T00:00:00-05:00
From Lagging to Leading: Validating Hard Braking Events as High-Density Indicators of Segment Crash Risk
arXiv:2601.06327v1 Announce Type: new Abstract: Identifying high crash risk road segments and accurately predicting crash incidence is fundamental to implementing effective safety countermeasures. While collision data inherently reflects risk, the infrequency and inconsistent reporting of crashes present a major challenge to robust risk prediction models. The proliferation of connected vehicle technology offers a promising avenue to leverage high-density safety metrics for enhanced crash forecasting. A Hard-Braking Event (HBE), interpreted as an evasive maneuver, functions as a potent proxy for elevated driving risk due to its demonstrable correlation with underlying crash causal factors. Crucially, HBE data is significantly more readily available across the entire road network than conventional collision records. This study systematically evaluated the correlation at the individual road segment level between police-reported collisions and aggregated and anonymized HBEs identified via the Google Android Auto platform, utilizing datasets from California and Virginia. Empirical evidence revealed that HBEs occur at a rate orders of magnitude higher than traffic crashes. Employing state-of-the-practice Negative-Binomial regression models, the analysis established a statistically significant positive correlation between the HBE rate and the crash rate: road segments exhibiting a higher frequency of HBEs were consistently associated with a greater incidence of crashes. The model incorporated and controlled for various confounding factors, including road type, speed profile, proximity to ramps, and road segment slope. HBEs derived from connected vehicle technology thus provide a scalable, high-density safety surrogate metric for network-wide traffic safety assessment, with the potential to optimize safer routing recommendations and inform the strategic deployment of active safety countermeasures.
https://arxiv.org/abs/2601.06327
Academic Papers
svg
7f00bbc72c84fc4ecf4f8eb5f9ed0918c2d80035888b7ff1c862c92d24deb756
2026-01-13T00:00:00-05:00
ToolGym: an Open-world Tool-using Environment for Scalable Agent Testing and Data Curation
arXiv:2601.06328v1 Announce Type: new Abstract: Tool-using LLM agents still struggle in open-world settings with large tool pools, long-horizon objectives, wild constraints, and unreliable tool states. For scalable and realistic training and testing, we introduce an open-world tool-using environment built on 5,571 format-unified tools across 204 commonly used apps. It includes a task creation engine that synthesizes long-horizon, multi-tool workflows with wild constraints, and a state controller that injects interruptions and failures to stress-test robustness. On top of this environment, we develop a tool select-then-execute agent framework with a planner-actor decomposition to separate deliberate reasoning and self-correction from step-wise execution. Comprehensive evaluation of state-of-the-art LLMs reveals the misalignment between tool planning and execution abilities, the constraint-following weakness of existing LLMs, and DeepSeek-v3.2's strongest robustness. Finally, we collect 1,170 trajectories from our environment to fine-tune LLMs, achieving superior performance to baselines trained on 119k samples, indicating the environment's value as both a realistic benchmark and a data engine for tool-using agents. Our code and data will be publicly released.
https://arxiv.org/abs/2601.06328
Academic Papers
svg
298cccee64875ff5632926a08cca1c74b1b92960f410111f5b9f6483efd0ec8b
2026-01-13T00:00:00-05:00
On the Fallacy of Global Token Perplexity in Spoken Language Model Evaluation
arXiv:2601.06329v1 Announce Type: new Abstract: Generative spoken language models pretrained on large-scale raw audio can continue a speech prompt with appropriate content while preserving attributes like speaker and emotion, serving as foundation models for spoken dialogue. In prior literature, these models are often evaluated using ``global token perplexity'', which directly applies the text perplexity formulation to speech tokens. However, this practice overlooks fundamental differences between speech and text modalities, possibly leading to an underestimation of the speech characteristics. In this work, we propose a variety of likelihood- and generative-based evaluation methods that serve in place of naive global token perplexity. We demonstrate that the proposed evaluations more faithfully reflect perceived generation quality, as evidenced by stronger correlations with human-rated mean opinion scores (MOS). When assessed under the new metrics, the relative performance landscape of spoken language models is reshaped, revealing a significantly reduced gap between the best-performing model and the human topline. Together, these results suggest that appropriate evaluation is critical for accurately assessing progress in spoken language modeling.
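For context, the ``global token perplexity'' the paper critiques is the direct transplant of text perplexity to speech tokens: the exponential of the mean negative log-likelihood over all tokens in a sequence. A minimal sketch (the token log-probabilities here are synthetic, not model outputs):

```python
import math

def token_perplexity(logprobs):
    """Global token perplexity: exp of the mean negative log-likelihood.

    A perplexity of V means the model is, on average, as uncertain as a
    uniform choice over V symbols. Applied naively to speech tokens, this
    single number ignores modality differences the paper highlights.
    """
    nll = -sum(logprobs) / len(logprobs)
    return math.exp(nll)

# Uniform distribution over a 100-symbol vocabulary -> perplexity 100.
print(round(token_perplexity([math.log(1 / 100)] * 10), 6))  # 100.0
```

The paper's point is that a low value of this quantity need not track perceived generation quality, which motivates its likelihood- and generative-based alternatives validated against human MOS ratings.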
https://arxiv.org/abs/2601.06329
Academic Papers
svg
6e534eddd060b0f66b1e2aadf3396d613afdb86ba5475aaa4a99f40b8e00801a
2026-01-13T00:00:00-05:00
Rethinking Inter-Process Communication with Memory Operation Offloading
arXiv:2601.06331v1 Announce Type: new Abstract: As multimodal and AI-driven services exchange hundreds of megabytes per request, existing IPC runtimes spend a growing share of CPU cycles on memory copies. Although both hardware and software mechanisms are exploring memory offloading, current IPC stacks lack a unified runtime model to coordinate them effectively. This paper presents a unified IPC runtime suite that integrates both hardware- and software-based memory offloading into shared-memory communication. The system characterizes the interaction between offload strategies and IPC execution, including synchronization, cache visibility, and concurrency, and introduces multiple IPC modes that balance throughput, latency, and CPU efficiency. Through asynchronous pipelining, selective cache injection, and hybrid coordination, the system turns offloading from a device-specific feature into a general system capability. Evaluations on real-world workloads show instruction count reductions of up to 22%, throughput improvements of up to 2.1x, and latency reductions of up to 72%, demonstrating that coordinated IPC offloading can deliver tangible end-to-end efficiency gains in modern data-intensive systems.
https://arxiv.org/abs/2601.06331
Academic Papers
svg
bb66294b71cfc725af67bee23319d4cabcab84fbb39babf35d5bbb732554eb2b
2026-01-13T00:00:00-05:00
Kolmogorov-Arnold Networks-Based Tolerance-Aware Manufacturability Assessment Integrating Design-for-Manufacturing Principles
arXiv:2601.06334v1 Announce Type: new Abstract: Manufacturability assessment is a critical step in bridging the persistent gap between design and production. While artificial intelligence (AI) has been widely applied to this task, most existing frameworks rely on geometry-driven methods that require extensive preprocessing, suffer from information loss, and offer limited interpretability. This study proposes a methodology that evaluates manufacturability directly from parametric design features, enabling explicit incorporation of dimensional tolerances without requiring computer-aided design (CAD) processing. The approach employs Kolmogorov-Arnold Networks (KANs) to learn functional relationships between design parameters, tolerances, and manufacturability outcomes. A synthetic dataset of 300,000 labeled designs is generated to evaluate performance across three representative scenarios: hole drilling, pocket milling, and combined drilling-milling, while accounting for machining constraints and design-for-manufacturing (DFM) rules. Benchmarking against fourteen machine learning (ML) and deep learning (DL) models shows that KAN achieves the highest performance in all scenarios, with AUC values of 0.9919 for drilling, 0.9841 for milling, and 0.9406 for the combined case. The proposed framework provides high interpretability through spline-based functional visualizations and latent-space projections, enabling identification of the design and tolerance parameters that most strongly influence manufacturability. An industrial case study further demonstrates how the framework enables iterative, parameter-level design modifications that transform a non-manufacturable component into a manufacturable one.
https://arxiv.org/abs/2601.06334
Academic Papers
svg